API loops and programmatic AI
Move beyond copy-paste. Learn to call AI from code and process hundreds or thousands of records automatically.
This pathway builds the fundamental skill behind serious productivity gains: calling AI programmatically.
When you have 500 records to classify, 300 documents to summarise, or 1,000 rows to enrich, copy-pasting into Claude is... a nightmare. At that scale you need to write code that loops through your data, calls the AI API for each item, and saves the results.
The pathway starts with the simplest possible pattern - a Python script that processes a CSV file row by row. Then we move on to document processing and more sophisticated classification tasks. By the end, you should be comfortable writing scripts that process any amount of data, and you'll have the foundations to dig deeper into whatever your particular challenge demands.
You do not need to be a developer, but you'll need to get comfortable with the idea of running Python scripts and using Google Colab or similar environments.
Before you start
- Basic Python skills (variables, loops, functions)
- Familiarity with Google Colab or Jupyter notebooks
- An OpenAI or Anthropic API key with some credit
- Some data you want to process (CSV files, documents)
What you will achieve
- Process hundreds or thousands of records using AI
- Write reusable scripts for common data tasks
- Understand the costs and trade-offs of API-based AI
- Be ready to build more sophisticated AI applications
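On costs: API pricing is per token, so a back-of-envelope estimate before a big run takes a few lines of arithmetic. The prices below are illustrative placeholders, not real rates - check your provider's current pricing page before budgeting an actual job:

```python
# Back-of-envelope cost estimate for a batch API job.
# These per-token prices are ASSUMED placeholder values for illustration --
# real rates vary by provider and model.
PRICE_PER_1M_INPUT = 3.00    # USD per million input tokens (assumed)
PRICE_PER_1M_OUTPUT = 15.00  # USD per million output tokens (assumed)

def estimate_cost(n_records, input_tokens_each, output_tokens_each):
    """Rough cost in USD for running n_records through the API."""
    total_in = n_records * input_tokens_each
    total_out = n_records * output_tokens_each
    return (total_in / 1_000_000) * PRICE_PER_1M_INPUT + \
           (total_out / 1_000_000) * PRICE_PER_1M_OUTPUT

# e.g. 1,000 records at roughly 500 input and 100 output tokens each
print(f"${estimate_cost(1000, 500, 100):.2f}")  # → $3.00
```

The useful habit here is the estimate itself, not the exact numbers: multiply records by tokens, divide by a million, multiply by the price.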
Learning path
The basic loop
The fundamental pattern: read CSV, call API, save results. This is the foundation for everything else.
Estimated time: 2-3 hours
After this stage, you will be able to:
- Call OpenAI or Claude API from Python
- Process a CSV file row by row
- Handle API errors and rate limits gracefully
- Save enriched data back to CSV
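The whole stage boils down to one loop shape. This sketch uses a placeholder `call_model` function so it runs without a network connection - in a real script you would replace its body with your provider's SDK call, and the surrounding loop stays the same:

```python
import csv
import io
import time

def call_model(prompt: str) -> str:
    # Placeholder "model" so the sketch is self-contained. In real code,
    # this is where the OpenAI or Anthropic API call goes.
    return "positive" if "great" in prompt.lower() else "negative"

def process_csv(infile, outfile, text_column="review"):
    """Read rows, call the model on each, write results to a new column."""
    reader = csv.DictReader(infile)
    writer = csv.DictWriter(outfile, fieldnames=reader.fieldnames + ["sentiment"])
    writer.writeheader()
    for row in reader:
        prompt = f"Classify the sentiment of this review: {row[text_column]}"
        try:
            row["sentiment"] = call_model(prompt)
        except Exception as exc:   # don't let one bad row kill the whole run
            row["sentiment"] = f"ERROR: {exc}"
        writer.writerow(row)
        time.sleep(0)  # in real use, pause here to respect rate limits

# Demo with in-memory "files" standing in for real CSVs on disk
src = io.StringIO("review\nGreat product\nArrived broken\n")
dst = io.StringIO()
process_csv(src, dst)
print(dst.getvalue())
```

Two details matter more than they look: catching exceptions per row (so a run of 500 records doesn't die at record 499) and sleeping between calls to stay under rate limits.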
Document processing
Process PDFs, Word documents, and other files in bulk.
Estimated time: 3-4 hours
After this stage, you will be able to:
- Extract text from PDFs and Word documents
- Process document folders automatically
- Extract structured data from unstructured documents
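The folder-processing part of this stage looks like the sketch below. To keep it self-contained it only reads plain-text files; the commented-out branch shows where a PDF extractor such as pypdf would slot in (an assumed extra dependency, not part of this example):

```python
from pathlib import Path
import tempfile

def extract_text(path: Path) -> str:
    """Extract text from one document; raise on unsupported types."""
    if path.suffix == ".txt":
        return path.read_text(encoding="utf-8")
    # elif path.suffix == ".pdf":
    #     from pypdf import PdfReader   # assumed dependency, not used here
    #     return "".join(p.extract_text() or "" for p in PdfReader(path).pages)
    raise ValueError(f"unsupported file type: {path.suffix}")

def process_folder(folder: Path) -> dict:
    """Map each filename to its extracted text, skipping unsupported files."""
    results = {}
    for path in sorted(folder.iterdir()):
        try:
            results[path.name] = extract_text(path)
        except ValueError:
            continue  # log and move on rather than crashing mid-batch
    return results

# Demo with throwaway .txt files standing in for real documents
tmp = Path(tempfile.mkdtemp())
(tmp / "a.txt").write_text("First contract.", encoding="utf-8")
(tmp / "b.txt").write_text("Second contract.", encoding="utf-8")
docs = process_folder(tmp)
print(docs["a.txt"])  # → First contract.
```

Once each document is reduced to a string, feeding it through the same loop pattern as the CSV stage is straightforward.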
Classification at scale
Categorise, tag, and route records using AI - the most common production use case.
Estimated time: 3-4 hours
After this stage, you will be able to:
- Build classification pipelines that run unattended
- Achieve consistent categorisation across large datasets
- Reduce manual data cleaning work significantly
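One pattern that makes unattended classification trustworthy: fix the label set up front, ask the model for JSON, and route anything unexpected to human review rather than accepting it silently. As in the earlier sketches, `call_model` is a stand-in for a real API call:

```python
import json

# Fixed label set the pipeline is allowed to emit.
ALLOWED_LABELS = {"billing", "technical", "account", "other"}

def call_model(prompt: str) -> str:
    # Placeholder for the API; real code would instruct the model to
    # answer strictly as JSON like {"label": "..."}.
    return json.dumps({"label": "billing" if "invoice" in prompt else "technical"})

def classify(ticket: str) -> str:
    """Classify one record, flagging anything off-schema for review."""
    raw = call_model(f"Categorise this support ticket: {ticket}")
    try:
        label = json.loads(raw).get("label", "")
    except json.JSONDecodeError:
        return "NEEDS_REVIEW"   # model returned malformed JSON
    return label if label in ALLOWED_LABELS else "NEEDS_REVIEW"

tickets = ["Where is my invoice?", "The app crashes on login"]
labels = [classify(t) for t in tickets]
print(labels)  # → ['billing', 'technical']
```

Validating against `ALLOWED_LABELS` is what gives you consistent categorisation: the model can phrase things however it likes, but only labels in your schema ever reach your dataset.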
Production patterns
Make your scripts robust enough for regular use.
Estimated time: 2-3 hours
After this stage, you will be able to:
- Build quality checks into your pipelines
- Handle edge cases and failures gracefully
- Monitor AI outputs for drift or problems
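The workhorse of this stage is a retry wrapper with exponential backoff, since APIs fail transiently under load. In this sketch `flaky_call` simulates an API that fails twice before succeeding, and the delays are shortened so the demo runs instantly; in production you would wrap your real API call and use delays on the order of seconds:

```python
import time

attempts = {"n": 0}

def flaky_call():
    # Simulated API that fails twice, then succeeds -- stands in for
    # the real call you would wrap in production.
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise ConnectionError("simulated transient failure")
    return "ok"

def with_retries(fn, max_tries=5, base_delay=0.01):
    """Call fn, retrying on exceptions with exponential backoff."""
    for attempt in range(max_tries):
        try:
            return fn()
        except Exception:
            if attempt == max_tries - 1:
                raise                    # give up after the final attempt
            time.sleep(base_delay * (2 ** attempt))  # 1x, 2x, 4x, ...

result = with_retries(flaky_call)
print(result, attempts["n"])  # → ok 3
```

The same wrapper gives you a natural place to hang quality checks and monitoring: log every failure and every retried call, and you can spot drift or a degrading endpoint before it corrupts a batch.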