Airtable didn't just add "AI features" to its platform — it rebuilt the platform around AI. The most consequential piece of that rebuild is the field agent: an AI-powered cell that thinks about your data in the background, reacts to changes automatically, and turns every record in your base into something a little bit smarter.
Unlike ChatGPT or Claude, which wait for a human to type a prompt, field agents run on their own. They trigger when a record changes, pull data from the web or from documents, and write the result back into the cell. By the time you open the record, the work is done.
This guide covers all five functional types of field agents you can build in Airtable, with real examples from our client work at Business Automated and honest notes on where each one shines versus where it struggles. Airtable also publishes an official webinar, "Five Types of AI Agents You Can Build in Airtable"; the naming in this article aligns with that framing so you can cross-reference.
How Field Agents Actually Work
Before the five categories, a brief mental model. A field agent is an AI-powered field type inside Airtable. You configure it with:
- A source — which field or fields the agent reads from (text, attachments, URLs, linked records).
- A prompt or action — natural language instructions describing what the agent should do.
- An output format — single-select, multi-select, text, number, URL, image, or JSON.
When the source field changes, the agent recomputes the cell. The result is persistent, queryable, and behaves like any other Airtable field — you can filter on it, sort by it, use it in formulas, feed it to automations, and roll it up across linked records.
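The three configuration pieces above can be pictured as a simple record. This is an illustrative sketch of the mental model, not Airtable's actual internal schema; the key names are assumptions:

```python
# Illustrative shape of a field agent configuration.
# Key names are hypothetical -- Airtable's internal schema is not public.
field_agent = {
    "source_fields": ["Company", "Title"],  # fields the agent reads
    "prompt": "Classify this lead into one of: Enterprise, Mid-Market, SMB, Not a Fit.",
    "output_format": "single-select",       # one of the supported output types
}

# The agent recomputes whenever a source field changes; conceptually:
def should_recompute(changed_field: str, agent: dict) -> bool:
    return changed_field in agent["source_fields"]
```

The point of the sketch: recomputation is keyed to the source fields, which is why a change to an unrelated field does not burn a credit.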
Field agents cost Airtable AI credits per run. Building apps with Omni, the conversational AI builder, is free; it's the agent executions and data analysis that consume credits. We'll come back to credit cost at the end of each category.
Category 1: Classification Agents
What they do: Read one or more fields and assign a record to a category.
Classification is the simplest and most reliable type of field agent. You give the AI a set of possible categories, point it at a field (or a combination of fields), and it picks the best match.
Example 1 — Lead qualification. A single-select field called "Lead Segment" that classifies every new lead into "Enterprise," "Mid-Market," "SMB," or "Not a Fit" based on the company name and job title. When a form submission creates a new record, the field agent reads the Company and Title fields, picks a segment, and writes it back into the cell. Sales sees the segment before they open the record.
Example 2 — Ticket triage. A multi-select field called "Topics" that tags inbound support tickets with the issues they cover: billing, account access, integration, bug, feature request. One ticket can get multiple tags. Routing rules downstream use those tags to assign the ticket to the right team.
Example 3 — Content moderation. A single-select field that flags user-generated content as "Safe," "Review Required," or "Reject" based on the text. The AI handles the obvious cases; humans handle the edge cases.
Why classification works well: The output is constrained (one of N labels), which is exactly what language models are best at. Error rates are low, results are easy to audit, and when the agent gets it wrong, a human can override the cell directly.
Credit cost: Low — usually the cheapest category because the prompt is short and the output is a single short label.
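To make the constrained-output idea concrete, here is a minimal Python stand-in for the lead-qualification agent. The keyword rules below are a toy replacement for the model; in Airtable, the prompt and the single-select label list do this work:

```python
SEGMENTS = ["Enterprise", "Mid-Market", "SMB", "Not a Fit"]

def classify_lead(company: str, title: str) -> str:
    """Toy stand-in for the model: always return exactly one label from SEGMENTS."""
    text = f"{company} {title}".lower()
    if "enterprise" in text or "fortune" in text:
        return "Enterprise"
    if any(word in text for word in ("corp", "vp", "director")):
        return "Mid-Market"
    if any(word in text for word in ("llc", "founder", "owner")):
        return "SMB"
    return "Not a Fit"
```

Whatever the internals, the contract is the same one that makes classification agents reliable: the function can only ever emit one of N known labels, so downstream filters and routing rules never see a surprise value.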
Category 2: Extraction Agents
What they do: Pull structured data out of unstructured text.
Where classification picks from a fixed list, extraction reads free text and pulls out specific pieces of information. Think of it as regex, but smarter.
Example 1 — Contact details from an email signature. Feed the AI a signature block and have it return a JSON object with {name, title, company, phone, email, linkedin_url}. The agent handles messy formatting, multiple phone numbers, and weird abbreviations.
Example 2 — Pricing from a contract PDF. Point an extraction agent at an attached contract document and have it return the monthly fee, contract length, and renewal date. Instead of paralegals manually reading hundreds of contracts, the agent pre-fills the fields and flags anything it wasn't sure about.
Example 3 — Dates and deliverables from a project brief. When a brief arrives as a long text block, extract the start date, end date, key milestones, and deliverable count into separate fields. The agent bridges the gap between how humans write briefs and how Airtable stores data.
Why extraction is a game-changer: It eliminates the "someone has to type this into Airtable" step that kills adoption. Your team writes naturally — email, docs, PDFs — and the base fills itself in.
Credit cost: Medium — longer prompts and structured output cost more than classification, but the time savings usually dwarf the credit spend.
Category 3: Enrichment and Web Research Agents
What they do: Read from the web to fill in information that isn't in the base.
This is where field agents start to feel magical. Instead of just analyzing existing data, enrichment agents go out to the live internet and bring back facts, descriptions, logos, or anything else you can describe in a prompt.
Example 1 — Company enrichment from domain. Give the agent example.com and have it return a company description, headquarters location, approximate employee count, and main product category — all pulled from public web sources. This replaces expensive enrichment APIs like Clearbit or ZoomInfo for many use cases.
Example 2 — Logo and favicon fetching. Point the agent at a URL and have it return an image field populated with the company's logo. Great for CRM visual polish, directory sites, and vendor databases.
Example 3 — Competitor monitoring. Each week, an agent visits a list of competitor URLs, extracts their current pricing and feature list, and writes the result into a timestamped field. Trends are visible directly in the base without a separate scraping pipeline.
Why web research agents feel like cheating: They collapse what used to be a three-tool workflow (Airtable + scraper + enrichment API) into a single field. The tradeoff is that results can be inconsistent across runs — the web changes, and so does the agent's answer.
Credit cost: Higher — web research involves real fetches and larger context windows. Budget accordingly.
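In practice, an enrichment prompt is a template with the domain spliced in and every wanted output field spelled out explicitly. A sketch of how we structure these prompts (the wording is ours, not an Airtable default):

```python
def enrichment_prompt(domain: str) -> str:
    """Build a web-research prompt that names every field we want back."""
    return (
        f"Research the company at {domain} using public web sources. "
        "Return JSON with exactly these keys: "
        "description, hq_location, employee_count_estimate, product_category. "
        "If a value cannot be found, use null -- do not guess."
    )
```

The explicit "use null, do not guess" instruction matters most in this category: because the web changes between runs, you want missing data recorded as missing, not hallucinated.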
Category 4: Generation Agents
What they do: Create new content — text, images, translations — based on record data.
Generation is the category most people think of when they hear "AI in Airtable," but it's only one of five, and it isn't even the most useful for most businesses.
Example 1 — Personalized email drafts. A field that reads the contact's name, company, recent activity, and meeting notes, then generates a tailored follow-up email. Sales edits and sends from the field directly instead of staring at a blank page.
Example 2 — Product description rewriting. Given a dry product spec sheet, generate a marketing-friendly description in the brand's voice. One base can output descriptions in five languages at once using multiple generation fields with different prompts.
Example 3 — Image generation from a brief. For design and social teams: generate concept imagery from a text brief and attach the result directly to the record. Useful for mood boards, placeholder hero images, and A/B testing visual directions.
Example 4 — Translation. The simplest generation use case: source text in one language, target text in another. The agent handles tone, idioms, and domain-specific terminology better than most translation APIs.
Why generation requires more human oversight: Generated content is the category most likely to be wrong, verbose, or off-brand. Always treat generation agent output as a first draft — never publish it without a human review step.
Credit cost: Medium to high, especially for image generation, which can be the most expensive category per run.
Category 5: Document and Transcript Analysis Agents
What they do: Read long-form content (PDFs, audio transcripts, meeting notes) and extract insights.
The fifth category is technically a combination of extraction and classification, but it deserves its own bucket because the source material is fundamentally different. When your input is a two-hour call transcript or a fifty-page report, you need an agent that's designed for long context.
Example 1 — Meeting transcript summaries. Attach a Fathom or Otter transcript to a record and have the agent return a summary, action items, owners, and sentiment — each as separate fields you can filter and roll up.
Example 2 — Customer feedback themes. Feed hundreds of support transcripts, reviews, or survey responses through a single base. Field agents tag each one with themes, sentiment, and severity. Product teams get a real-time view of what customers are actually saying.
Example 3 — RFP response generation. When an RFP document lands, an agent reads it and extracts every question, the required format, and the deadline. A second generation agent drafts an initial answer for each question based on a knowledge base stored in another table.
Why document analysis is the biggest unlock for consulting and operations teams: It turns unstructured archives — Zoom recordings, Slack exports, email threads, PDFs — into queryable, reportable data without any manual transcription or tagging.
Credit cost: Highest, because the input can be tens of thousands of tokens. Use with intent — don't point it at every document you have, just the ones where the extracted data earns its keep.
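Because input size drives cost in this category, a cheap guard is to estimate tokens before running the agent and skip anything oversized. A rough sketch using the common ~4 characters per token heuristic (the budget number is an assumption to tune against your credit plan, not an Airtable limit):

```python
CHARS_PER_TOKEN = 4   # rough heuristic for English text
MAX_TOKENS = 50_000   # assumed per-run budget; tune to your credit allowance

def estimated_tokens(text: str) -> int:
    return len(text) // CHARS_PER_TOKEN

def should_run_agent(transcript: str) -> bool:
    """Only spend credits on documents within the token budget."""
    return estimated_tokens(transcript) <= MAX_TOKENS
```

A pre-filter like this can live in an automation condition or a formula field, so the expensive agent only ever sees documents where the extracted data earns its keep.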
How to Pick the Right Category for Your Use Case
When clients ask us where to start with Airtable AI, we ask one question: "What's the job you're trying to eliminate?"
- If someone spends their day manually tagging records, you want a classification agent.
- If someone retypes data from emails, PDFs, or forms into the base, you want an extraction agent.
- If someone Googles companies to fill in missing fields, you want an enrichment agent.
- If someone writes the same kind of content over and over from templates, you want a generation agent.
- If someone summarizes calls, documents, or feedback for internal reports, you want a document analysis agent.
Start with the most painful, most frequent manual job. That's where the AI spend pays back fastest.
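The decision guide above condenses into a simple lookup; a toy sketch of the mapping (our framing, not anything Airtable exposes):

```python
JOB_TO_AGENT = {
    "tagging records":        "classification",
    "retyping data":          "extraction",
    "googling companies":     "enrichment / web research",
    "writing from templates": "generation",
    "summarizing documents":  "document / transcript analysis",
}

def pick_agent(job: str) -> str:
    return JOB_TO_AGENT.get(job, "start with the most painful, most frequent job")
```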
Common Pitfalls
A few hard-won lessons from shipping these agents in real client bases:
- Prompt specificity matters more than model choice. Airtable picks the model for you. What you control is the prompt. Vague prompts → unreliable results.
- Always include an output format in the prompt. "Return a single word from this list: Enterprise, Mid-Market, SMB, Not a Fit." Not "categorize this lead."
- Set up a "review required" flag. Use a formula or a second agent to flag low-confidence results for human review instead of trusting every cell blindly.
- Watch your credits. A field agent on a table with 100,000 rows that triggers on every change can burn through a monthly allowance in an afternoon. Filter inputs, batch work, or run selectively.
- Version your prompts. Store the agent prompt in a dedicated text field next to the AI field. When you change the prompt, you'll want to know why the results changed.
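Two of the pitfalls above, output format and review flags, combine naturally: constrain the output in the prompt, then verify the reply actually obeys the constraint before trusting it. A sketch of that pattern (the `validate_label` helper is our convention; in Airtable the review flag would be a second field):

```python
ALLOWED = ["Enterprise", "Mid-Market", "SMB", "Not a Fit"]

# The prompt names the exact label set, per the "output format" pitfall above.
PROMPT = (
    "Classify this lead. Return a single label from this list, nothing else: "
    + ", ".join(ALLOWED)
)

def validate_label(reply: str) -> tuple[str, bool]:
    """Return (value, needs_review). Off-list replies get flagged, not trusted."""
    cleaned = reply.strip()
    if cleaned in ALLOWED:
        return cleaned, False
    return cleaned, True  # human review required
```

Models occasionally add whitespace or an off-list phrase even when told not to; the validation step turns those cases into a review queue instead of bad data.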
What Comes After Field Agents
Field agents are one layer of Airtable's AI stack. Above them sit Airtable Omni — the conversational AI builder that can create whole tables, interfaces, and automations from a prompt — and the automation-based AI actions that fire as part of a workflow. We cover the relationship between all three in our Omni vs Cobuilder vs Field Agents comparison.
The best production Airtable systems combine all three layers: Omni to scaffold the base quickly, field agents to keep records enriched in real time, and automation AI actions to react to specific events.
Ready to Build?
Field agents are the single most productive change you can make to a mature Airtable base — and the single most dangerous one if you ship them without guardrails. At Business Automated, we design AI agents into client bases every week: picking the right category, writing the prompts, setting up review loops, and monitoring credit spend so clients get the upside without the sticker shock.
If you have a team that's drowning in manual data work and wants to see what Airtable AI can actually do in your base, get in touch. We'll audit where agents would pay back fastest and show you what a good implementation looks like.