
AI Product Packshots with Airtable — No Designer Needed

Professional ecommerce product photography used to mean expensive shoots and long lead times. This tutorial shows you how to generate polished, on-brand packshots from simple base images using OpenAI image generation, Airtable, and Make — in minutes, not days.

YouTube · Beginner to Intermediate · 11 min read · Apr 20, 2026

Hiring a photographer, booking a studio, waiting for retouching, and paying licensing fees — traditional ecommerce product photography is slow and expensive. For a small DTC brand or a growing Shopify store, that cost adds up fast, and catalog updates become a bottleneck every time you launch a new colorway or seasonal variant.

There is a faster path. By combining Airtable as a product database with OpenAI's image generation model and Make as the automation layer, you can turn a plain product photo into a professional packshot in any environment you choose — graffiti urban backdrops, lifestyle scenes, studio corner displays — all without leaving a browser tab.

This tutorial walks through the exact system shown in the video below, built and tested by the team at Business Automated. It is aimed at ecommerce operators, marketing managers, and Airtable consultants who want to produce more visual content without scaling headcount.

Video Tutorial

How AI Image Models Generate Product Packshots

OpenAI's image generation model — used here through the GPT-Image-1 API — does not simply swap backgrounds. It reads the full product image, understands the object's shape and texture, and then re-renders the product inside a new scene that matches your written prompt. When you also supply a reference image, the model uses that composition as a visual guide, matching framing, lighting angle, and color palette.

This two-input approach — a base product image plus a reference image — is what makes the output feel coherent rather than obviously artificial. The model blends the product into the environment rather than cutting and pasting it. The result is a packshot that looks like it was shot on location, even when the "location" only exists as a text description.

There are genuine limitations to be aware of. Fine text and logos embedded in products can be rendered imperfectly, particularly at smaller sizes. Running an edit pass with a targeted correction prompt (for example, "improve the brand logo legibility") usually closes the gap without needing to start from scratch. For context on where AI fits in a broader automation strategy, see our guide to AI automation for business.

Why Airtable Is the Ideal Workspace

Airtable sits at the center of this system for one practical reason: it combines a relational database with a visual interface and a native automation layer, all in one tool. You can store your product catalog, your prompt library, and your generated image archive in a single base — and you can build the buttons and status-driven triggers that kick off Make scenarios without writing a line of code.

This is the same architectural principle behind our Frame.io, Make, and Airtable content production workflow, where Airtable acts as the single source of truth while external APIs handle the heavy lifting. The pattern scales: once you understand how status fields drive automation, you can apply it to any tool in your stack. For more patterns, see our Airtable automation examples and our overview of types of Airtable AI agents.

If you are new to Airtable as a platform, the short version is this: think of it as a spreadsheet that knows how to talk to APIs.

Step-by-Step: Base Setup, Prompts, and Automations

The Airtable base for this workflow has three tables.

Products table holds one record per SKU. Each record contains the product name, the base image or images (uploaded as an attachment), and two operational fields: a linked field for selected prompts and a single-select Status field with values like "Trigger Images" and "Done."

Prompts table stores reusable generation instructions. Each record has a short name (for example, "Urban Graffiti Background"), a long-text field with the full prompt description, and an optional attachment field for a reference image. Separating prompts from products means you write a prompt once and apply it to your entire catalog.

Images table is where finished packshots land. Each record links back to its parent product, stores the generated image as an attachment, and includes a field for an optional correction prompt that drives the edit automation.
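To make the relationships concrete, here is the three-table structure expressed as plain Python data. The field names and types are assumptions that mirror the descriptions above, not an exported Airtable schema; adjust them to match your own base.

```python
# Sketch of the base structure. Field names are illustrative assumptions.
BASE_SCHEMA = {
    "Products": {
        "Name": "single line text",
        "Base Images": "attachment",
        "Selected Prompts": "link to Prompts",
        "Status": ["Default", "Trigger Images", "Done"],  # single select
    },
    "Prompts": {
        "Name": "single line text",        # e.g. "Urban Graffiti Background"
        "Prompt Text": "long text",
        "Reference Image": "attachment",   # optional style target
    },
    "Images": {
        "Product": "link to Products",
        "Generated Image": "attachment",
        "Correction Prompt": "long text",  # drives the edit automation
    },
}

def linked_tables(schema):
    """Return which tables each table links to, based on 'link to X' fields."""
    links = {}
    for table, fields in schema.items():
        links[table] = [
            value.split("link to ")[1]
            for value in fields.values()
            if isinstance(value, str) and value.startswith("link to ")
        ]
    return links
```

The key design point is that Images links back to Products while Prompts stands alone, which is what lets one prompt be reused across the entire catalog.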

The Airtable interface — the front-end layer built on top of the base — exposes this data through two views. In the product grid view, you select one or more product records, choose one or more prompts from a linked-field picker, and press "Trigger Images." A status change from the default to "Trigger Images" fires a Make webhook. In the individual record view, you can do the same thing from inside a single product's expanded record.

For the edit flow, a separate button inside each image record reads the correction prompt you type, sends the existing image plus the correction instruction to the OpenAI edit endpoint, and writes the refined packshot back to Airtable — prepended in front of the previous version so you always have a history.
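As a rough sketch of what that edit button drives, assuming the OpenAI Python SDK and the `gpt-image-1` model (check the current API reference for exact parameters), the logic looks like this:

```python
import base64

def run_edit_pass(image_path: str, correction_prompt: str) -> bytes:
    """Send an existing packshot plus a targeted correction prompt to the
    OpenAI images edit endpoint and return the refined image bytes."""
    from openai import OpenAI  # pip install openai; needs OPENAI_API_KEY set
    client = OpenAI()
    with open(image_path, "rb") as f:
        result = client.images.edit(
            model="gpt-image-1",
            image=f,
            prompt=correction_prompt,  # e.g. "Improve the logo legibility"
        )
    return base64.b64decode(result.data[0].b64_json)

def prepend_attachment(existing: list[dict], new_url: str) -> list[dict]:
    """Insert the refined image first so older versions remain as history."""
    return [{"url": new_url}] + existing
```

The `prepend_attachment` helper reflects the versioning behavior described above: the newest image goes first in the attachment field, and earlier generations are preserved behind it.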

The Make scenarios for both generation and editing are available to download from the video description linked above.

Prompt Engineering for Consistent Brand Shots

The quality of your output is determined almost entirely by the quality of your prompts. Here is a prompt structure that reliably produces usable packshots:

Create a lifestyle product image. Place the product in an urban environment
with a graffiti-covered concrete wall as the background. The lighting should
be natural afternoon light coming from the left. Maintain the product's
original colors exactly. The product should occupy roughly 60% of the frame,
centered horizontally and slightly below center vertically. Do not alter
the product itself — only change the environment around it.

A few principles to keep in mind:

  • Describe what to preserve, not just what to change. Telling the model to "maintain original colors" and "do not alter the product" reduces unwanted modifications.
  • Specify composition explicitly. "Occupying 60% of the frame" gives the model a concrete instruction instead of leaving composition to chance.
  • Use reference images for style consistency. If you have a packshot that captures the exact lighting and framing you want, upload it to the Prompts table as a reference. The model will treat it as a style target rather than a content source.
  • Keep correction prompts surgical. When editing a generated image, narrow the instruction to the specific problem: "Improve the logo legibility on the side of the shoe" rather than a broad re-description of the whole scene.

You can store multiple prompt variants in the Prompts table — lifestyle, studio, flat-lay, contextual use — and apply them selectively per product or run them all at once to generate a full set of assets in a single batch.
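One way to keep those variants consistent is to generate them from a shared template. The template and placeholder values below are hypothetical, modeled on the prompt structure shown above; the point is that composition and preservation rules stay fixed while only the scene changes.

```python
# Hypothetical prompt template mirroring the structure shown earlier.
TEMPLATE = (
    "Create a lifestyle product image. Place the product in {environment}. "
    "The lighting should be {lighting}. Maintain the product's original "
    "colors exactly. The product should occupy roughly {frame_pct}% of the "
    "frame, centered horizontally and slightly below center vertically. "
    "Do not alter the product itself, only change the environment around it."
)

VARIANTS = {
    "Urban Graffiti Background": {
        "environment": "an urban environment with a graffiti-covered "
                       "concrete wall as the background",
        "lighting": "natural afternoon light coming from the left",
        "frame_pct": 60,
    },
    "Studio Corner Display": {
        "environment": "a minimalist studio corner with a seamless backdrop",
        "lighting": "soft, even studio lighting",
        "frame_pct": 70,
    },
}

def render_prompts(template: str, variants: dict) -> dict:
    """Expand every variant into a full prompt, ready for the Prompts table."""
    return {name: template.format(**params) for name, params in variants.items()}
```

Each rendered prompt can then be pasted into its own record in the Prompts table, so the batch run applies all of them to a product at once.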

Handling Base Images and Scenarios

When Make receives the webhook trigger, the generation scenario follows this sequence:

  1. Retrieve the product record from Airtable using the record ID passed by the webhook.
  2. Download all base images attached to that product record.
  3. Aggregate the base images into a single data structure for the OpenAI API.
  4. Iterate over each selected prompt. For each prompt, retrieve the prompt text and download any reference images attached to that prompt record.
  5. Call the OpenAI edit images endpoint with the combined array of base images and reference images, the prompt text, and the desired output size and quality settings.
  6. Receive the generated image as a binary buffer.
  7. Upload the image to Google Drive and set the file to publicly accessible so Airtable can read the URL.
  8. Create a new record in the Images table linking back to the product.
  9. Update the product record status to "Done" and clear the selected prompts field.
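The nine steps above can be sketched as a single Python function. The `airtable` and `drive` objects here are hypothetical adapters standing in for the Make modules, and the `images.edit` parameters are assumptions to verify against the current OpenAI API reference; only the overall sequence mirrors the scenario exactly.

```python
import base64

def generate_packshots(record_id: str, airtable, drive, openai_client):
    """Sketch of the Make generation scenario. `airtable` and `drive` are
    hypothetical adapters; the flow follows the nine steps described above."""
    # 1-3. Retrieve the product record and download all of its base images.
    product = airtable.get("Products", record_id)
    base_images = [airtable.download(a) for a in product["Base Images"]]

    # 4. Iterate over each selected prompt and fetch its reference images.
    for prompt_id in product["Selected Prompts"]:
        prompt = airtable.get("Prompts", prompt_id)
        refs = [airtable.download(a) for a in prompt.get("Reference Image", [])]

        # 5-6. The edit endpoint accepts an image array, so base images and
        # reference images travel together in one call.
        result = openai_client.images.edit(
            model="gpt-image-1",
            image=base_images + refs,
            prompt=prompt["Prompt Text"],
            size="1024x1024",
            quality="high",
        )
        image_bytes = base64.b64decode(result.data[0].b64_json)

        # 7-8. Host the file publicly, then create the Images record from its URL.
        url = drive.upload_public(image_bytes)
        airtable.create("Images", {"Product": [record_id],
                                   "Generated Image": [{"url": url}]})

    # 9. Mark the product done and clear the prompt selection.
    airtable.update("Products", record_id,
                    {"Status": "Done", "Selected Prompts": []})
```

The Google Drive hop in steps 7 and 8 exists because Airtable ingests attachments from a URL, so the generated bytes need a publicly readable location before the Images record is created.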

The edit scenario is a shorter version of the same logic: it reads the existing image from the Images record, downloads it, calls the OpenAI edit endpoint with the correction prompt, uploads the result to Google Drive, and writes the new image back to the Images record prepended in front of the previous versions.

One notable design choice: the generation scenario uses the OpenAI "edit images" endpoint even for net-new generations. This is because the edit endpoint accepts an input image array, which allows you to pass both the product base image and the reference image in a single API call — something the standard generation endpoint does not support directly.

Business Use Cases

DTC brands on Shopify. A small brand selling footwear, apparel, or accessories can maintain a library of prompts — seasonal backgrounds, lifestyle contexts, studio styles — and regenerate its entire catalog in a new visual treatment whenever the season changes. What previously required a full shoot can now be executed in an afternoon. Explore how this fits into a broader ecommerce automation strategy or see our Shopify tool guide.

Marketplace sellers. Amazon, Etsy, and other marketplace platforms have strict image requirements but allow multiple secondary images. AI-generated lifestyle and contextual packshots fill those secondary slots with compelling visuals at a fraction of the cost of traditional photography.

Marketing teams running paid social. Ad creative requires constant variation — different backgrounds, different moods, different aspect ratios for different placements. A system like this lets a marketing team generate dozens of creative variants from a single product photo and test them without waiting on a designer. Pair this with Make automation scheduling and you have a full creative pipeline.

Creative and automation agencies. If you manage visual content for multiple clients, this base structure is easily cloneable. Each client gets their own Products and Prompts tables. You write the prompts once, the client uploads their base images, and you deliver packshot batches on demand. Our Make automation agency team uses this type of system to run high-volume content production for ecommerce clients. See also ChatGPT as a tool in your stack for how AI assists across the workflow.

When to Hire Help

This tutorial covers everything you need to build the system yourself. That said, some situations benefit from outside expertise:

  • Your product catalog has hundreds of SKUs and you need batch processing logic, error handling, and retry flows built into the Make scenarios from day one.
  • You want to integrate the packshot system with your existing Shopify product update workflow, so approved images are pushed directly to listings.
  • Your team needs custom prompt templates tuned specifically to your brand guidelines and product category.
  • You want to add a client-facing approval step — for example, using an Airtable interface or a Softr portal — before images go live.

If any of those describe your situation, our Make automation agency and Airtable consulting team can scope and build a production-ready version of this system for you.

Next Steps

Once your packshot generation workflow is running, there are natural extensions worth exploring:

  • Connect generated images to your Shopify product listings automatically using Make — when a packshot status changes to "Approved," push it to the product's media gallery via the Shopify API.
  • Add a scoring or selection step inside Airtable where your team rates each generated image before it moves to an approved state.
  • Explore other Airtable automation examples to see how the same status-field-as-trigger pattern applies to content calendars, client onboarding, and reporting workflows.
  • Read our guide to AI automation for business to understand where image generation fits in a broader automation roadmap.
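For the Shopify extension in particular, the push can be a single authenticated POST. This sketch uses only the standard library; the API version string is an assumption, so check it against the versions your store supports.

```python
import json
import urllib.request

def shopify_image_endpoint(shop: str, product_id: int,
                           api_version: str = "2024-01") -> str:
    """Build the REST Admin API URL for adding an image to a product."""
    return (f"https://{shop}.myshopify.com/admin/api/{api_version}"
            f"/products/{product_id}/images.json")

def push_image_to_shopify(shop: str, token: str,
                          product_id: int, image_url: str) -> dict:
    """POST an approved packshot URL into a product's media gallery.
    `token` is a Shopify Admin API access token."""
    payload = json.dumps({"image": {"src": image_url}}).encode()
    request = urllib.request.Request(
        shopify_image_endpoint(shop, product_id),
        data=payload,
        headers={"X-Shopify-Access-Token": token,
                 "Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(request) as response:
        return json.loads(response.read())
```

In the Make version of this step, the same call would be a single HTTP or Shopify module triggered by the "Approved" status change.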

The core insight here is straightforward: AI image generation is most useful when it is embedded in a system — not used as a standalone tool. Airtable gives you the structure, Make gives you the connectivity, and OpenAI gives you the creative engine. Together, they replace a significant portion of the time and cost that goes into keeping a product catalog visually fresh.

