What AI Composite Imagery Looks Like in Advertising Right Now

Two AI-driven ads went viral in the last twelve months for very different reasons.

The first was Kalshi's NBA Finals spot, made by filmmaker PJ Accetturo for around $2,000 in under 48 hours, using a stack of generative tools and roughly 300 to 400 prompt iterations to assemble fifteen usable clips. The ad earned twenty million impressions and became a reference point for what a fully AI-generated commercial can look like at speed and scale.

The second was H&M's rollout of thirty AI digital twins of real models, used across ecommerce, social, and advertising, with consent and clear AI labelling. That campaign was not "made with AI" in the same sense; it was a production system in which an AI layer sat alongside photography, retouching, brand colour control, and editorial direction.

Both are useful examples. They are also doing different things. Anyone evaluating a studio for "AI composite imagery for advertising" needs to understand which of those two models the brief actually calls for, because the studios that deliver one are not always the studios that deliver the other.

What "AI composite imagery" actually means

In advertising production, AI composite imagery is not a synonym for AI image generation. It refers to a specific kind of build: a final frame in which AI-generated content is one layer in a multi-layer composite alongside photographed product, CGI, and manual retouching.

A typical structure looks something like this. A photographed product plate forms the base because the brand will not allow synthetic packaging or label text. A CGI element provides a piece the camera could not capture: a cutaway, a transparent variant, a multiplied product wall. An AI-generated environment fills in the world around it: a kitchen, a street, a hand reaching for the bottle. A retouching pass unifies the lighting and colour across all three sources. A final finishing pass cleans edges, adjusts brand colour, and prepares delivery formats for the channels.
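The layer stack described above can be sketched in a few lines of Python. This is a minimal illustration, not a production pipeline: it assumes Pillow is available, uses synthetic stand-in images in place of real photographed, CGI, and AI-generated assets, and collapses the retouch and finishing passes into a single crude colour adjustment. All names are illustrative.

```python
from PIL import Image, ImageEnhance

def build_composite(ai_environment: Image.Image,
                    product_plate: Image.Image,
                    cgi_insert: Image.Image) -> Image.Image:
    """Stack the three source layers bottom-up, then run a finishing pass."""
    # AI-generated environment forms the backdrop.
    frame = ai_environment.convert("RGBA").copy()
    # Photographed product plate composited via its own alpha,
    # so label pixels come from the camera, not the model.
    frame.alpha_composite(product_plate.convert("RGBA"))
    # CGI element (e.g. a cutaway) layered on top.
    frame.alpha_composite(cgi_insert.convert("RGBA"))
    # Crude stand-in for the retouch/finishing pass: a global
    # saturation tweak to unify colour across the three sources.
    return ImageEnhance.Color(frame.convert("RGB")).enhance(1.05)

# Synthetic stand-in layers so the sketch runs without real assets.
env = Image.new("RGBA", (640, 480), (120, 160, 200, 255))   # "AI environment"
plate = Image.new("RGBA", (640, 480), (0, 0, 0, 0))         # transparent plate
plate.paste((200, 40, 40, 255), (240, 160, 400, 400))       # "photographed product"
cgi = Image.new("RGBA", (640, 480), (0, 0, 0, 0))
cgi.paste((240, 240, 240, 255), (60, 60, 160, 160))         # "CGI insert"

final = build_composite(env, plate, cgi)
```

The point of the sketch is the ordering: the synthetic layer is underneath, and the photographed product sits on top of it untouched, which is what makes the brand constraints in the next paragraph workable.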

This is closer to traditional advertising compositing than to pure AI generation. The reason is straightforward: brand work has constraints AI generation alone cannot meet. The product label has to be exact. The brand colour has to match the print swatch. The packaging copy has to be legible. The hands in frame need to look like hands. None of those things are reliably handled by a model on its own. They are handled by a workflow in which AI plays a defined role.
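The brand-colour constraint in particular is checkable. A hypothetical QC step might sample the delivered frame and compare it against the reference swatch, rejecting the asset if the difference exceeds a tolerance. The sketch below uses plain Euclidean distance in RGB for brevity; real print-matching workflows would convert to CIELAB and use a Delta E metric, and the swatch value and tolerance here are invented for illustration.

```python
import math

def rgb_distance(a: tuple, b: tuple) -> float:
    """Euclidean distance between two RGB triples.
    Production QC would use Delta E in CIELAB instead."""
    return math.dist(a, b)

BRAND_RED = (218, 41, 28)   # hypothetical reference swatch
TOLERANCE = 8.0             # hypothetical acceptance threshold

sampled = (220, 44, 30)     # colour sampled from the delivered frame
ok = rgb_distance(BRAND_RED, sampled) <= TOLERANCE
```

A check like this is trivial to automate, which is part of why the photographed plate stays the source of truth for brand colour: it gives the finishing pass a fixed target to verify against.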

Where AI is now reliable in the composite stack, and where it isn't

The reliable layers, in mid-2026, are concept and ideation, where AI generation accelerates the pre-production cycle by hours or days; environments and backgrounds, where models like Imagen 4 produce convincing daylight, foliage, water, and architectural detail; lighting variations, where image-to-image workflows on Flux 2 Pro can re-light an existing photographed plate at low cost; and persona reference libraries, where a single character can be generated from multiple angles and ages, then reused across a campaign.

The unreliable layers are the ones that interact with the brand directly. AI models in 2026 still drift on packaging text and small product details, even with the recent improvements in text rendering. Translucent and refractive materials — glass, gemstones, certain liquids — are still an area where models hallucinate light behaviour. Hands holding products fail in subtle ways that are visible to consumers even when they cannot quite name what is wrong. Fabric drape and stitching on apparel still requires human direction. None of this means AI cannot be used here; it means the AI layer is not the final layer.

This is the part of the workflow that most distinguishes a studio that can deliver AI composite imagery for advertising from one that posts AI experiments. The deliverable is not the AI render. The deliverable is what the AI render becomes after compositing and finishing.

What recent brand campaigns actually did

A short tour of campaigns from the last eighteen months helps clarify what brands are commissioning under the "AI composite" heading.

Heinz integrated AI-generated imagery into a brand refresh aimed at younger consumers, layering it across packaging, digital ads, and social content. The work used AI as a generative input inside a controlled brand system, not as the final delivered asset. Nutella took the model further at scale: seven million uniquely designed jar labels, generated using AI systems and produced as physical packaging, sold out as a collectible run. The AI was a generative engine; production discipline did the rest.

H&M's thirty digital twins, developed with AI firm Uncut, gave the brand a way to keep models in a campaign even when the original was booked elsewhere, with full consent and licensing. Unigloves used Midjourney and Adobe Firefly to generate 250 lifelike images of its product in use across five professions, cutting design time by roughly 57% and avoiding multiple shoots. Zalando integrated AI-generated models into product photography to produce localised visuals across markets, reportedly cutting production time by 60% and lifting engagement in localised markets by 14%.

The pattern is consistent. None of these are pure-AI campaigns. All of them use AI as a layer in a larger production system that includes photography, CGI, retouching, and brand discipline. That is the work the search "studios that specialise in AI composite imagery for advertising" is actually pointing at.

What to look for in a studio that delivers this work

The most useful filter is the same as for any post-production agency working with AI: ask to see the layers, not just the final frame. A studio that can show the photographed plate, the AI-generated layer, the CGI insert, and the finished composite for the same image has actually built work this way. A studio that shows only the final frame has either generated single-pass images or is hiding the workflow.

A second marker is brand colour and packaging fidelity in the portfolio. AI composite work where the product label is exact and the brand colour matches reference is harder to produce than work where the product is fictional or stylised. Studios that have done it for real clients will show it; studios that have not will not.

A third is honesty about the AI layer. A studio willing to tell you which parts of an image came from AI, which from photography, and which from CGI is operating with the kind of process discipline brand work requires. The Dove "Keep Beauty Real" pledge and prompt playbook released in 2024 and 2025 set a useful direction here: brands are increasingly going to want studios that can explain and document the use of AI in delivered work.

At 35milimetre, the AI composite workflow follows the structure described above. The base plate is usually photographed, AI generation is used for environments and ideation, CGI handles whatever the camera could not, and the finish is built and reviewed manually. This is not a unique workflow; it is the common-sense version of what brand-grade AI composite work looks like in 2026.

The trick, increasingly, is not "can you generate it?" Most studios can. The trick is "can you finish it?" That is the question the brief is really asking.
