How Coca-Cola's AI Holiday Ads Changed What Brands Want From a Post-Production Partner
Coca-Cola ran two AI-generated holiday ads. The first, in 2024, was a soft re-tread of the classic "Holidays Are Coming" spot, generated with AI tools and poorly received. The second, in late 2025, was produced with Silverside AI and Secret Level and was widely panned across Campaign, Marketing Dive, MediaPost and consumer channels as "soulless", "digital slop" and, less charitably, "a commercial made by a computer that has never seen a Christmas". It became shorthand, across the industry, for why brands still need a post-production studio on AI work.
This isn't about roasting Coca-Cola. It's about what broke in the pipeline, what a trained eye would have caught, and what's changed in how brands brief AI-involved work since.
Why did viewers recoil from the ad?
Strip away the abstract takes and the actual problems are technical. The trucks were inconsistent across shots: the same truck from two angles looked like two different trucks. The animals were uncanny: polar bears with faces slightly off the reference, wolves with too many paws in at least one cut. Light continuity broke across scenes: snow lit differently from shot to shot, shadows falling against the direction of the sun. Motion didn't match the weight of the objects: trucks that glided rather than drove. And the human faces, where they appeared, sat in the "not quite right" zone: the uncanny valley that makes viewers vaguely uncomfortable without being able to name why.
Individually, any one of these can be forgiven. Stacked together, they tell the viewer something is wrong, even if they don't know what. That's the diagnosis.
What would a trained eye have caught?
Every item above is the kind of note a post-production supervisor would flag in dailies on a traditional shoot. Vehicle continuity: notes go back. Animal references: back. Light continuity: that's the whole job of a continuity supervisor. Motion weight: that's animation supervision. Faces in the uncanny valley: composite them out, replace them with plates, or adjust them in post.
The ad didn't fail because AI can't produce Christmas imagery. It failed because no one between the model and the broadcast had a quality bar they could enforce. The craft muscle that exists on a traditional production just wasn't in the room.
What changed in the briefs after the backlash?
Since the 2025 ad ran and the reaction landed, the briefs coming into us have shifted in specific ways. A few patterns worth naming:
- Clients are asking who runs quality control on the generated material before it leaves the studio. They want a named supervisor on the file, not "we'll check it".
- Continuity is now a separate line item in briefs. Ten frames of the same scene, the same character, the same light, treated as a discipline, not an afterthought.
- "Human in the loop" has stopped being a buzz phrase and become a specification. Clients are asking at what points a person reviews, what they're looking for, and whether the review can be documented.
- Reference boards are coming in fuller. Brand teams have learned that "make it feel like Christmas" is not a brief a model can execute to a professional bar. Specific references, character sheets, and visual direction boards are now standard.
The net effect: AI briefs in 2026 look more like traditional production briefs than they did in 2024. That's healthy.
What belongs in a post-production quality gate?
Based on what we now run on AI-involved work, the gate has four stages. Concept review: does the generated direction hold together at the compositional level? Craft review: are there specific artefacts (hands, faces, reflections, label geometry, the anatomy of animals and vehicles) that need manual fixing? Continuity review: does the set of images hold together as a campaign, with consistent light, colour and subject? Brand review: does the output sit within the brand's visual guardrails, or has it drifted into a look that belongs to another brand?
Each gate should have a named owner and a specific output. Without that, the review collapses into "everyone signs off on everything", which is what produced the Coca-Cola spot.
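One way to make "named owner, specific output" concrete is to treat the gate as a checklist rather than a sentiment. The sketch below models the four stages above as data; the stage names come from the article, while the owner roles and output artefacts are illustrative assumptions, not a prescribed spec.

```python
from dataclasses import dataclass, field

@dataclass
class Gate:
    """One stage of the quality gate: a named owner and a specific output."""
    name: str
    question: str              # what the reviewer is checking
    owner: str                 # a named role, never "everyone"
    output: str                # the artefact the review must produce
    passed: bool = False
    notes: list = field(default_factory=list)

# Owners and outputs here are assumptions for illustration only.
QUALITY_GATE = [
    Gate("Concept", "Does the direction hold at the compositional level?",
         owner="creative director", output="approved direction board"),
    Gate("Craft", "Any artefacts (hands, faces, reflections, anatomy) to fix?",
         owner="retouch lead", output="artefact fix list"),
    Gate("Continuity", "Do the frames hold together: light, colour, subject?",
         owner="continuity supervisor", output="continuity report"),
    Gate("Brand", "Is the output inside the brand's visual guardrails?",
         owner="brand guardian", output="sign-off memo"),
]

def ready_to_ship(gates):
    """Work leaves the studio only when every gate has passed."""
    blockers = [g.name for g in gates if not g.passed]
    return (len(blockers) == 0, blockers)
```

Until each named owner has signed off, `ready_to_ship(QUALITY_GATE)` returns the list of blocking stages; there is no state in which "everyone signs off on everything" counts as a pass.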
Why post-production studios still matter
The cynical read on AI advertising, through 2024, was that post-production studios would be displaced by prompts. The Coca-Cola episode was the moment that argument lost. What the industry learned, fast, is that the value of a post-production team was never in the specific tool; it was in the accumulated craft judgement about where an image breaks and why. A model can generate an image. It can't sit in dailies and say "the light on frame seven doesn't match frame five, fix it before the client sees it."
That judgement sits in the same team that used to retouch photographs. The tools changed; the work didn't.
How 35milimetre applies this
On the AI-involved campaigns we run, the quality gate is part of the delivery spec, not an informal step. We break a campaign into a shot list the same way we would for a photographic shoot: character sheets, reference boards, continuity plan. We run generation in passes rather than one-shot. We composite, retouch and colour-grade to unify the set. And we flag the frames the client should look at hardest, rather than signing off on everything as equivalent. It takes longer than "generate and deliver". It also doesn't end up in a Marketing Dive takedown.
Closing takeaway
Coca-Cola's AI holiday ads ended up being an expensive lesson for the industry: a useful one, paid for by somebody else. The brands moving quickest on AI in 2026 are the ones who treated that episode as a spec change, not a reason to avoid AI altogether.
If you're planning an AI-involved campaign and want the quality gate designed upfront, we're happy to talk through how we structure it.