Use AI to easily animate all of your static ads

Static-to-video in Midjourney, product insertion with Flux Kontext, and a reminder that no model can write a line like “your Grandma would croak.”

Preamble: Don’t Revert to the Mean With AI

I know this is a newsletter about AI and performance creative, but I need to say the quiet part out loud: stop using AI to the point where you revert to the mean. And yes, believe it or not, I actually write this newsletter the old-fashioned way, with a keyboard and a blank screen (that’s how it all started, yeah?).

We, as 21st-century, socially addicted humans, have been outsourcing and screening our thought processes by relying on reviews, TikTok takes, IG influencers, etc. And yes, decision fatigue is a thing (we consciously or unconsciously make ~35,000 decisions a day… I mean gee golly, Squidward). But while AI can help amplify individual thought and find trends in data, it lacks any soul.

Use it to get 80% of the way there: feed the beast your data, have it ID trends, and let it help you flesh out a concept—*or go from 0 to 0.75 with a concept*—but please do not outsource your thinking to it.

Remember when we all said copywriters are toast when ChatGPT first came out?

I would argue that copywriters are actually the MOST important part of the creative process now!

AI can help you figure out WHO (persona) to target, WHAT (angle) to target them with, and WHY (motivation) they would purchase, but it is not going to write something like “Indian meals so authentic, your Grandma would croak over them.”

As Oscar Wilde once said, “Be yourself, Sam Altman is already taken,” or something to that effect.

And on that note, let’s talk about how AI can help you AMPLIFY your individuality and thought processes.

*Intentional em dashes!

You should be animating all your existing static ads with Midjourney: here’s how.

A static ad and a video ad are treated differently in the Facebook auction.

In a nutshell, Meta’s auction system doesn’t say “video beats static” or vice versa. Instead, whichever format has the better predicted performance for the specific audience, placement, and objective will get favored delivery and potentially lower costs.

That’s why testing both formats in the same ad set is key; the algorithm will naturally route spend toward the perceived winner.

So why wouldn’t you animate every single one of your statics as MP4s within Midjourney? It’s such an easy way to get incremental lift and two shots on goal with the same concept.

Original Static

Animated Version

Here is a video outlining my process:

TL;DW:
AI gets you 80% of the way there on creative ideas, but animate every static you make for extra performance.

  1. Tools: GPT-5 (or Claude) to identify what to animate + Midjourney v7 for the actual animation.

  2. Process:
     • Upload your static → ask GPT what elements to animate (text, icons, shapes, products).
     • Pick one → get a Midjourney prompt that focuses only on that element.
     • Upload static to Midjourney → generate 4 animated options → pick the best.

  3. Why bother: Animated formats often out-predict statics in the ad auction. Same concept, animated = more reach and better odds of winning impressions.
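If you want to script step 2 instead of pasting into the chat UI, the “ask GPT what to animate” request can be built for the OpenAI vision API. This is a minimal sketch, not my actual workflow—the helper name, the exact wording, and the model string are assumptions:

```python
import base64


def build_animation_query(image_bytes: bytes) -> list:
    """Build chat messages asking a vision model which single element
    of a static ad to animate (hypothetical helper, for illustration)."""
    b64 = base64.b64encode(image_bytes).decode()
    return [{
        "role": "user",
        "content": [
            {"type": "text",
             "text": ("Here is a static ad. List the elements worth animating "
                      "(text, icons, shapes, product), pick ONE, and write a "
                      "Midjourney prompt that animates only that element.")},
            # Inline the image as a base64 data URL, per the vision API format.
            {"type": "image_url",
             "image_url": {"url": f"data:image/png;base64,{b64}"}},
        ],
    }]


# To actually send it (assumes the `openai` package and an API key):
# from openai import OpenAI
# reply = OpenAI().chat.completions.create(
#     model="gpt-5",  # model name per the post; swap for whatever you run
#     messages=build_animation_query(open("static_ad.png", "rb").read()),
# )
```

The payload builder is separated from the network call so you can reuse it in an n8n or Zapier webhook step.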

The best image product insertion model that no one seems to be talking about

The Flux Kontext Max model is easily the best model for product insertion. It gets your label right the vast majority of the time, and once you know that, the world is your oyster.

The key to success: you NEED to be prompting with JSON prompts, otherwise it will hallucinate too much.

How do you get JSON prompts? Same way you get all your prompts. Just ask GPT-5 or Claude to give you your prompt in JSON format.

Input

Output

Prompt:

{
  "prompt": "Photo-realistic image of the provided beer can sitting on the outer hull of a spacecraft in orbit. The can is standard 12 oz size, matching the exact design from the reference image, with accurate proportions and no distortion. The spacecraft surface has metallic panels, bolts, and subtle reflections from Earth and sunlight. Background shows the curve of Earth, atmosphere glow, and distant stars.",
  "style": "ultra-realistic, high-resolution",
  "references": [
    {
      "image": "suspended-in-a-sunbeam-pils-best-na-beer-33401807503469 (1).webp",
      "use_for": "exact beer can design and proportions"
    }
  ]
}

We use the Flux Kontext model on fal.ai, and honestly most of the models we like to use—aside from Midjourney—are on there. It also has great API access if you want to get cheeky with n8n or Zapier (I’d recommend the former… open source and so much more adaptable).
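If you do go the API route, the JSON prompt above can be assembled and submitted programmatically. A rough sketch, assuming fal.ai’s `fal_client` Python package—the helper name, endpoint id, and argument names are my assumptions, so check fal.ai’s model page before relying on them:

```python
import json


def build_kontext_payload(scene: str, reference_image_url: str) -> dict:
    """Wrap a plain-English product-insertion request in the JSON prompt
    structure from the newsletter (hypothetical helper)."""
    prompt_json = {
        "prompt": scene,
        "style": "ultra-realistic, high-resolution",
        "references": [
            {"image": reference_image_url,
             "use_for": "exact product design and proportions"},
        ],
    }
    # Serialize the whole JSON object as the prompt string, and pass the
    # product shot separately as the reference image.
    return {"prompt": json.dumps(prompt_json),
            "image_url": reference_image_url}


# Submitting (requires `pip install fal-client` and a FAL_KEY env var;
# endpoint id is a best guess):
# import fal_client
# result = fal_client.subscribe(
#     "fal-ai/flux-pro/kontext/max",
#     arguments=build_kontext_payload(
#         "Photo-realistic image of the provided beer can on the hull "
#         "of a spacecraft in orbit",
#         "https://example.com/beer-can.webp",
#     ),
# )
```

Keeping the payload builder separate means the same dict can feed an n8n HTTP node instead of the Python client.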

I am obviously not sponsored, but having a one-stop shop is great.

Okay, that’s all for now.

We are all overcoming hand, foot, and mouth disease over here in Brooklyn.

The only one who has yet to get it is Linus, our dog, and we can’t have him having paw paw mouth.

If you have questions, feel free to message me.

Have a great weekend!

Will