🌀 Procedurally Generated Ads
Advertising Variants on Netflix
For The Queen's Gambit (and many other series), Netflix produces over a dozen alternate thumbnails, then displays one to you based on your viewing history. Gone are the days of writing the perfect copy, or the perfect poster. The future is about templates and targeting. Analytics can determine how to render certain parts of an ad for each viewer. Here's an example (or maybe just paranoia):
Modular Music, Modular Elements
I was watching some acoustic guitar covers on YouTube. Eventually, it cut to a Grammarly ad, which also had acoustic guitar in the background. The transition was so smooth, it seemed planned. Could this be one of many Grammarly ads? Maybe they made 10 versions of the same ad, each with different background music (acoustic, electronic, rock, etc.), and then targeted different YouTube tags.
Instead of thinking along the lines of "variants," we can think of an advertisement as a singular object with modular components. The music, the background image, the font, the characters - they can all be swapped in and out based on the context.
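To make the "modular components" framing concrete, here's a rough sketch in Python (all names are hypothetical, not any real ad platform's API) of an ad treated as one template whose parts get swapped in based on viewer context:

```python
from dataclasses import dataclass

@dataclass
class AdTemplate:
    """One ad: a fixed message plus swappable modular components."""
    message: str
    music_options: dict       # viewing-history tag -> music track
    background_options: dict  # location/interest tag -> background image

def render_ad(template: AdTemplate, viewer_context: dict) -> dict:
    """Choose each modular component based on what's known about the viewer."""
    music = template.music_options.get(
        viewer_context.get("recent_genre"), "default.mp3")
    background = template.background_options.get(
        viewer_context.get("recent_location"), "default.jpg")
    return {"message": template.message, "music": music, "background": background}

# A viewer who has been watching acoustic covers gets the acoustic cut.
ad = AdTemplate(
    message="Write with confidence.",
    music_options={"acoustic": "acoustic.mp3", "electronic": "synth.mp3"},
    background_options={"vermont": "autumn_hills.jpg"},
)
print(render_ad(ad, {"recent_genre": "acoustic", "recent_location": "vermont"}))
```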
Procedurally Generated Ads
The fusion of analytics, psychographics, and advertising is already starting to spook us. (And AI is barely in the mix yet.) We're starting to feel the danger of "archetypes" being targeted and manipulated to act. But what if advertisers could get even more fine-grained than "conservative boomer who likes conspiracy theories"? For any given campaign, what if everyone got a personalized ad?
This becomes more and more possible as artificial intelligence becomes generative. The concept of "procedural generation," which is usually found in video games, could spill into advertising. A procedurally generated world is one that isn't built in advance. For example, there could be a formula that randomly spawns trees, rocks, and hillsides. Only when a character steps forward do the details in their surroundings come to life.
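For anyone who hasn't seen the technique in practice, here's a toy Python sketch of that idea: nothing is built ahead of time, and each patch of the map is computed from a seed only when the character actually reaches it.

```python
import random

def generate_tile(world_seed: int, x: int, y: int) -> str:
    """Spawn scenery for one map tile, on demand.

    Nothing is stored in advance; the same seed and coordinates
    always reproduce the same trees, rocks, and hillsides.
    """
    rng = random.Random(hash((world_seed, x, y)))
    return rng.choice(["tree", "rock", "hillside", "empty"])

# The world only comes to life along the path the character walks.
for step in range(5):
    print(generate_tile(world_seed=42, x=step, y=0))
```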
This is done so that environments always feel new and different. In advertising, it will be done so that an ad always feels relevant. Analytics can be used to gather source material for an AI to compose from:
- Your recent vacation to Vermont
- The breed of your pet dog
- The celebrity you've been binging on YouTube
- The music you've been listening to
It's not that the source material will be directly used in the ad (at least not to start), but it could help determine how the template is expressed. A modular ad with just 4 dynamic components, each with 50 expressions, comes out to 6.25 million variants. (50 backgrounds, 50 dog types, 50 actors, 50 songs, and every combination.) Your data helps determine how each of the 4 components is expressed.
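If you want to check that math, the combinatorics fit in a few lines of Python (the component names here are just illustrative):

```python
# Four modular components, each with 50 possible expressions.
components = {"background": 50, "dog_breed": 50, "actor": 50, "song": 50}

total_variants = 1
for option_count in components.values():
    total_variants *= option_count

print(f"{total_variants:,}")  # 50 ** 4 = 6,250,000 possible variants
```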
Regardless of what's being sold, the ad starts from a place of unconscious resonance. The act of crafting ads could radically change. The goal might eventually be to craft non-specific "templates" that can be repurposed for the maximum number of custom scenarios.
AI-Ready Content Libraries
Using green-screens and random actors, millions of variants could be spawned. We're only in the early non-scalable days of deep fakes right now, but this is just one way that deep fakes could be commercialized and adopted by big studios.
You might see Rachel McAdams selling skincare products. You won't know if she was ever in the studio, or if they just used an image library and deep fakes to map her onto a generic actress. You also won't know if anyone else besides you has seen Rachel McAdams in this specific ad (maybe you watched Mean Girls one too many times).
We're already seeing how it's possible to map faces onto videos (Tom Cruise deep fakes). It's not hard to imagine a future where we don't bother real actors anymore (they can collect royalties and capitalize on their brand, with no active effort). Advertising companies will have source data of thousands of famous figures, backgrounds, dialects, conversation templates, fonts, etc.
It currently takes several hours to render a deep fake, but what happens when it gets quick and cheap? Maybe every ad campaign will create a custom iteration for every citizen in a company's database. It's there, waiting for you on a server, just in case you happen to click into the right part of town.
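One plausible shape for that (purely a sketch with made-up names, not a description of any real ad server) is lazy generation with a per-user cache: the personalized cut gets rendered the first time it's needed, then sits on a server waiting for your next impression.

```python
# (campaign_id, user_id) -> URL of the rendered personalized video
_render_cache: dict = {}

def render_deepfake_variant(campaign_id: str, user_id: str, user_data: dict) -> str:
    """Placeholder for the expensive compositing / face-mapping pipeline."""
    return f"/cdn/{campaign_id}/{user_id}.mp4"

def get_personalized_ad(campaign_id: str, user_id: str, user_data: dict) -> str:
    """Return the cached personalized render, generating it on first request."""
    key = (campaign_id, user_id)
    if key not in _render_cache:
        # Today this step takes hours per deep fake; the whole premise
        # is what happens when it becomes quick and cheap.
        _render_cache[key] = render_deepfake_variant(campaign_id, user_id, user_data)
    return _render_cache[key]

# First impression renders and caches; later impressions are instant lookups.
print(get_personalized_ad("skincare_q3", "user_8421", {"favorite_actor": "Rachel McAdams"}))
```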
World of Mirrors
Advertising aside, it's shocking to think of the role AI might have in media. Assuming your preferences can be encoded by an algorithm, a script could generate and curate libraries of original content, specifically designed for your taste. There will be a flood of content, never conceived by a human, waiting for you to be amazed by it:
- New albums based on your music taste
- Full novels based on your favorite authors
- Compressed versions of non-fiction that tie into your interests
- Procedurally generated sitcoms
We might get to a point where citizens rush to populate a neural network with all of their data, because they see the value of the returns. It's already here in a non-generative version. The more you use Spotify, the better your Discover Weekly gets. When this becomes generative, it could lead to a "grand illusion" type of scenario. (Everyone will have a Beatles album written for them.)
Terence McKenna hints at "the end of linear history," and McLuhan talks about a similar concept of "electronic tribalism." Both indicate a shift from a shared consensus reality to a new landscape of fractured tribes, each with their own version of the truth. AI could lead to the most extreme expression of this fracturing: a custom reality for every individual.
Guidelines to Avoid Hell
It sounds creepy and dystopian, but the artifacts that AI produces for us (guided by us) could actually elicit feelings of satisfaction, transformation, and deep mystery. Procedurally generated content could be something people place their trust in, or even attach religious significance to.
Where there's potential for good, there's potential for mega-corporations to abuse. In order to avoid some hell-scape where TikTok generates your own distorted version of reality based on the whims of advertisers, it's important that people:
- Retain control over their training data (the ability to revoke it from an algorithm)
- Have visibility into the actors and intentions behind generation algorithms
- Get transparency on the veracity of what's being generated
- Receive royalties if their data is monetized elsewhere