Two years ago, brands were telling their agencies not to touch AI.
Now those same brands are asking their agencies: why aren't you using AI fast enough?
The panic phase on AI might well be over, but the compliance phase is now starting to kick in.
This was the consensus from a panel we convened with advertising lawyers from five jurisdictions around the globe to take stock of where AI meets the advertising sector.
The picture that emerged from the five jurisdictions (the US, Turkey, Brazil, the UAE and the UK) is one of rapid adoption colliding with a patchwork of rules, many of them still nascent, some of them half-formed, but all of them consequential, especially in the medium and long term.
If you work in advertising, marketing or the creative supply chain, here's what you need to keep on your radar.
Labelling and transparency
The first battleground is transparency.
When an ad uses AI, must someone say so and, if so, how?
In the EU, the Commission is drafting a code of practice that may require an AI icon on marketing material. The upshot? Watch this space.
The US, however, has gone further and faster in one specific area. New York state has passed a law which takes effect in June, requiring disclosure of AI-generated performers in advertisements. Brian G Murphy of Frankfurt Kurnit described the practical chaos this is already producing.
What about, for example, a synthetic hand holding a product? What about the thousands of AI-generated crowd members filling a stadium in the background of a print ad? The law offers few exceptions and no guidance beyond the word "conspicuous". As a result, several brands have already adopted blanket policies against using synthetic performers.
Even where hard rules don't yet exist, the expectation of transparency is growing. Advertisers should be thinking about how they might develop labelling protocols now.
Dead celebrities and digital doubles
If transparency is the first frontier, the second is identity: specifically, who controls a person's likeness after death, and who controls it during life.
Fabio Pereira of Veirano Advogados described a striking Brazilian case. Volkswagen digitally resurrected the singer Elis Regina, who died in 1982, for a commercial in which she drives an old VW van alongside her daughter in a new one. Brazil's advertising self-regulatory body, CONAR, ruled in Volkswagen's favour: the heirs had given consent, one appeared in the ad, and audiences would reasonably understand the technology involved.
In effect, the ruling held that no disclosure to the audience was required, because viewers can now reasonably be expected to encounter this kind of technology.
The legal position varies wildly by jurisdiction.
In Turkey, Hande Hançar of Gün + Partners noted that personality rights end at death, which leaves an open question about whether heirs can claim economic rights over a deceased person's likeness.
But the more commercially significant issue concerns the living.
New laws in New York, California and Illinois now require that contracts for digital replicas of performers include a "reasonably specific description" of what the replica will be used for. The broad, all-encompassing talent releases that advertising lawyers have relied on for decades won't cut it any more.
Copyright: too big to fail
The panellists noted that three IP questions are converging around AI:
- does copyright exist in AI-generated output?
- have AI companies infringed rights by scraping training data?
- and are AI tools producing infringing work as output?
Most jurisdictions agree on one point: a human must be involved in authorship for copyright to attach.
Turkey has just published a draft amendment proposing a common licensing unit so AI companies training on copyrighted works would pay a fee into it.
Brazil requires rights holder authorisation for training, though its 1998 copyright law is overdue for reform.
The US Copyright Office has indicated some training uses may qualify as fair use, but definitive answers remain elusive.
JJ Shaw noted that the UK adds a further complication. Section 9(3) of the Copyright, Designs and Patents Act 1988 suggests copyright can subsist in wholly computer-generated works, a provision that is largely untested and the subject of fierce debate. The government's major consultation on AI and copyright attracted 11,500 responses and ended with ministers going back to square one.
Data and targeting
AI's greatest advertising power may not lie in creation but in targeting: building audience profiles from browsing data, location signals and social media activity, then microtargeting ads with a precision that raises serious privacy concerns.
Brazil's data protection authority has already acted, ordering one Big Tech company to stop using personal data to train AI models until it improved transparency, gave users a genuine opt-out, and addressed the use of children's data.
Minal Sapra of Karawani & Co from the UAE offered a more nuanced perspective. The DIFC and Abu Dhabi Global Market free zones have incorporated AI provisions into their data protection rules, though these apply only within limited geographical areas. Beyond these zones, Minal noted, the regulatory landscape remains at an earlier stage of development.
What good governance looks like
Amid this regulatory uncertainty, the most sophisticated advertisers are building internal guardrails rather than waiting for the law to catch up.
Hande Hançar was direct: human oversight remains "a must. An internal check system which always includes human oversight is the best governance."
On contracts, the old approach (ie, blanket indemnities pushed down to vendors) is breaking down. Brian G Murphy argued for something more realistic: understand what platforms your supply chain is using, ensure enterprise licences are in place and require prompt documentation.
To conclude, the advertising industry has moved past whether to use AI. The questions that matter now are operational: how you label it, how you document it, how you contract for it, and who carries the risk when something goes wrong.
