Uncertainty over how best to disclose the use of artificial intelligence-generated content in advertising is just one of the challenges facing marketers as the EU AI Act comes online.

Kai Zenner, the head of staff for German Member of European Parliament Axel Voss, told attendees at the IAPP AI Governance Global 2024 that issues around advertising and marketing were among the areas of the AI Act that negotiators left broad to make them "future proof." He said further definition in areas that may impact the advertising space is likely to come via codes of conduct as the act is implemented, rather than through guidelines, but such information will probably not be available until 2026.

"Back then during our negotiations, even academics couldn't agree if watermarking is the solution for artificially generated content," he said.

Zenner's remarks underscore the complicated relationship advertising has with AI and the care businesses will have to take to stay in compliance. The act is largely focused on putting guardrails around "high-risk systems," those that could significantly impact a person's rights. How various aspects of the law will be carried out will be determined by the European Commission as well as the European AI Office.

Nathalie Laneret, CIPP/E, CIPM, the vice president of government affairs and public policy for the advertising company Criteo, noted that advertising and marketing are not listed as high-risk, except when it comes to targeted job advertisements under the employment category. She said that is in line with prior standards set by the EU General Data Protection Regulation, adding that most day-to-day marketing operations will likely not be affected.

"I think that it is obvious from those two sets of laws that what we are talking about is a very high standard," she said.

But the interlocking of various laws also leaves open the possibility for confusion, Zenner countered. He noted that dark patterns, where people are manipulated or tricked into making a harmful decision through an interface or algorithm, were initially considered for inclusion as a use case in the AI Act but were discarded because such practices are already forbidden under the Digital Services Act. A similar concern was raised about political advertisements but was also dropped with the advent of a more targeted regulation.

"It's really a shot in the dark so far and maybe we will actually see what AI systems in the end are affected," he said.

But also, if regulators had not pushed to finish the act, "Maybe we would have found more use cases that are under the radar," Zenner added.

The act raises other questions for advertisers, such as how regulators will reconcile the GDPR's limits on personal data with how that information is judged within the act's risk hierarchy, Laneret said. She hoped discussions around the act in the coming months would provide more of a compass for defining risk in the digital world overall.

But Sachiko Scheuing, CIPP/E, the European privacy officer for marketing company Acxiom, said there are steps companies can take to prepare for further guidance. She said many of the AI governance requirements within the act mirror those required for data protection and are replicable.

She also said it should not be surprising if further advertising-focused guidance comes down even if most applications are considered low-risk, given the prevalence of AI in the marketing world. A company survey from Acxiom found 47% of marketers are using AI in some function, she said, and she urged listeners to begin documenting how they plan to comply with the regulation, even if future guidance does not focus heavily on their industry.

"I think it is inevitable for some sort of government framework to be put in," she said. "But (the risk) in advertising is only, shall we show this advertisement or not? And it has got nothing to do with real harm to the individual. So I think it is very much off the radar at the moment."

Caitlin Andrews is a staff writer for the IAPP.