On Wednesday, the Digital Advertising Alliance announced it is launching a stakeholder review to determine whether new guidance is needed on the collection and use of interest-based advertising (IBA) data as artificial intelligence systems increasingly leverage such data.

The potential guidance would address how advertisers should apply the DAA's Self-Regulatory Principles to business practices that leverage AI in connection with IBA data.

"As the advertising industry increasingly looks to AI tools and systems, it's vital that industry codes of conduct reflect that reality to serve companies and their consumers," DAA President and CEO Lou Mastria said in a statement. "This review will look at the steps companies can take to ensure they are providing appropriate information and control to consumers around the collection and use of IBA data by those systems, thus enabling responsible and sustainable consumer engagement and growth."

According to the DAA, key issues it will consider include identifying the relevant industry participants, understanding current and anticipated use cases for IBA data in AI systems, reviewing consumer expectations around the collection of IBA data, and analyzing existing regulatory gaps and overlaps should it issue subsequent AI guidance.

The review will be managed by the DAA's Principles and Communications Committee, which will convene stakeholder meetings with trade associations, advertisers, publishers, advertising technology providers and ad agencies.

"While the committee's work is just starting, it will likely include a review of the current state of play around AI collection and use of IBA data in the industry, including any current or anticipated use cases, before considering the details of any potential guidance," Mastria said in an email to the IAPP. "We don't want to get ahead of that process, so it's too early to share specific plans, outcomes or timelines."

Beyond the DAA's Self-Regulatory Principles, Loeb & Loeb Partner Jessica Lee, CIPP/E, CIPP/US, CIPM, said the AI revolution "presents unique challenges to complying with several privacy frameworks." She questioned what concepts like consumer choice mean in an AI context and whether respecting consumers' rights could, for example, require "machine unlearning" as a potential corrective action.

Any forthcoming DAA guidance on the use of AI, Lee said, could prove as valuable as the DAA's Best Practices for the Application of the DAA Self-Regulatory Principles of Transparency and Control to Connected Devices.

"I see the value in similarly providing best practices for the application of the DAA's principles to the use of AI," Lee said in an email. "The U.S. is unlikely to have its own AI Act anytime soon, so if history repeats itself, we may see a number of organizations providing best practices and other frameworks that companies can look to demonstrate that they are using AI responsibly and those will become the industry norm until regulation steps in and takes the reins."

The multistakeholder approach to reviewing the DAA's Self-Regulatory Principles with an eye toward how AI advancements will affect the adtech industry is a sound move, according to Greenberg Traurig Shareholder Darren Abernethy, CIPP/A, CIPP/C, CIPP/E, CIPP/G, CIPP/US, CIPM, CIPT, FIP, PLS. He said advancements in AI alone should not have necessitated this DAA review; however, responding to issues such as regulators' enforcement efforts surrounding uses of generative AI that could "materially mislead a consumer as to a product's characteristics" may require further DAA guidance.

"I agree with the approach announced by the DAA of inviting industry stakeholders to participate in the process of analyzing the existing self-regulatory principles, looking at the myriad uses — and potential abuses — of AI in advertising, and making a determination as to whether AI-related principles or guidance would be appropriate for this industry body to publish for potential implementation," Abernethy said in comments to the IAPP. "It is likely that any analysis will take into account regulators' statements and enforcement in relation to deceptive statements and practices around AI in marketing, and potentially clarify some guardrails there as to what constitutes (un)acceptable practices."

Based on general conversations with clients, Abernethy said advertisers' mood regarding leveraging IBA data with AI can be described as "cautious enthusiasm."

"There is a recognition that the incorporation of AI into data-driven marketing practices is likely to have some macro effect on the labor force and current job roles within the ad industry, but this is balanced against the excitement and anticipation of using AI/ML for optimizing and refining campaigns, generating unique content, inferring hyper advanced analytics insights, realizing time savings and personalization," Abernethy said. "In relation to the impact on the use of IBA data, when used effectively, AI can help expand the insights derivable from both first- and third-party datasets, which creates more possibilities whether a company has begun shifting away from outsized reliance on third-party cookies or is doubling-down on it."

While it is too early to say what guidance the DAA's review may produce, Lee said one result could be increased transparency requirements for advertisers, whether in the form of revised privacy notices, disclosures on websites or disclosures contained within the ads themselves. However, she said any new disclosure requirements should not duplicate disclosures already being made.

"My concern is always whether we are helping consumers or confusing them," Lee said. "My hope is that any guidance that is created allows companies to build on what they have in place today, and focuses on addressing any net-new risks created by the use of AI, rather than adding new obligations that are already addressed elsewhere."

Alex LaCasse is a staff writer at the IAPP.