Preliminary enforcement actions against artificial intelligence companies sent waves through the nascent industry, but stakeholders are just beginning to grapple with what regulation will look like in the future.

That was the theme of two panels focused on regulators' views on AI at the IAPP Global Privacy Summit 2024, held in Washington, D.C. While speakers said there are already clear areas where AI is regulated, addressing overlapping areas of regulation, finding global agreement and gathering the resources to enforce policies all pose challenges to stakeholders in both the private and public spheres.

Some of the conversation between regulators at one breakout session focused on the EU General Data Protection Regulation, which has served as a template for other countries' data protection regulatory schemes and preliminary AI guardrails. The GDPR has a heavy focus on personal data processing — a category that applies in the majority of AI use cases, said Gintare Pazereckaite, a legal officer with the European Data Protection Board.

Pazereckaite said the influence can be seen in enforcement cases addressing how automated decision-making technology can be used in credit scoring, education, distributing social benefits and detecting tax fraud. Provisions in the GDPR also require a legal basis for collecting data, drawn from a list she indicated "is not very long in the context of AI."

The newly minted EU AI Act is likely to change much of how the industry is regulated. However, regulators will continue to look to the GDPR to deal with many AI privacy issues. Pazereckaite said the GDPR will still apply to what rights a person has when their information is included in large datasets, the use cases for that data and the lawfulness of collecting that data.

"The balancing is not an easy exercise and there are very strict requirements to rely on a legal basis," Pazereckaite added. "In some cases, not all controllers will be able to pass the test."  

The need to have a legal basis to collect data is going to be especially important for AI creators. Google Senior Director, International Privacy Legal and Consumer Protection William Malcolm said that's because it is not possible for a model to unlearn information once it has been trained on it.

Google parent company Alphabet and its subsidiaries were sued last year for using internet data to train their AI products. Google's counsel called the suit baseless, arguing the use of public data has been part of the company's AI policies for years, CNN reported.

Malcolm said it is up to companies to give clear, transparent reasons for why and how they use personal data, and to give users control over that data. Google did so when it modified its Gemini product with updated privacy filters, explained how the data is collected and warned that the generative system can sometimes hallucinate.

"All of that is to say, there are practical solutions we can put in place to meet GDPR standards today," he said.

And while the GDPR has set some global standards and the AI Act has the potential to do the same, Mastercard Senior Managing Counsel for Privacy, Data Protection and AI Jasmien César, CIPT, said the two regulations are different and not all countries follow the GDPR exactly.

"I think it's really key here to advocate for global convergence on standards for local interpretation between different regulators," she said. "And I think it is absolutely key for industry, big or small, to engage with regulators."

Rules to limit scraping

How regulators will tackle data scraping is also still being worked out, panelists said in a separate breakout session on the topic. The panel focused on the aftereffects of the Joint Statement on Data Scraping and the Protection of Privacy issued by 12 data protection authorities in August 2023.

The joint advisory made it clear that publicly available information still falls under data and privacy protection laws and that social media companies should take steps to protect it.

Such practices have gained renewed scrutiny as several high-profile AI companies revealed their models are trained on massive troves of data scraped from the internet. Some of those companies are facing legal action.

The challenge regulators face is to find a way to allow AI to advance and innovate without undermining people's privacy rights. U.K. Information Commissioner's Office Director of Technology and Innovation Stephen Almond said that while there is a public interest in helping AI advance, there are also concerns about how data becomes irretrievably interwoven into an AI model once it is trained.

The ICO issued a preliminary warning to Snapchat last year, charging that the company did not conduct an adequate risk assessment before launching its "My AI" chatbot. The ICO is also launching consultations on how data protection laws should be interpreted on that issue while exploring how companies determine which data is considered valuable enough to warrant training a model on.

"I think these are all really big questions for us to explore, but some of them are still quite open," Almond said.

Australian Privacy Commissioner Carly Kind said regulators face a semantics challenge as well. For example, Kind pointed to companies that pledge in their terms and conditions to prevent unpermitted data scraping, noting that what such a promise means depends on the local legal landscape.

"I think the highest standard is certainly unlawful rather than unauthorized," Kind said. "And that will play out in different jurisdictions."