
A deep dive into Europe's approach to personal data processing in AI systems

Contributors:

Henrique Fabretti Moraes

CIPP/E, CIPM, CIPT, CDPO/BR, FIP

Country Leader, Brazil, IAPP; Managing Partner

Opice Blum

Helena Dominguez Bianchi

CIPP/E

Lawyer

Opice Blum, Bruno, Abrusio e Vainzof Advogados Associados

The field of artificial intelligence, particularly the rise of large language models, presents unprecedented opportunities intertwined with complex data protection concerns.

European regulators at the forefront of data protection are keen to ensure AI innovation is in line with the principles enshrined in the EU General Data Protection Regulation.

The Baden-Württemberg Commissioner for Data Protection and Freedom of Information's recent publication, "Legal bases in data protection in the use of artificial intelligence," breaks down the complexities of processing personal data within AI systems, raising critical questions about data collection, training methods and the potential impact on the rights of data subjects.

Studies in this area have already drawn on the perspectives of other prominent regulators, including France's data protection authority, the Commission nationale de l'informatique et des libertés, the U.K. Information Commissioner's Office, Germany's Hamburg Commissioner for Data Protection and Freedom of Information and the European Data Protection Board.

Analyzing their investigations, guidelines and studies can shed light on key data protection considerations shaping the responsible development and use of AI systems in the European landscape.

Data protection compliance throughout the AI life cycle

Given data's extensive use in AI — particularly the training of AI systems on massive datasets — the GDPR comes into play whenever the data processed relates to an identified or identifiable natural person.

While LLMs may not be explicitly designed to process personal data, they can inadvertently store or indirectly reveal such information through outputs or inferences. The abstract nature of language processing in LLMs doesn't negate potential privacy risks. Therefore, data storage, model outputs, the potential for re-identification — through model attacks, for example — and the evolving nature of technology need to be carefully considered from a data protection and privacy perspective.
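To make the output-side risk concrete, the minimal Python sketch below screens a model completion for common personal-data patterns before release. Everything in it, including the regex patterns and the scan_output helper, is a hypothetical simplification for illustration; production-grade detection of personal data requires far more robust techniques, such as named-entity recognition and locale-aware validation.

```python
import re

# Illustrative, deliberately crude patterns for common personal-data formats.
# Real detection needs locale-aware, context-sensitive tooling.
PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "phone": re.compile(r"\+?\d[\d\s().-]{6,}\d"),  # loose: may also catch other numeric IDs
    "iban":  re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b"),
}

def scan_output(text: str) -> dict[str, list[str]]:
    """Return substrings of a model output that match personal-data patterns."""
    return {label: hits for label, rx in PATTERNS.items() if (hits := rx.findall(text))}

# Hypothetical completion in which the model reproduces memorized contact details.
completion = "You can reach Jane at jane.doe@example.com or +49 30 1234 5678."
print(scan_output(completion))
# {'email': ['jane.doe@example.com'], 'phone': ['+49 30 1234 5678']}
```

A screen of this kind addresses only one of the risks listed above; storage within the model itself and re-identification through model attacks call for separate safeguards.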
