Top 10 operational impacts of the EU AI Act – Leveraging GDPR compliance

This article is part of a series on the operational impacts of the EU AI Act. The full series can be accessed here.


Published: November 2024



The EU AI Act mentions the EU General Data Protection Regulation, Regulation (EU) 2016/679, more than 30 times throughout the recitals and articles that define the European framework for the development and deployment of high-risk AI systems and general-purpose AI models.

This does not come as a surprise: many AI models are trained on datasets that include personal data, and most AI systems are used by humans who can be identified through their usernames or other log-in credentials.

In addition, both regulations aim to protect the fundamental rights of individuals and to promote the responsible use of data, as outlined in Recital 10 of the AI Act. The GDPR safeguards, in particular, the right to the protection of personal data. The AI Act focuses primarily on the health and safety of individuals, as well as other fundamental rights protecting democracy, the rule of law and the environment.


Personal data and AI

The AI Act includes specific rules covering biometric data, profiling and automated decision-making, all of which also fall within the scope of the GDPR. Furthermore, the AI Act clarifies that the GDPR always applies when personal data is processed. Such processing is subject to GDPR rules whenever it takes place within the regulation's territorial scope, per Article 2, and the data processed is personal, meaning it relates to an identified or identifiable natural person, the data subject, per Article 4.

If personal data is used to train an AI model, for instance to improve a picture uploaded to an online photo editor, or simply because a user logs onto the AI system with their name and email address, the GDPR rules need to be followed as usual. First and foremost, processing personal data requires a legal basis, according to GDPR Article 6. In the context of model training and the deployment of AI systems, there are three main legal grounds to consider: the legitimate interests pursued by the controller or by a third party, contractual necessity, and the data subject's consent. Other legal grounds can also justify processing personal data in specific circumstances, such as when the "vital interests" of a data subject are protected in emergency situations.
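
To make the legal-basis requirement concrete, here is a minimal Python sketch, with entirely hypothetical names and no claim to legal completeness, of how an organization might record the Article 6 basis for each AI-related processing activity and flag activities where one is missing.

```python
# Minimal sketch: documenting a GDPR Article 6 legal basis per AI-related
# processing activity. All names and the enum values are illustrative.
from dataclasses import dataclass
from enum import Enum


class LegalBasis(Enum):
    CONSENT = "Art. 6(1)(a) consent"
    CONTRACT = "Art. 6(1)(b) contractual necessity"
    VITAL_INTERESTS = "Art. 6(1)(d) vital interests"
    LEGITIMATE_INTERESTS = "Art. 6(1)(f) legitimate interests"


@dataclass
class ProcessingActivity:
    name: str
    purpose: str
    legal_basis: LegalBasis | None = None  # None means undocumented


def missing_legal_basis(activities: list[ProcessingActivity]) -> list[str]:
    """Return the names of activities that lack a documented legal basis."""
    return [a.name for a in activities if a.legal_basis is None]


activities = [
    ProcessingActivity("photo-enhancer-training", "model training",
                       LegalBasis.LEGITIMATE_INTERESTS),
    ProcessingActivity("user-login", "account authentication"),  # not yet documented
]
print(missing_legal_basis(activities))  # ['user-login']
```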

The AI Act specifically addresses the use of sensitive personal data, or "special categories of personal data" in the language of GDPR Article 9. Article 10 of the AI Act provides a legal ground for processing these special categories of data, specifically and exclusively for bias detection and correction in relation to high-risk AI systems. However, this exception only applies if certain conditions are met; among them, bias detection and correction cannot be effectively achieved using other, nonsensitive data, including synthetic or anonymized data. The AI Act also requires sensitive personal data used for bias mitigation to be safeguarded by technical measures, including the pseudonymization of sensitive data, to limit the reuse of the data and, more broadly, to enhance security and privacy protection.
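
As an illustration of such a technical measure, the following Python sketch, using hypothetical field names and a placeholder key, shows one common pseudonymization pattern: replacing direct identifiers with a keyed hash so records remain linkable for bias detection while the mapping back to individuals stays with the key holder.

```python
# Minimal pseudonymization sketch for bias-detection datasets.
# Field names are hypothetical; the key below is a placeholder and would
# be stored separately (e.g., in a secrets vault) in practice.
import hashlib
import hmac

SECRET_KEY = b"placeholder-keep-in-a-vault"


def pseudonymize(user_id: str) -> str:
    """Derive a stable pseudonym via HMAC so records stay linkable without the raw ID."""
    return hmac.new(SECRET_KEY, user_id.encode(), hashlib.sha256).hexdigest()[:16]


records = [
    {"user_id": "alice@example.com", "ethnicity": "group_a", "model_score": 0.81},
    {"user_id": "bob@example.com", "ethnicity": "group_b", "model_score": 0.42},
]

pseudonymized = [
    {"pid": pseudonymize(r["user_id"]),
     "ethnicity": r["ethnicity"],        # retained only for bias detection
     "model_score": r["model_score"]}
    for r in records
]
print(pseudonymized)
```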

Privacy-enhancing technologies are important tools for resolving the potential conflict between the GDPR's data minimization principle and the need to process large datasets, which can help ensure AI systems make fair and accurate assumptions. Various PETs, including anonymization, synthetic data, federated learning and fully homomorphic encryption, also available as open source or "as a service," can help unlock the value of personal data in the AI context in compliance with GDPR rules.
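
As one simplified example of an anonymization-related check, the sketch below, with illustrative field names and an illustrative threshold, tests whether a dataset satisfies k-anonymity over a set of quasi-identifiers before it is released for model training.

```python
# Minimal k-anonymity check: every combination of quasi-identifier values
# must occur at least k times. Field names and k are illustrative.
from collections import Counter


def is_k_anonymous(rows: list[dict], quasi_identifiers: list[str], k: int) -> bool:
    """True if each quasi-identifier combination appears in at least k rows."""
    groups = Counter(tuple(row[q] for q in quasi_identifiers) for row in rows)
    return all(count >= k for count in groups.values())


rows = [
    {"age_band": "30-39", "postcode_prefix": "10", "diagnosis": "x"},
    {"age_band": "30-39", "postcode_prefix": "10", "diagnosis": "y"},
    {"age_band": "40-49", "postcode_prefix": "20", "diagnosis": "z"},
]
# False: the ("40-49", "20") group contains only one record.
print(is_k_anonymous(rows, ["age_band", "postcode_prefix"], k=2))
```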


Common principles and approaches

The GDPR kicks in only when personal data is processed, regardless of whether AI is involved. In contrast, the AI Act applies irrespective of whether personal or nonpersonal data is used. Nevertheless, both regulations share some common principles and approaches to implementing their respective provisions, and both are well known to most privacy professionals. Key principles like accountability, fairness, transparency, accuracy, storage limitation, integrity and confidentiality, which are fundamental for processing personal data under Article 5 of the GDPR, are also enshrined in the AI Act and, as such, are not a novelty to companies processing personal information.



AI and privacy compliance approaches

Compliance with privacy laws requires a systematic approach that stretches across all levels of an organization, large or small. When the GDPR and similar privacy laws became applicable, many companies implemented global privacy management systems to cope with the rapidly expanding regulatory landscape and the increasing amount of personal and nonpersonal data utilized in a business context. In many cases, an existing privacy management system or, more broadly, a governance, risk and compliance system is the ideal starting point for tackling the AI-related requirements stemming from the AI Act and the other AI-adjacent laws that will emerge over the coming months and years.

A core element of each privacy management system is an inventory of data processing activities, flows and applications using personal data. Organizations can leverage this inventory to also cover AI models, applications and the data used to develop and operate AI systems. Such an integrated governance system can capture, integrate and make transparent the metadata related to the entire AI life cycle, from design to deployment to everyday use. It can also assess and monitor the risks related to the processing of sensitive personal data, as well as the specific risks associated with AI models, such as general-purpose AI models with systemic risk, per AI Act Article 51, or high-risk AI systems.
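
A minimal sketch of what such an extended inventory record might look like follows; all field names are assumptions for illustration, not a standard schema.

```python
# Illustrative inventory record combining privacy metadata with
# AI-specific life-cycle and risk-classification fields.
from dataclasses import dataclass, field


@dataclass
class AIAssetRecord:
    asset_id: str
    description: str
    lifecycle_stage: str               # e.g. "design", "deployment", "in use"
    risk_class: str                    # e.g. "high-risk", "GPAI with systemic risk"
    personal_data_used: bool
    training_datasets: list[str] = field(default_factory=list)
    linked_dpia_ids: list[str] = field(default_factory=list)


record = AIAssetRecord(
    asset_id="ai-0042",
    description="CV screening model",
    lifecycle_stage="deployment",
    risk_class="high-risk",
    personal_data_used=True,
    training_datasets=["hr-applications-2023"],
    linked_dpia_ids=["dpia-117"],
)
```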

Privacy management systems are often the gateway to relevant documents, such as data processing agreements, vendor contracts, consent forms and records of processing activities, as well as to key performance indicators, like the number of registered inventory assets, response times to data subject requests, the number of DPIAs or privacy training completion rates.

For compliance with the AI Act, and for AI governance in general, these features can also be used to help create AI fact sheets, prepare user-facing information, and collect and analyze the behavior of AI systems, presenting relevant information and KPIs in dashboard views.
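
As an example of how such KPIs might be derived from governance records, here is a brief Python sketch with illustrative data and metric names.

```python
# Deriving simple dashboard KPIs from governance records:
# asset counts, average response time to data subject requests,
# and DPIA coverage for high-risk systems. Data is illustrative.
from statistics import mean

assets = [
    {"id": "ai-0042", "high_risk": True, "has_dpia": True},
    {"id": "ai-0043", "high_risk": True, "has_dpia": False},
    {"id": "ai-0044", "high_risk": False, "has_dpia": False},
]
dsr_response_days = [12, 25, 8, 30]  # days taken to answer data subject requests

kpis = {
    "registered_assets": len(assets),
    "avg_dsr_response_days": mean(dsr_response_days),
    "high_risk_dpia_coverage": sum(
        1 for a in assets if a["high_risk"] and a["has_dpia"]
    ) / sum(1 for a in assets if a["high_risk"]),
}
print(kpis)  # {'registered_assets': 3, 'avg_dsr_response_days': 18.75, 'high_risk_dpia_coverage': 0.5}
```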

While existing privacy management approaches are a good starting point, AI governance poses unique challenges. The interplay between personal and nonpersonal data, AI models and AI systems is far more dynamic and complex than in the typical privacy environment. Moreover, the deep technical expertise of data engineers and data scientists is required to fulfill certain requirements of the AI Act. Automation is a key component in managing that complexity and applying technical skills at scale. AI and data governance platforms can provide the tools needed for an integrated and continuous compliance approach, supporting organizations in coping with the plethora of new privacy, data governance and AI regulations, such as the AI Act.


