
Global AI Governance Law and Policy: EU

This article is part of a five-part series co-sponsored by OneTrust. The full series can be accessed here.


Published: May 2024



The EU has been regulating the digital sphere since the early 2000s through legislation on fundamental and other rights such as data protection and intellectual property; infrastructure through security, public procurement and resilience; technology and software such as RFID, cloud computing and cybersecurity; and data-focused legislation, including data access, data sharing and data governance. The European Commission "is determined to make this Europe's 'Digital Decade'," with regulation a core component of that ambition.

In 2018, the European Commission set out its vision for AI around three pillars: investment, socioeconomic changes and an appropriate ethical and legal framework to strengthen European values. The Commission established a High-Level Expert Group on AI (HLEG) of 52 members from civil society, industry and academia to advise on its AI strategy.

In April 2019, the HLEG published its ethics guidelines for trustworthy AI, which put forward a human-centric approach to AI and identified seven key requirements that AI systems should meet to be considered trustworthy.

When European Commission President Ursula von der Leyen took office in December 2019, she pledged to "put forward legislation for a coordinated European approach on the human and ethical implications of Artificial Intelligence" in her first 100 days. In press remarks from February 2020, she highlighted AI's potential to improve Europeans' daily lives and its role in reaching Europe's goal of climate neutrality by 2050. She also set a clear objective of attracting more than 20 billion euros of investment in AI per year over the next decade to defend Europe's position in the field.

That announcement coincided with a Commission white paper setting out policy options for promoting the uptake of AI while addressing the risks associated with certain uses of the technology.

The AI Act, first proposed by the European Commission in April 2021, was then drafted, negotiated and amended fiercely by the Commission, Parliament and Council. The agreed text will soon enter into force, combining a human-centric philosophy with a product safety approach. The AI Act will be a keystone regulation for the development and deployment of AI in the EU and around the world. The requirements set forth in the act, combined with those that will follow from further guidance and implementation, plus the complex intersections of the act itself with the EU's broader digital governance regulatory framework, make for a deep, dynamic and exacting regulatory ecosystem for AI governance in the EU.


Regulatory approach

The AI Act is a regulation, meaning it is directly applicable in all EU member states, and it seeks to harmonize the rules governing AI. Whereas the EU General Data Protection Regulation was created to protect individuals' privacy and data protection rights, the initial proposal for an AI Act was born in the context of product safety, focusing on ensuring AI products and services on the EU market are safe. This manifested in proposed principles and requirements well established in the product safety context, such as technical specifications, market monitoring and conformity assessments. Many of the AI Act's now-final requirements that also protect individual rights originate from the European Parliament's positions and proposals during the trilogue negotiations with the European Commission and Council.

The AI Act is framed around four risk categories of AI systems, each prescribing risk-based measures that the relevant actors in the AI life cycle must implement. During the trilogue negotiations on the draft AI Act, requirements were added for general-purpose AI, effectively creating a fifth category that, importantly, does not preclude the application of requirements attaching to the other risk-based categories. For example, a general-purpose AI system might also fall within the category of high risk.


The AI Office, housed within the European Commission, will supervise AI systems based on a general-purpose AI model where the same provider supplies both the model and the system, and it will have the powers of a market surveillance authority. National authorities are responsible for the supervision and enforcement of all other AI systems, and member states will lay down rules on penalties and other enforcement measures, including warnings and nonmonetary measures. Fines reach up to 7% of global annual turnover or 35 million euros, whichever is higher, for prohibited AI practices; up to 3% of global annual turnover or 15 million euros for most other violations; and up to 1% of global annual turnover or 7.5 million euros for supplying incorrect information to authorities.
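
By way of illustration, each fine ceiling for an undertaking is the higher of a fixed amount and a percentage of worldwide annual turnover. The short Python sketch below shows that arithmetic; the tier labels and the max_fine helper are illustrative shorthand, not terms from the act.

    # Illustrative only: maximum administrative fines under the AI Act,
    # computed as the higher of a fixed amount and a share of worldwide
    # annual turnover. Tier labels are informal shorthand, not the act's terms.
    FINE_TIERS = {
        "prohibited_practices": (0.07, 35_000_000),   # 7% or EUR 35M
        "most_other_violations": (0.03, 15_000_000),  # 3% or EUR 15M
        "incorrect_information": (0.01, 7_500_000),   # 1% or EUR 7.5M
    }

    def max_fine(annual_turnover_eur: float, tier: str) -> float:
        """Return the maximum possible fine for an undertaking in a given tier."""
        pct, fixed = FINE_TIERS[tier]
        return max(pct * annual_turnover_eur, fixed)

    # A company with EUR 2B turnover faces up to EUR 140M for a
    # prohibited-practice violation, since 7% of turnover exceeds EUR 35M.
    print(max_fine(2_000_000_000, "prohibited_practices"))  # 140000000.0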

National authorities will be coordinated at the EU level via the EU AI Board to ensure the consistent application of the act throughout the EU. The AI Board will advise on the implementation of the AI Act, coordinate with national authorities and issue recommendations and opinions. An advisory forum and a scientific panel of experts will assist EU bodies. Notably, a significant number of member states have not yet designated competent authorities under the AI Act, and there is little information on how EU-level coordination will work in practice.


Wider regulatory environment

Recital 10 of the AI Act recalls that the act "does not seek to affect the application of" the EU GDPR and the ePrivacy Directive, including the tasks and powers of the authorities responsible for overseeing and enforcing those laws.

AI systems will remain subject to the GDPR to the extent they process personal data. No exception to the six legal bases for processing personal data under GDPR Article 6 has been introduced for the processing of data for AI training purposes. Pending guidelines from the European Data Protection Board to this effect, the GDPR legal bases are to be applied as before. The same applies to GDPR principles such as data minimization, privacy by design and privacy by default, which sit uneasily with organizations' business and regulatory needs to use large datasets for AI training. The interplay between GDPR principles and the AI Act will undoubtedly give rise to frictions in practice, though their scope and depth have yet to be worked through.

Article 22 of the GDPR, which grants data subjects the right not to be subject to decisions based solely on automated processing that produce legal or similarly significant effects, is complemented by Article 86 of the AI Act, which affords individuals a right to explanations of individual decision-making.

The member state data protection authorities (DPAs) remain the enforcement authorities for the GDPR when it comes to protecting personal data used in the context of AI, even if they are not designated as competent authorities under the AI Act. For European institutions, the competent authority is the European Data Protection Supervisor. In recent years, from the use of biometric recognition for surveillance purposes to developments in the field of large language models, DPAs have been active in pressing for stronger safeguards in these systems or banning them when the risks to fundamental rights were too high.

Article 27 of the AI Act, introduced by the European Parliament, requires the completion of a fundamental rights impact assessment (FRIA) for high-risk AI used by public bodies or by private entities providing public services, such as banks and insurance companies. Where such a deployer has already carried out a data protection impact assessment (DPIA) under the GDPR, the DPIA will form an integral part of the FRIA.



Next steps

The AI Act will enter into force 20 days after its publication in the Official Journal of the EU. That marks the starting point for its phased approach to implementation and enforcement, with the nearest-term obligations, such as the prohibitions on certain AI practices, applying after six months. Further guidance, rulemaking and enforcement by the competent national and pan-EU regulators and bodies will add depth, at times clarifying and at times complicating, to the field of AI governance in the EU. Beyond the AI Act, many expect the next European Commission to continue or initiate regulatory work addressing the tensions between AI and intellectual property, as well as AI in the workplace, AI in health and life sciences, and AI liability.
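
As a practical aid, the staggered application dates can be computed from the entry-into-force date once it is known. The Python sketch below encodes the act's general schedule of six, 12, 24 and 36 months; the entry-into-force date shown is a placeholder to be replaced once publication in the Official Journal occurs, and the add_months helper is illustrative.

    # Illustrative sketch of the AI Act's phased application schedule.
    # The entry-into-force date below is a placeholder; substitute the
    # actual date (20 days after publication in the Official Journal).
    from datetime import date

    def add_months(d: date, months: int) -> date:
        """Shift a date forward by whole calendar months."""
        month_index = d.month - 1 + months
        year = d.year + month_index // 12
        month = month_index % 12 + 1
        return date(year, month, min(d.day, 28))  # clamp day to stay valid

    ENTRY_INTO_FORCE = date(2024, 8, 1)  # placeholder, not an official date

    MILESTONES = {
        "prohibited practices apply": 6,
        "general-purpose AI obligations apply": 12,
        "most remaining provisions apply": 24,
        "certain high-risk obligations apply": 36,
    }

    for label, months in MILESTONES.items():
        print(f"{add_months(ENTRY_INTO_FORCE, months)}: {label}")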

This will unfold as implementation and enforcement of the data strategy initiatives (the Digital Services Act, Digital Markets Act, Data Act, Data Governance Act and data spaces like the European Health Data Space) hit full throttle, adding to the GDPR, intellectual property and product liability rules, to name a few. The complexity is already crystallizing in litigation and enforcement: many European DPAs are staking a claim to the AI space, as are competition and sectoral regulators. Organizations will have to factor this intricate web of requirements and supervision into their AI governance programs while also serving their business objectives.

Regardless of election results, the incoming EU leadership will likely continue to promote the EU model on the global stage, further projecting the "Brussels effect" of digital regulation.


Additional resources


Global AI Governance Law and Policy: Jurisdiction Overviews

The overview page for this series can be accessed here. The full series is additionally available here in PDF format.


