Global AI Governance Law and Policy: EU
This article is part of a five-part series co-sponsored by OneTrust. The full series can be accessed here.
Published: May 2024
The EU has been regulating the digital sphere since the early 2000s through legislation on fundamental and other rights, such as data protection and intellectual property; infrastructure, through security, public procurement and resilience rules; technology and software, such as RFID, cloud computing and cybersecurity; and data-focused legislation, including data access, data sharing and data governance. The European Commission "is determined to make this Europe's 'Digital Decade'," with regulation a core component of that ambition.
In 2018, the European Commission set out its vision for AI around three pillars: investment, socioeconomic changes and an appropriate ethical and legal framework to strengthen European values. The Commission established a High-Level Expert Group on AI of 52 members from civil society, industry and academia to provide advice on its AI strategy.
In April 2019, the HLEG published its ethics guidelines for trustworthy AI, which put forward a human-centric approach on AI and identified seven key requirements that AI systems should meet to be considered trustworthy.
When European Commission President Ursula von der Leyen took office in December 2019, she pledged to "put forward legislation for a coordinated European approach on the human and ethical implications of Artificial Intelligence" in her first 100 days. In press remarks from February 2020, she mentioned AI's potential to improve Europeans' daily lives and its role in reaching Europe's climate neutrality goals by 2050. She also set a clear objective of attracting more than 20 billion euros per year for the next decade to defend Europe's position on AI.
That announcement coincided with a Commission white paper that set out the policy options for achieving an approach that promotes the uptake of AI while also addressing the risks associated with certain uses of AI.
The AI Act, first proposed by the European Commission in April 2021, was then intensely negotiated and amended by the Commission, Parliament and Council. The agreed text will soon enter into force, combining a human-centric philosophy with a product safety approach. The AI Act will be a keystone regulation for the development and deployment of AI in the EU and around the world. The requirements set forth in the act, combined with those that will follow from further guidance and implementation, plus the complex intersections of the act itself with the EU's broader digital governance regulatory framework, make for a deep, dynamic and exacting regulatory ecosystem for AI governance in the EU.
Regulatory approach
The AI Act is a regulation, meaning it is directly applicable in all EU member states, and it seeks to harmonize rules on AI across the bloc. Compared to the EU General Data Protection Regulation, which was created to protect individuals' privacy and data protection rights, the initial proposal for an AI Act was born in the context of product safety, focusing on ensuring AI products and services on the EU market are safe. This manifested in proposed principles and requirements that are well established in the product safety context, such as technical specifications, market monitoring and conformity assessments. Many of the AI Act's now-final requirements that also protect individual rights originate from the European Parliament's positions and proposals during the trilogue negotiations with the European Commission and Council.
The AI Act is framed around four risk categories of AI systems. Each category prescribes risk-based measures that relevant actors in the AI life cycle must implement. During the trilogue negotiations on the draft AI Act, requirements were added for general-purpose AI, effectively creating a fifth category that, importantly, does not preclude the application of requirements attaching to the other risk-based categories. For example, a general-purpose AI system might also fall within the high-risk category.
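Because these categories operate cumulatively rather than exclusively, obligations can stack. The sketch below is purely illustrative, using simplified boolean flags in place of the act's actual legal tests, to show how a single system can trigger several regimes at once:

```python
# Illustrative only: the AI Act's categories are legal tests, not code.
# The flags and regime names below are simplifications for exposition.
from dataclasses import dataclass

@dataclass
class AISystem:
    prohibited_practice: bool = False   # e.g., social scoring
    high_risk_use: bool = False         # e.g., an Annex III use case
    limited_risk_feature: bool = False  # e.g., a chatbot or deepfake output
    general_purpose: bool = False       # a general-purpose AI model

def applicable_regimes(system: AISystem) -> set[str]:
    regimes: set[str] = set()
    if system.prohibited_practice:
        regimes.add("prohibited")  # cannot be placed on the EU market
    if system.high_risk_use:
        regimes.add("high-risk obligations")
    if system.limited_risk_feature:
        regimes.add("transparency obligations")
    if system.general_purpose:
        regimes.add("general-purpose AI obligations")
    if not regimes:
        regimes.add("minimal risk: no additional measures")
    return regimes

# A general-purpose model deployed for CV screening, a high-risk use,
# picks up both sets of obligations:
print(applicable_regimes(AISystem(general_purpose=True, high_risk_use=True)))
```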
Prohibited AI systems
Prohibited AI systems include:
- Real-time remote biometric identification in public spaces by law enforcement, with some exceptions.
- Social scoring.
- Emotion recognition in schools and workplaces.
- AI systems that deploy subliminal techniques beyond a person's consciousness, or purposefully manipulative or deceptive techniques.
- AI systems that exploit the vulnerabilities of a natural person or a specific group of persons due to their age, disability, or a specific social or economic situation.
- AI systems for making risk assessments of natural persons committing a crime.
- Untargeted scraping of facial images from the internet or CCTV footage.
- Biometric systems that categorize individual natural persons based on their biometric data to deduce or infer their race, political opinions, trade union membership, religious or philosophical beliefs, sex life or sexual orientation.
High-risk cases
High-risk cases include the use of AI for remote biometric identification (other than real-time identification by law enforcement, which is prohibited), and for emotion recognition and biometric categorization based on sensitive or protected attributes or characteristics. They also include AI used in critical infrastructure, in the selection and evaluation of workers, in credit scoring, by law enforcement agencies, in migration and border control, and by judicial authorities. Annex III of the AI Act sets out the list of high-risk applications. An Annex III application may fall outside the high-risk classification if it does not pose a significant risk to fundamental rights, health or safety and does not materially influence decisions that would otherwise have been made by a human being. These exemptions are intended for cases in which AI performs ancillary tasks or improves the outcome of a human-made action. To benefit from the exemption, the provider must document why the AI system should not be considered high risk.
Providers of high-risk AI are therefore required to have a risk-management system in place throughout the AI system's life cycle. Risks to health, safety and fundamental rights must first be identified. Risks that cannot be eliminated must be mitigated and managed; for example, deployers will have to be informed and, where appropriate, trained in the use of the system. Particular attention must be paid to risks affecting children and vulnerable persons.
Training data must be as representative as possible of those potentially affected by negative consequences associated with the AI system. Sensitive data may only be used when strictly necessary, for example to detect and correct bias, and only when the same result cannot be achieved using synthetic or anonymized data. The handling of sensitive data is subject to heightened requirements, including a prohibition on transferring it to third parties, and it must be deleted once the purpose has been achieved.
Limited risk
For limited-risk AI systems, such as chatbots and deepfakes, providers have fewer obligations, chiefly the requirement to be transparent by disclosing that AI has been used.
Minimal risk
No additional measures are needed in minimal-risk cases, such as spam filters.
General-purpose AI
There is an obligation to keep records for general-purpose AI, including records on any copyrighted data used as part of the training data, but general-purpose AI will not automatically be considered high risk; it is only treated as such if it also falls within the relevant high-risk categories.
General-purpose AI will be considered to pose a systemic risk if it carries a specific risk that has a significant impact on the EU market due to its scale, or actual or reasonably foreseeable adverse effects on public health, security or fundamental rights. General-purpose AI will also be presumed to pose a systemic risk if the cumulative amount of computation used for its training exceeds 10^25 floating-point operations, or if it is so designated by the EU AI Office, which will be responsible for such assessments.
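To give a rough sense of what the 10^25 threshold means, the sketch below applies the widely cited 6 × parameters × training-tokens approximation for the training compute of dense models. The heuristic and the example model are illustrative assumptions, not anything prescribed by the act:

```python
# Back-of-the-envelope check against the AI Act's 10^25 FLOP
# systemic-risk presumption for general-purpose AI models.
# The 6 * N * D rule of thumb approximates dense-transformer training
# compute; it is a community heuristic, not a formula from the act.

SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25

def estimated_training_flops(parameters: float, training_tokens: float) -> float:
    """Approximate total training compute as 6 * parameters * tokens."""
    return 6 * parameters * training_tokens

def presumed_systemic_risk(parameters: float, training_tokens: float) -> bool:
    return estimated_training_flops(parameters, training_tokens) >= SYSTEMIC_RISK_THRESHOLD_FLOPS

# A hypothetical 70B-parameter model trained on 15 trillion tokens:
# 6 * 7e10 * 1.5e13 = 6.3e24 FLOPs, just under the threshold.
print(presumed_systemic_risk(7e10, 1.5e13))  # False
```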
Providers of such models will be subject to a number of additional obligations, including assessing and mitigating systemic risks, and documenting and reporting serious incidents. They will also have to ensure an adequate level of cybersecurity protection for the model and its physical infrastructure.
The AI Office, housed within the European Commission, will supervise AI systems based on a general-purpose AI model in which the same provider provides both the model and the system. It will have the powers of a market surveillance authority. National authorities are responsible for the supervision and enforcement of all other AI systems. They will lay down rules on penalties and other enforcement measures, including warnings and nonmonetary measures. Penalties are capped at the higher of 7% of global annual turnover or 35 million euros for prohibited AI violations, the higher of 3% or 15 million euros for most other violations, and the higher of 1% or 7.5 million euros for supplying incorrect information to authorities.
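As a simple arithmetic illustration of how these ceilings apply to an undertaking, the sketch below takes the higher of the fixed amount and the turnover-based percentage for each tier. The figures mirror those cited above; the example turnover is hypothetical and this is not legal advice:

```python
# Fine ceilings by violation tier: (fixed amount in EUR, share of
# worldwide annual turnover). The applicable cap is whichever is higher.
FINE_TIERS = {
    "prohibited_practice": (35_000_000, 0.07),
    "other_violation": (15_000_000, 0.03),
    "incorrect_information": (7_500_000, 0.01),
}

def max_fine(tier: str, worldwide_turnover_eur: float) -> float:
    """Return the ceiling, not the actual fine, for a given tier."""
    fixed, pct = FINE_TIERS[tier]
    return max(fixed, pct * worldwide_turnover_eur)

# A hypothetical undertaking with 2 billion euros in worldwide turnover
# committing a prohibited-practice violation:
print(max_fine("prohibited_practice", 2_000_000_000))  # 140000000.0
```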
National authorities will be coordinated at an EU level via the EU AI Board to ensure consistent application throughout the EU. The AI Board will advise on the implementation of the AI Act, coordinate with national authorities and issue recommendations and opinions. An advisory forum and a scientific panel of experts will assist EU bodies. Notably, a significant number of member states have not yet designated regulators as competent authorities under the AI Act, and there is little information on how EU-level coordination will work in practice.
Wider regulatory environment
Recital 10 of the AI Act recalls how the AI Act "does not seek to affect the application of" the EU GDPR and the ePrivacy Directive, including the tasks and powers of the relevant authorities tasked with overseeing and enforcing those laws.
AI systems will remain subject to the GDPR to the extent they process personal data. No exception to the six legal bases for processing personal data under GDPR Article 6 has been introduced for the processing of data for AI training purposes. Pending guidelines from the EDPB to this effect, the GDPR legal bases are to be applied as before. The same applies to GDPR principles such as data minimization, privacy by design and privacy by default, which will likely conflict with organizations' business and regulatory needs to use large datasets for AI training. The interplay between GDPR principles and the AI Act in practice will undoubtedly give rise to frictions, though their scope and depth have yet to be worked through.
Article 22 of the GDPR, which grants data subjects the right not to be subject to decisions based solely on automated processing that have significant consequences, is complemented by Article 86 of the AI Act, which affords individuals the right to explanations of individual decision-making.
The member state data protection authorities remain the enforcement authorities for the GDPR when it comes to protecting personal data used in the context of AI, even if they are not designated as the competent authorities under the AI Act. For European institutions, the competent authority is the European Data Protection Supervisor. In recent years, from the use of biometric recognition for surveillance purposes to developments in the field of large language models, DPAs have been active in raising the security level of these systems or banning them when the risks to fundamental rights were too high.
Article 27 of the AI Act, introduced by the European Parliament, requires the completion of a fundamental rights impact assessment for high-risk AI used by public entities or by private entities providing public services, such as banks and insurance companies. If a data protection impact assessment already exists for these deployers under the GDPR, the DPIA will form an integral part of the FRIA.
Copyright
The issue of copyright was briefly mentioned in the initial proposal of the AI Act, but following the emergence of new general-purpose AI applications on the market, rights holders demanded and obtained amendments intended to protect them.
Article 53 of the AI Act contains an explicit reference to Article 4(3) of EU Copyright Directive 2019/790, which permits the extraction of data from works to which one has lawful access for text and data mining purposes, unless rights holders have expressly reserved that right. This opt-out, which in practice concerns commercial actors, means that if a rights holder objects, the provider must obtain the right by means of a license. No such opt-out applies when data mining is done by research organizations under Article 3 of the directive.
The AI Act also requires providers to publish a sufficiently detailed summary of the content used to train their general-purpose AI models for transparency purposes. To facilitate this, the AI Office will provide a template that allows providers to present this information uniformly. This is intended to help rights holders verify that their works have not been used illegitimately.
Digital Services Act
For a period during the draft AI Act trilogue negotiations, social media recommendation systems qualified as high-risk AI applications, although that classification was later removed. However, synthetic content produced with AI for social media and other platforms will have to be watermarked. Digital Services Act Article 35 notes the application of watermarks could also be recommended for AI-generated content created outside a platform and uploaded to it afterward, to mitigate systemic risks, especially in view of election periods.
Product Liability Directive
The AI Act does not regulate liability for damages resulting from AI; it only addresses violations of the regulation's provisions concerning the safety of products and services offered, with administrative enforcement by national authorities.
On 14 Dec. 2023, the EU legislative institutions reached a provisional agreement on the Product Liability Directive, updating the EU's 40-year-old regulatory framework with several important proposals relevant for AI governance, including:
- Expanding the definition of "product" to encompass digital manufacturing files and software. However, free and open-source software developed or supplied outside commercial activities falls outside the directive's scope.
- Broadening the definition of damage to include medically recognized harm to psychological health, along with the destruction or irreversible corruption of data.
- Extending the right to claim compensation to cover nonmaterial losses resulting from the damage.
- Easing the burden of proof, which remains on the injured party.
- Extending the liability period to 25 years in exceptional cases when symptoms take time to manifest.
- Introducing a cascade of attributable liability for economic operators.
The Product Liability Directive outlines various scenarios where a product is presumed to be defective or when a causal link between a defect and damage is presumed to exist.
AI Liability Directive
The proposal for a new Product Liability Directive was published alongside a specific proposal on AI liability. However, unlike the Product Liability Directive, no political agreement was finalized on the AI Liability Directive before the European elections in June 2024. It remains to be seen whether the AI Liability Directive will continue in its legislative procedure, be rewritten or be abandoned during the forthcoming 2024-29 mandate.
The EU Platform Workers Directive
According to the new Platform Workers Directive, platform workers are protected from dismissal solely based on decisions made by algorithms or automated systems. Human oversight is mandated for any decisions that impact the working conditions of individuals.
Platforms are prohibited from processing specific personal data of their workers, such as private communications with colleagues or personal beliefs. Additionally, platforms must inform workers about the utilization of algorithms and automated systems in various aspects including recruitment, working conditions and earnings.
Use of data
Concerning the use of data for training purposes, companies will need to consult the recent regulations that facilitate the circulation, transfer and portability of data, including personal data with due safeguards: the Data Governance Act, the Data Act and the European Health Data Space.
Cybersecurity
The European Union Agency for Cybersecurity is working on guidelines for emerging technologies.
The AI Act will enter into force 20 days after its publication in the Official Journal of the EU. That marks the starting point for its phased approach to implementation and enforcement, with the nearest-term obligations, such as the bans on prohibited uses, applying six months after entry into force. Further guidance, rulemaking and enforcement by national and pan-EU regulators and bodies will add more depth, both clarifying and complicating, to the field of AI governance in the EU. Beyond the AI Act, many expect the next European Commission to continue or initiate regulatory work that seeks to address the tensions between AI and intellectual property, as well as the issues of AI in the workplace, AI in health and life sciences, and AI liability.
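As an indicative illustration of that phased schedule, the sketch below computes application dates from an assumed entry-into-force date. The milestone offsets reflect the act's commonly summarized phase-in; the authoritative dates are those flowing from the Official Journal text:

```python
from datetime import date

def add_months(d: date, months: int) -> date:
    """Shift a date forward by whole months (day clamped to 28 for safety)."""
    years, month_index = divmod(d.month - 1 + months, 12)
    return date(d.year + years, month_index + 1, min(d.day, 28))

ENTRY_INTO_FORCE = date(2024, 8, 1)  # assumed for illustration only

MILESTONES = {  # months after entry into force
    "Prohibitions apply": 6,
    "General-purpose AI obligations apply": 12,
    "Most remaining provisions apply": 24,
    "Certain high-risk rules tied to EU product legislation apply": 36,
}

for label, offset in MILESTONES.items():
    print(f"{add_months(ENTRY_INTO_FORCE, offset)}: {label}")
```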
This will unfold as implementation and enforcement of the data strategy initiatives, namely the Digital Services Act, Digital Markets Act, Data Act, Data Governance Act and data spaces like the European Health Data Space, hit full throttle, adding to the GDPR, intellectual property and product liability rules, to name a few. The complexity is already crystallizing in litigation and enforcement. Many European DPAs are claiming the AI space, as are competition and sectoral regulators. Organizations will have to factor in this intricate web of requirements and supervision as they build their AI governance programs, while also serving their business objectives.
Regardless of election results, the incoming EU leadership will likely continue to promote the EU model on the global stage, further projecting the "Brussels effect" of digital regulation.
Additional resources
General AI resources
Privacy and AI governance resources
Global AI Governance Law and Policy: Jurisdiction Overviews
The overview page for this series can be accessed here. The full series is additionally available here in PDF format.