Global AI Governance Law and Policy: European Union
This article, part of a series co-sponsored by HCLTech, analyzes the laws, policies, and broader contextual history and developments relevant to AI governance in the European Union. The full series can be accessed here.
Published: October 2025
The EU has been a global first mover in adopting comprehensive digital legislation, for example, with the General Data Protection Regulation in 2016 and the EU Artificial Intelligence Act in 2024. While the GDPR arguably set a global privacy standard, creating what has been deemed the "Brussels Effect" and spurring comparable privacy laws in places like the U.K., Brazil, and many U.S. states, it is less certain the AI Act will guide the global approach to regulating AI, given the many different regulatory models emerging. Even acknowledging this heterogeneous global AI governance landscape, the AI Act remains hugely impactful.
The EU was the first jurisdiction globally to adopt a comprehensive legal regime governing the development and use of artificial intelligence. The AI Act entered into force 1 Aug. 2024 with a phased application and is the result of years of preparatory work.
The process of making this regulation a reality started in 2018, when the European Commission set out its vision for AI around three pillars: investment, socioeconomic changes and an appropriate ethical and legal framework to strengthen European values. These pillars still support the overall EU AI strategy, both within Europe and globally, as Brussels asserts its approach to shaping ethical and safe AI innovation.
The AI Act is also part of a broader and consistent approach to digital policy rulemaking by Brussels, cutting across data, infrastructure and online rights policies anchored in European values. It builds on an intricate regulatory and legislative landscape and by extension the compliance architectures that organizations have already put in place across data governance domains. In 2025 and beyond, the focus of AI Act implementation rests increasingly on its integration with the many regulatory tools with which it intersects.
The first deadline of the phased implementation of the EU AI Act took effect 2 Feb. 2025, when the prohibitions on unacceptable-risk AI and the AI literacy requirements became applicable. On 2 Aug. 2025, obligations began for providers of general-purpose AI models, member states were required to designate competent authorities, and the Commission launched its first review of the prohibited-AI list.
GPAI Code of Practice
The European Commission published the final version of the General-Purpose AI Code of Practice in July 2025, ahead of the formal deadline though after its earlier self-imposed deadline. The voluntary tool is meant to help providers of GPAI models demonstrate compliance with obligations in Articles 53 and 55, with a focus on transparency, copyright, and safety and security, provided they sign on to the code in its entirety.
The code took months of drafting, leveraging the expertise of over 1,000 independent representatives from industry, civil society, academia and member states. It went through several iterations to clarify and consolidate a complex table of contents.
A week after it was finalized, the code was officially opened for signatures, with several companies — such as Aleph Alpha, Cohere, IBM, MistralAI and OpenAI — joining in shortly after. The Commission announced that it will narrowly focus its enforcement against signatories on the demonstration of their compliance, because adherence to the code entails increased transparency on the part of providers.
Separately, in September 2025, the European Commission launched a consultation for the development of another code of practice, this one on AI transparency obligations, which become applicable 2 Aug. 2026. The Commission will also rely on stakeholders to develop this code of practice, though it has not confirmed an expected publication date.
Enforcement and regulators overview
EU member states were tasked with designating or establishing national competent authorities — at least one market surveillance and one notifying authority per country — by 2 Aug. 2025 and identifying which one of them would act as a single point of contact. As of mid-September, about a third of member states had met the deadline.
Looking at structures that have been either formalized or announced leading up to the 2 Aug. deadline, no single governance model has emerged.
Member-state designs vary from decentralized, sector-inflected networks in some countries to more centralized models in others. Communications and cybersecurity regulators often anchor coordination, while data protection authorities are key participants but not always leads. Spain and Hungary are setting up new AI-specific authorities: Hungary will constitute a new market-surveillance authority under the act, while Spain's AI Supervisory Agency is expected to act as the country's single point of contact.
The diversity of this pan-European architecture brings organizations and regulators alike into uncharted territory. This will have strong implications for both groups of stakeholders. Organizations should map their lead regulators and establish early engagement routes to build these relationships; agencies and regulators must find common language and interpretation of the law, and ways to cooperate and coordinate their action.
To facilitate the EU AI Act’s implementation, the European Commission has been tasked with developing guidelines on various provisions of the act. Over the past year, it released guidance on topics such as prohibited AI practices and AI system definition.
The Commission still has a long list of deliverables to produce, including the interplay of the EU AI Act with other EU legislation. It is drafting guidelines on classifying high-risk AI systems, informed by input on both practical examples and specific high-risk AI issues, and on responsibilities along the AI value chain.
Launched in 2024 as a voluntary framework by the European Commission, the AI Pact was meant to help organizations kick-start their compliance journey ahead of the official application of the AI Act. The pact is built on two pillars: one focused on knowledge sharing among all interested stakeholders, and the other directed at providers and deployers specifically, allowing them to share practices and demonstrate voluntary commitments in preparation for the early implementation of the AI Act. Since its launch, the AI Office has hosted a series of AI Pact webinars to explore topics such as the architecture of the AI Act, AI literacy obligations, and the GPAI Code of Practice.
In addition, the EU AI Office launched an AI Act Service Desk as a central initiative to help stakeholders navigate the AI Act’s requirements. As part of the Service Desk, the AI Act Single Information Platform provides the AI Act Explorer, an online tool to navigate the AI Act legal text; a compliance checker to assist in evaluating whether AI systems and general-purpose AI models meet the requirements set by the AI Act; and a portal to national resources.
In April 2025, the European Commission launched the AI Continent Action Plan to transform Europe into a leading AI continent. This initiative defines the EU's current approach to AI and includes several actions to boost trustworthy and human-centric AI development and, as a result, enhance the EU's innovation and competitiveness while ensuring that democratic values and fundamental rights are safeguarded. These actions fall under five strategic areas: computing infrastructure, data, skills, development of algorithms and adoption, and simplification of rules.
Computing infrastructure
The Commission is scaling computational power through AI Factories and planned AI Gigafactories — EuroHPC‑linked supercomputers, data centers and talent pipelines open to startups, SMEs, research and public users. At least 13 AI Factories are slated to be operational by 2026, with up to five AI gigafactories to follow, all offering access to European users across industry, research and the public sector.
A proposed EU Cloud and AI Development Act aims to triple cloud capacity over five to seven years and may add security and data-localization requirements for critical workloads. The proposal is currently expected in March 2026.
Data
A European Data Union Strategy would simplify rules, expand access via Common European Data Spaces and establish Data Labs within AI Factories; a Commission Communication is due this year.
Skills
The EU will develop and attract AI talent through an AI Skills Academy, European Digital Innovation Hubs and links to (giga)factories and research programs.
Development of algorithms and adoption
The Apply AI Strategy, published by the Commission 8 Oct. 2025, aims to accelerate AI development, adoption and use across the EU's strategic and public sectors, such as health care, pharmaceuticals, manufacturing, construction and defense. It promotes European AI solutions and encourages organizations to adopt an "AI first" policy.
Simplify rules
Streamlining documentation, record keeping and incident- and information-sharing obligations across digital policies is being considered to ease AI Act implementation without weakening core safeguards.
Funding and Investment
The EU will need to pool enough funding to support these ambitions. It launched the InvestAI initiative at the beginning of 2025 to mobilize 200 billion euros of investment in AI. The funding is planned to come from a mix of sources, including existing EU programs such as Horizon Europe and Digital Europe, as well as private investment.
Agentic AI systems, which are autonomous and adaptive, raise novel risks but fall squarely within the AI Act's technology-neutral, risk-based approach.
High-risk agentic uses, such as in employment, education and law enforcement, must meet Chapter III obligations, including risk management, data governance, documentation, quality management and post-market monitoring, and must ensure effective human oversight under Article 14. Unacceptable-risk behaviors, such as manipulation and exploitation of vulnerabilities, are prohibited under Article 5. Agentic AI that also qualifies as GPAI entails additional provider obligations, such as technical documentation and cooperation with authorities.
Because agentic systems can self-update, risk levels may evolve, underscoring the need for continuous monitoring and in-life change control.
Many agentic uses will also trigger the GDPR, including automated decision‑making limits and core data‑protection principles, and, for connected devices, the Data Act’s access and interoperability rules.
The bottom line is that many EU digital laws are technology-neutral and designed to adapt to future technological innovation. Although agentic AI may not be mentioned by name within a law, the scope of the law may still capture this new application of machine learning.
The AI Act and broader European AI initiatives fold into a comprehensive policy agenda that cuts across digital responsibility domains. Several policy areas could be subject to legislative changes with implications for the AI legislative framework; two in particular stand out.
AI liability remains a question mark on the Commission's agenda. The Commission proposed a dedicated directive in 2022 but withdrew it early this year due to a lack of consensus on the general direction to take. It remains to be seen whether and how this topic will be addressed at the EU level.
The EU copyright framework is also in question, prompting discussion of whether it needs review and updating to reflect technological developments, particularly in AI.
The EU AI Act's implementation requires the development of standards that will translate the act's legal requirements into technical requirements. In April 2025, CEN-CENELEC, the organization tasked with developing these standards, noted possible delays in their delivery.
Over the summer, public debate mounted, with some stakeholders calling to postpone the AI Act's remaining implementation deadlines. European Commission Executive Vice-President Henna Virkkunen raised in June the possibility of postponing "some parts of the AI Act in the coming months," though she firmly rejected any suggestion that implementation itself was in question.
This pressure comes at a time when AI is caught in trade and political friction between Brussels and Washington. The complex global geopolitics surrounding the trans-Atlantic relationship at this moment have also prompted the EU to reaffirm both its sovereignty to design its own rules as well as its ambitions to promote transparent, accountable, and human-centric AI.
Articles in this series, co-sponsored by HCLTech, dive into the laws, policies, broader contextual history and developments relevant to AI governance across different jurisdictions. The selected jurisdictions are a small but important snapshot of distinct approaches to AI governance regulation in key global markets.
Each article provides a breakdown of the key sources and instruments that govern the strategic, technological and compliance landscape for AI governance in the jurisdiction through voluntary frameworks, sectoral initiatives or comprehensive legislative approaches.
Global AI Governance Law and Policy
Jurisdiction Overviews 2025
The overview page for this series can be accessed here.
- Australia
- Canada
- China
- European Union
- India
- Japan
- Singapore
- South Korea
- United Arab Emirates
- United Kingdom
- United States
- Supplementary article: AI governance in the agentic era