
Global AI Governance Law and Policy: United Kingdom

This article, part of a series co-sponsored by HCLTech, analyzes the laws, policies, and broader contextual history and developments relevant to AI governance in the United Kingdom. The full series can be accessed here.


Published: October 2025





Though the U.K. does not have any legislation specific to the regulation or governance of artificial intelligence, it does have an AI Security Institute and a variety of relevant principles-based soft law and policy initiatives, as well as binding regulations in other domains like data protection and online safety. The AI Security Institute, which started life as the AI Safety Institute, launched at the world's first global AI Safety Summit, held in the U.K. in November 2023. In February 2025, AISI changed its name to reflect a shift in focus toward serious AI risks with security implications, such as the use of AI to develop weapons, rather than safety issues and risks, e.g., bias and discrimination.

The U.K. has taken a decentralized, principles-based approach, with cross-sector regulators expected to set binding guidelines and enforce the core principles laid down by the U.K. government. The development, integration and responsible governance of AI are strategic priorities across U.K. policymaking and regulatory capacity building, with a focus on enabling existing regulators to enforce those core principles. An AI bill was announced in the King's Speech in July 2024, but it would only regulate the most powerful AI models. The timing and scope of such a bill have since changed, with no formal bill expected until the next King's Speech, reportedly in May 2026.


History and context

The U.K. has long played an important role in the development of AI. In the 1950s and '60s, enthusiasm and expectation about the potential of AI led to the formation of several major AI research centers in the U.K., at the universities of Edinburgh, Sussex, Essex and Cambridge. Even today, the U.K. is regarded as a center of expertise and excellence in AI research and innovation.

Fast forward to September 2021, when the U.K. government's National AI Strategy announced a 10-year plan "to make Britain a global AI superpower." That plan set the stage for ongoing consideration of whether and how to regulate AI, noting, with emphasis, that AI is not currently unregulated, by virtue of other applicable laws. Since 2018, the prevailing view in U.K. law and policymaking circles has been that "blanket AI-specific regulation, at this stage, would be inappropriate" and "existing sector-specific regulators are best placed to consider the impact on their sector of any subsequent regulation which may be needed."

A consequence of the U.K. leaving the EU is that the EU AI Act does not directly apply in the U.K. as it does in the remaining 27 EU member states. However, the act has extraterritorial scope that will certainly impact U.K. businesses. Indeed, the EU AI Act has accelerated and amplified independent U.K. policy development on whether, how and why AI should or could be regulated further, and in ways more targeted than the current approach of applying existing laws to AI.

The U.K. continues to forge its own path, focusing instead on flexibility, innovation and sector-specific regulatory expertise when it comes to AI regulation. The aim is to take a proportionate approach to regulation, with the government tracking AI development and only legislating where it deems this necessary.

In Tortoise Media's September 2024 Global AI Index, which benchmarks nations on their levels of investment, innovation and implementation of AI, the U.K. maintained its fourth-place ranking, below the U.S., China and Singapore. The U.K. is "strong on commercial AI" and research, but other countries are catching up fast; France, currently in fifth place, "now outperforms [the U.K.] on open-source [large language model] development and in other key areas including public spending and computing." This competitive pressure is likely one of the many reasons the U.K. agreed to the Tech Prosperity Deal with the U.S., securing an investment of over USD41 billion from U.S. businesses in U.K. AI infrastructure.


Approach to regulation

As general context, there is no current or draft U.K. legislation that specifically governs AI, except for a Private Member's Bill in the House of Lords, and such bills rarely become law. Instead, the U.K. government has relied on the existing body of legislation, which does not specifically regulate AI but undoubtedly applies to its development and deployment. For instance, the U.K. General Data Protection Regulation and the Data Protection Act 2018 apply to AI. The government has also focused its efforts on soft law initiatives, e.g., cross-sector regulatory guidelines, to adopt an incremental, pro-innovation approach to AI regulation.

As already mentioned, an AI bill was announced in July 2024. However, due to the protracted legislative passage of the Data (Use and Access) Act 2025 — which was held up by unsuccessful attempts to include provisions relating to the use of copyright material to train AI — new assurances were sought on AI and copyright. The act includes a requirement for the secretary of state to report on the use of copyright works in the development of AI systems. The secretary of state must also report on the economic impact of the policy options proposed in the copyright and AI consultation paper by 19 March 2026. Any AI bill, expected in the second half of 2026 at the earliest, will likely deal with copyright matters as well as the most powerful AI models.

White paper on AI regulation and consultation response

In March 2023, the former U.K. Conservative party government published its white paper "A pro-innovation approach to AI regulation" for consultation, setting out policy proposals regarding future regulation.

Notably, the document does not define AI or an AI system but explains that the concepts are characterized by adaptivity and autonomy, aligning with commonly accepted definitions of an AI system, such as those used by the Organisation for Economic Co-operation and Development and in the EU AI Act. It goes on to state that the U.K.'s AI regulatory framework should be based on the following five cross-sectoral nonbinding principles: safety, security and robustness; appropriate transparency and explainability; fairness; accountability and governance; and contestability and redress. Finally, the white paper does not propose the creation of a new AI regulator; instead, it advocates for the empowerment of existing regulators.

In February 2024, the government published its response to the white paper's consultation, which largely reaffirmed its prior proposals with one important caveat. The response indicated future legislation is likely to "address potential AI-related harms, ensure public safety, and let us realize the transformative opportunities that the technology offers." However, the government will only legislate when it is "confident that it is the right thing to do."

AI Opportunities Action Plan

In January 2025, the U.K. launched its AI Opportunities Action Plan, a strategic initiative aimed at leveraging the transformative capabilities of AI across multiple sectors, with the objective of establishing the U.K. as an AI superpower.

The plan is structured around three pillars: laying the foundations to enable AI and investing in AI infrastructure; promoting the adoption of AI, particularly across the public sector, positioning it as the "largest customer and as a market shaper"; and securing the future of homegrown AI by positioning the U.K. as "national champions at the frontier of economically and strategically important capabilities."

However, the plan says very little about regulation; instead, it focuses on investment and infrastructure to encourage innovation and support the growth of AI. As mentioned above, the recently announced Tech Prosperity Deal with the U.S. will fund some of the proposed investments in U.K. infrastructure.

More recent developments indicate the U.K. is moving away from the EU and its legislative approach and toward the U.S. and an innovation-first approach with limited safeguards. With the economic rewards of AI at stake, this is not entirely surprising. That said, the U.K., U.S. and EU are all signatories to the Council of Europe Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law, meaning equality must be respected and discrimination prohibited throughout the AI lifecycle. Developments over the next 12 months merit close attention.


UK regulator guidelines

U.K. regulators have continued to produce guidelines related to their own sectors. There has also been some cross-functional work on AI issues, such as with the Digital Regulation Cooperation Forum, which consists of the Information Commissioner's Office, Competition and Markets Authority, Ofcom and the Financial Conduct Authority. The DRCF is responsible for ensuring greater regulatory cooperation on online issues.

Data protection

The ICO has been actively regulating how data is used in connection with AI for several years, updating its AI and data protection guidance in March 2023. In November 2024, the ICO published its audit outcomes report and recommendations for providers and developers of AI-powered sourcing, screening and selection tools used in recruitment. Following its consultation series on generative AI and data protection, the ICO published its outcomes report in December 2024.

In June 2025, the ICO announced its AI and biometrics strategy to ensure AI and biometric technologies are developed and deployed lawfully, responsibly and in ways that maintain public trust. The ICO recognizes the significant opportunities for innovation such technologies present but emphasizes they must be used in ways that protect personal data and uphold individual rights. A number of guidelines on key topics are expected as part of this strategy.

Following the enactment of the Data (Use and Access) Act 2025, there are also a number of ongoing consultations and revisions to guidance. Of particular interest is the ICO's automated decision-making and profiling guidance, the consultation for which is expected to launch in fall 2025, with final guidance expected in spring 2026.

Online safety

In March 2025, Ofcom released its guidance on applying the Online Safety Act to generative AI and chatbots in the form of an open letter.

Competition and markets

The CMA and ICO issued a joint statement regarding foundation model approaches in March 2025. The joint statement expressed the organizations' ongoing commitment to collaborate on various initiatives that enhance user autonomy and control, ensure fair access to data, and distribute accountability appropriately across the foundation model supply chain.


Other UK AI governmental/parliamentary initiatives

Despite the U.K. government's continued presence at global AI summits, the focus has moved away from AI safety and toward "strengthening international action towards artificial intelligence." While safety remains on the agenda, it is no longer the primary focus of these summits. The U.K. government opted not to sign the official agreement produced at the February 2025 Paris AI Action Summit, expressing concerns around "global governance" and national security. The policy direction adopted at the next summit, scheduled to be held in India in early 2026, will be of significant interest.

It is also worth noting that while only certain provisions of the EU AI Act currently apply in Northern Ireland, the European Commission has proposed to add the EU AI Act to the Windsor Framework, making it directly applicable as a whole to Northern Ireland. This process is ongoing, and it will be important to keep track of developments.

Separately, Conservative peer Lord Holmes of Richmond reintroduced a Private Member's Bill, the Artificial Intelligence (Regulation) Bill, in March 2025. The bill is identical to the version introduced in the previous parliamentary term. This compact document advocates for the formation of a standalone AI regulator and a new AI officer role for organizations that develop, deploy or use AI. However, it is rare for Private Members' Bills to be passed into law; they are often intended instead to provide constructive policy recommendations or apply legislative pressure.

In January 2025, the Department of Science, Innovation and Technology published a voluntary Code of Practice for the Cyber Security of AI that sets the "baseline cyber security principles to help secure AI systems and the organizations which develop and deploy them," protecting them from cyber risks arising from "data poisoning, model obfuscation, indirect prompt injection and operational differences associated with data management." This was accompanied by a practical implementation guide. The U.K. government plans to submit the code and guide to the European Telecommunications Standards Institute to "be used as the basis for a new global standard… and accompanying implementation guide."

The Government Digital Service, which sits within DSIT, is also establishing a new Responsible AI Advisory Panel to help shape the U.K.'s approach to "building responsible AI in the public sector." The panel aims to ensure safe, ethical and responsible AI development by bringing together AI expertise from a wide range of organizations with a diverse skillset.

The U.K. government also launched an AI playbook in February 2025 to offer guidance and support to government departments and public sector organizations to safely, effectively, and responsibly harness the power of a wider range of AI technologies.


Wider regulatory environment

While the U.K. does not have legislation specifically governing AI, various broader statutes and case law apply in this area.



Full series overview

Articles in this series, co-sponsored by HCLTech, dive into the laws, policies, broader contextual history and developments relevant to AI governance across different jurisdictions. The selected jurisdictions are a small but important snapshot of distinct approaches to AI governance regulation in key global markets.

Each article provides a breakdown of the key sources and instruments that govern the strategic, technological and compliance landscape for AI governance in the jurisdiction through voluntary frameworks, sectoral initiatives or comprehensive legislative approaches.

Global AI Governance Law and Policy

Jurisdiction Overviews 2025

The overview page for this series can be accessed here.




