Global AI Governance Law and Policy: United Kingdom
This article, part of a series co-sponsored by HCLTech, analyzes the laws, policies, and broader contextual history and developments relevant to AI governance in the United Kingdom. The full series can be accessed here.
Published: October 2025
Though the U.K. does not have any legislation specific to the regulation or governance of artificial intelligence, it does have an AI Security Institute and a variety of relevant principles-based soft law and policy initiatives, as well as binding regulations in other domains like data protection and online safety. The AI Security Institute, which started life as the AI Safety Institute, launched at the world's first global AI Safety Summit, held in the U.K. in November 2023. In February 2025, however, the AISI changed its name to reflect a shift in focus toward serious AI risks with security implications, such as the use of AI to develop weapons, rather than safety issues like bias and discrimination.
The U.K. has taken a decentralized, principles-based approach, with cross-sector regulators expected to set binding guidelines and enforce the core principles set by the U.K. government. The development, integration and responsible governance of AI is a strategic priority across U.K. policymaking and regulatory capacity building, with the emphasis on equipping existing regulators to apply those principles. An AI bill was announced in the King's Speech in July 2024, but it would only regulate the most powerful AI models. The timing and scope of such a bill have since changed, with no formal bill expected until the next King's Speech, reportedly in May 2026.
The U.K. has long played an important role in the development of AI. In the 1950s and '60s, enthusiasm and expectation about the potential of AI led to the formation of several major AI research centers in the U.K. at the universities of Edinburgh, Sussex, Essex and Cambridge. Even today, the U.K. is regarded as a center of expertise and excellence in AI research and innovation.
Fast forward to September 2021, when the U.K. government's National AI Strategy announced a 10-year plan "to make Britain a global AI superpower." That plan set the stage for ongoing consideration of whether and how to regulate AI, noting, with emphasis, that AI is not currently unregulated, by virtue of other applicable laws. Since 2018, the prevailing view in U.K. law and policymaking circles has been that "blanket AI-specific regulation, at this stage, would be inappropriate" and "existing sector-specific regulators are best placed to consider the impact on their sector of any subsequent regulation which may be needed."
A consequence of the U.K. leaving the EU is that the EU AI Act does not directly apply in the U.K. as it does to the remaining 27 EU member states. However, the act has extraterritorial scope that will certainly impact U.K. businesses. Indeed, the EU AI Act has accelerated and amplified independent U.K. policy development on whether, how and why AI should or could be regulated further, in ways more targeted than the current application of existing laws to AI.
The U.K. continues to forge its own path, focusing instead on flexibility, innovation and sector-specific regulatory expertise when it comes to AI regulation. The aim is a proportionate approach to regulation, with the government tracking AI development and legislating only where it deems it necessary.
In Tortoise Media's September 2024 Global AI Index, which benchmarks nations on their level of investment, innovation and implementation of AI, the U.K. maintained its fourth-place ranking, below the U.S., China and Singapore. The U.K. is "strong on commercial AI" and research, but other countries are catching up fast; France, currently in fifth place, "now outperforms [the U.K.] on open-source [large language model] development and in other key areas including public spending and computing." This is likely one of the many reasons the U.K. agreed to the Tech Prosperity Deal with the U.S., securing over USD41 billion of investment from U.S. businesses in U.K. AI infrastructure.
As general context, there is no draft or current U.K. legislation that specifically governs AI, except for a Private Members' Bill in the House of Lords, although such bills rarely become law. Instead, the U.K. government has relied on the existing body of legislation, which does not specifically regulate AI but undoubtedly applies to its development and deployment. For instance, the U.K. General Data Protection Regulation and Data Protection Act 2018 apply to AI. The government has also focused its efforts on soft law initiatives, e.g., cross-sector regulatory guidelines, to adopt an incremental, pro-innovation approach to AI regulation.
As already mentioned, an AI bill was announced in the King's Speech in July 2024. However, due to the protracted legislative passage of the Data (Use and Access) Act 2025 — which was held up by unsuccessful attempts to include provisions relating to the use of copyright material to train AI — new assurances were sought on AI and copyright. The act includes a requirement for the secretary of state to report on the use of copyright works in the development of AI systems. The secretary of state must also report on the economic impact of the policy options proposed in the copyright and AI consultation paper by 19 March 2026. Any AI bill, expected in the second half of 2026 at the earliest, will likely deal with copyright matters as well as the most powerful AI models.
White paper on AI regulation and consultation response
In March 2023, the former U.K. Conservative Party government published its white paper "A pro-innovation approach to AI regulation" for consultation, setting out policy proposals regarding future regulation.
Notably, the document does not define AI or an AI system but explains the concepts are characterized by adaptivity and autonomy, aligning with commonly accepted definitions of an AI system, such as those used by the Organisation for Economic Co-operation and Development and in the EU AI Act. It goes on to state that the U.K.'s AI regulatory framework should be based on five cross-sectoral nonbinding principles: safety, security and robustness; appropriate transparency and explainability; fairness; accountability; and contestability and redress. Finally, the white paper does not propose the creation of a new AI regulator; instead, it advocates for the empowerment of existing regulators.
In February 2024, the government published its response to the white paper's consultation, which largely reaffirmed its prior proposals with one important caveat. The response indicated future legislation is likely to "address potential AI-related harms, ensure public safety, and let us realize the transformative opportunities that the technology offers." However, the government will only legislate when it is "confident that it is the right thing to do."
AI Opportunities Action Plan
In January 2025, the U.K. launched its AI Opportunities Action Plan, a strategic initiative aimed at leveraging the transformative capabilities of AI across multiple sectors, with the objective of establishing the U.K. as an AI superpower.
The plan is structured around three pillars: laying the foundations to enable AI and investing in AI infrastructure; promoting the adoption of AI, particularly across the public sector, positioning it as the "largest customer and as a market shaper"; and securing the future of homegrown AI by fostering "national champions at the frontier of economically and strategically important capabilities."
However, the plan says very little about regulation; it focuses instead on investment and infrastructure to encourage innovation and support the growth of AI. As mentioned above, the recently announced Tech Prosperity Deal with the U.S. will fund some of the proposed investments in U.K. infrastructure.
More recent developments indicate the U.K. is moving away from the EU and its legislative approach and toward the U.S. and an innovation-first approach with limited safeguards. With the economic rewards of AI at stake, this might not be entirely surprising. That said, the U.K., U.S. and EU are all signatories to the Council of Europe Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law, meaning equality must be respected and discrimination prohibited throughout the AI lifecycle. The developments over the next 12 months merit close attention.
U.K. regulators have continued to produce guidelines related to their own sectors. There has also been some cross-functional work on AI issues, such as with the Digital Regulation Cooperation Forum, which consists of the Information Commissioner's Office, Competition and Markets Authority, Ofcom and the Financial Conduct Authority. The DRCF is responsible for ensuring greater regulatory cooperation on online issues.
Data protection
The ICO has long been actively regulating how data is used in connection with AI, updating its AI and data protection guidance in March 2023. In November 2024, the ICO published its audit outcomes report and recommendations for providers and developers of AI-powered sourcing, screening and selection tools used in recruitment. Following its consultation series on generative AI and data protection, the ICO published its outcomes report in December 2024.
In June 2025, the ICO announced its AI and biometrics strategy to ensure AI and biometric technology is developed and deployed lawfully, responsibly and in ways that maintain public trust. The ICO recognizes the significant opportunities for innovation such technology presents but emphasizes it must be used in ways that protect personal data and uphold individual rights. A number of guidelines on key topics are expected as part of this strategy.
Following the enactment of the Data (Use and Access) Act 2025, there are also a number of ongoing consultations and revisions to guidance. The ICO's automated decision-making and profiling guidance is of particular interest; the consultation is expected to launch in fall 2025, with final guidance expected in spring 2026.
Online safety
In March 2025, Ofcom released its guidance on applying the Online Safety Act to generative AI and chatbots in the form of an open letter.
Competition and markets
The CMA and ICO issued a joint statement on foundation models in March 2025. The joint statement expressed the organizations' ongoing commitment to collaborate on initiatives that enhance user autonomy and control, ensure fair access to data, and distribute accountability appropriately across the foundation model supply chain.
Other U.K. AI governmental and parliamentary initiatives
Despite the U.K. government's continued presence at the global AI summits, the focus has moved away from AI safety and toward "strengthening international action towards artificial intelligence." While safety is still on the agenda, it is no longer the primary focus of these summits. The U.K. government opted not to sign the official agreement produced at the February 2025 Paris AI Action Summit, expressing concerns around "global governance" and national security. The policy direction adopted at the next summit, scheduled to be held in India, will be of significant interest.
It is also worth noting that while only certain provisions of the EU AI Act currently apply in Northern Ireland, the European Commission has proposed to add the EU AI Act to the Windsor Framework, making it directly applicable as a whole to Northern Ireland. This process is ongoing, and it will be important to keep track of developments.
Separately, Conservative Peer Lord Holmes of Richmond reintroduced a Private Members' Bill, the Artificial Intelligence (Regulation) Bill, in March 2025. The bill is identical to the version introduced in the last parliamentary term. This compact bill advocates for the formation of a standalone AI regulator and a new role of AI officer for organizations that develop, deploy or use AI. However, it is rare for Private Members' Bills to pass into law; they are often intended instead to provide constructive policy recommendations or apply legislative pressure.
In January 2025, the Department of Science, Innovation and Technology published a voluntary Code of Practice for the Cyber Security of AI that sets the "baseline cyber security principles to help secure AI systems and the organizations which develop and deploy them," protecting them from cyber risks arising from "data poisoning, model obfuscation, indirect prompt injection and operational differences associated with data management." This was accompanied by a practical implementation guide. The U.K. government plans to submit the code and guide to the European Telecommunications Standards Institute to "be used as the basis for a new global standard… and accompanying implementation guide."
The Government Digital Service, which sits within DSIT, is also establishing a new Responsible AI Advisory Panel to help shape the U.K.'s approach to "building responsible AI in the public sector." The panel aims to ensure safe, ethical and responsible AI development by bringing together AI expertise from a wide range of organizations with a diverse skillset.
The U.K. government also launched an AI playbook in February 2025 to offer guidance and support to government departments and public sector organizations to safely, effectively, and responsibly harness the power of a wider range of AI technologies.
While the U.K. does not have legislation specifically governing AI, a variety of broader statutes and case law apply in this area.
Data protection
From a data protection perspective, the U.K. legal framework comprises the U.K. General Data Protection Regulation, the Data Protection Act 2018, the Privacy and Electronic Communications (EC Directive) Regulations 2003 (SI 2003/2426) and the Data (Use and Access) Act 2025, which amends the preceding legislation and brings into force the U.K.'s long-awaited data reforms. In addition, the EU GDPR has extraterritorial effect and likely applies to U.K. entities that process personal data relating to EU individuals.
The use of AI systems raises many compliance questions and potential trade-offs under U.K. data protection law. These issues range from establishing the roles of the data processing entities to ensuring the accuracy of personal data used in training, while also adhering to requirements for profiling, automated decision-making, and the data minimization principle.
The provisions in the Data (Use and Access) Act 2025 on automated decision-making are perhaps one of the biggest changes the act will bring into force. While the provisions were not in effect as of press time, that is expected to change in 2026. Unless special category data is involved, the U.K. will move from a regime based on prohibition with exceptions for automated decision-making — currently, only limited lawful bases can be relied upon for ADM — to one based on permission with safeguards. Broadly speaking, any lawful basis can be used for ADM provided safeguards are in place, such as the right to human intervention, the ability to contest the decision, and transparency about the logic and criteria used.
In addition, the act clarifies what is meant by "solely automated," from a U.K. perspective at least, and may therefore bring some decisions previously thought of as ADM out of scope. Once in force, this change could be a significant game changer for the use of AI decision-making tools in the U.K., making such use easier and possibly therefore more widespread.
Intellectual property
The main types of intellectual property rights in the U.K. are registered and unregistered trademarks, patents, registered and unregistered designs, copyright, and trade secrets. The key U.K. intellectual property statutes are the Patents Act 1977, Copyright, Designs and Patents Act 1988, and Trade Marks Act 1994.
Copyright questions are relevant to AI, given that training data may include copyrighted works, e.g., books, news, academic articles, web pages, photographs or paintings. An AI system may itself create works that could potentially be protected by copyright, although there is uncertainty on this point.
In January 2023, Getty Images began U.K. court proceedings against Stability AI, claiming infringement of various intellectual property rights, including trademark, passing off, database rights, and multiple types of copyright. Getty alleged Stability AI scraped millions of images from its websites without consent and unlawfully used them to train and develop its deep-learning AI model, thereby infringing Getty's intellectual property. Getty dropped various claims at trial, with judgment expected in fall 2025.
As discussed, the AI and copyright debate continues. Following the assurances given so that the Data (Use and Access) Act could complete its parliamentary passage, these issues are expected to be addressed, initially in the secretary of state reports mentioned above and then in the anticipated government AI bill expected in 2026.
Patent questions are also highly relevant, including whether an AI system can be considered an "inventor" for the purposes of the Patents Act 1977. In December 2023, the U.K. Supreme Court dismissed an appeal from Stephen Thaler, affirming the Comptroller-General of Patents, Designs and Trade Marks' decision that a machine embodying an AI system could not be an inventor under the law. In September 2025, the High Court also dismissed Thaler's appeal against a U.K. Intellectual Property Office decision, ruling it was not the judge's place to decide "whether provision needs to be made requiring an AI-generated invention to be identified as such."
Online safety
The Online Safety Act became law in October 2023. The act is intended to address two fundamental issues: tackling illegal and harmful online content and protecting children online. It does so by imposing obligations, known under the law as duties of care, on a sliding scale for a broad range of online entities, such as social media networks, search engines, video-sharing platforms and marketplaces or listing providers.
Many of the OSA's substantive obligations, such as duties to protect users from illegal content, child protection duties and age assurance measures, are now in force after a phased implementation. The law imposes extensive requirements that will impact AI systems, including the monitoring for and takedown of AI-generated content that could be illegal or harmful, an increasing challenge as the use of AI by the general public becomes commonplace.
Employment
From an employment law perspective, the Equality Act 2010 prohibits discrimination by employers on the basis of a protected characteristic, such as age, disability, race or sex.
Due to the nature of their training data and other factors, and unless mitigation steps are taken, some AI systems have the potential to exhibit or perpetuate biases. The use of such systems for recruiting decisions or performance management could therefore raise U.K. employment law compliance considerations.
It is also important to note that the Trades Union Congress announced its pro-worker AI innovation strategy in August 2025, building on its draft AI (Regulation and Employment Rights) Bill and seeking to empower workers and promote responsible AI regulation where innovation is embraced alongside workers' rights. It remains to be seen whether the U.K. government's policy will be influenced by the TUC's work in this area.
Consumer protection
In terms of consumer protection, the U.K. has a patchwork of laws including the Consumer Rights Act 2015 and the Consumer Protection from Unfair Trading Regulations 2008 (SI 2008/1277). These interact with numerous AI use cases, like the information or guidance provided by chatbots to consumers or the sales contract terms between an organization and consumer for AI-related products and services. The CMA is active in this area, working in conjunction with other regulators as appropriate.
Product liability
From a product liability perspective, the key source of law is Part 1 of the Consumer Protection Act 1987. This implements the strict liability regime set out in the EU Product Liability Directive (85/374/EEC). In addition, individuals may have rights under the common law of tort. Complex issues are likely to arise regarding duties of care and liability assessments for defective AI systems. Early examples of how to tackle such issues were seen in connection with autonomous vehicles in the Automated and Electric Vehicles Act 2018.
Articles in this series, co-sponsored by HCLTech, dive into the laws, policies, broader contextual history and developments relevant to AI governance across different jurisdictions. The selected jurisdictions are a small but important snapshot of distinct approaches to AI governance regulation in key global markets.
Each article provides a breakdown of the key sources and instruments that govern the strategic, technological and compliance landscape for AI governance in the jurisdiction through voluntary frameworks, sectoral initiatives or comprehensive legislative approaches.
Global AI Governance Law and Policy
Jurisdiction Overviews 2025
The overview page for this series can be accessed here.
- Australia
- Canada
- China
- European Union
- India
- Japan
- Singapore
- South Korea
- United Arab Emirates
- United Kingdom
- United States
- Supplementary article: AI governance in the agentic era