Global AI Governance Law and Policy: UK
This article is part of a five-part series co-sponsored by OneTrust. The full series can be accessed here.
Published: March 2024
Though the U.K. does not have any regulations specific to the governance of AI, it does have an AI Safety Institute and a variety of relevant principles-based soft law and policy initiatives, as well as binding regulation in other domains like data protection and online safety. Moreover, the development, integration and responsible governance of AI is a strategic priority across U.K. policymaking and regulatory capacity building.
History and context
The U.K. has long played an important role in the development of AI. British mathematician Ada Lovelace and computer scientist Alan Turing, the "father of theoretical computing," are widely regarded as inspiring much of the development of AI. In the 1950s and '60s, the potential of AI generated enthusiasm and expectation, leading to the formation of several major AI research centers in the U.K. at the universities of Edinburgh, Sussex, Essex and Cambridge. Even today, the U.K. is regarded as a center of expertise and excellence in AI research and innovation.
Fast forward to September 2021, when the U.K. government's National AI Strategy announced a 10-year plan "to make Britain a global AI superpower." That plan set the stage for ongoing consideration as to whether and how to regulate AI, noting, with emphasis, that AI is not currently unregulated, given the application of other existing laws. Since 2018, the prevailing view in U.K. law and policymaking circles has been that "blanket AI-specific regulation, at this stage, would be inappropriate" and "existing sector-specific regulators are best placed to consider the impact on their sector of any subsequent regulation which may be needed."
A consequence of the U.K. leaving the EU is that the EU AI Act — soon to enter into force — does not directly apply in the U.K. as it does to the remaining 27 EU member states. Indeed, the EU AI Act has accelerated and amplified independent U.K. policy development on whether, how and why AI should or could be regulated further and in ways more targeted than what exists via the application of existing laws to AI.
Tortoise Media's June 2023 Global AI Index, which benchmarks nations on their level of investment, innovation and implementation of AI, ranked the U.K. in fourth place, below the U.S., China and Singapore. In 2022, the U.K. ranked third. Tortoise Media commented that the U.K. has an "edge in research and commercial investment."
Approach to regulation
As general context, there is no draft or current U.K. legislation that specifically governs AI. Instead, the U.K. government has focused its efforts on soft law initiatives, e.g., cross-sector regulatory guidelines, to adopt an incremental, pro-innovation approach to AI regulation.
White paper on AI regulation and consultation response
In March 2023, the U.K. government published its white paper A Pro-Innovation Approach to AI Regulation for consultation, setting out policy proposals regarding future regulation.
Notably, the document does not define AI or an AI system but explains the concepts are characterized by adaptivity and autonomy. It goes on to describe that the U.K.'s AI regulatory framework should be based on the following five cross-sectoral nonbinding principles: safety, security and robustness; appropriate transparency and explainability; fairness; accountability; and contestability and redress. Finally, the white paper does not propose the creation of a new AI regulator; instead, it advocates for the empowerment of existing regulators.
In February 2024, the U.K. government published its response to the white paper's consultation, which largely reaffirmed its prior proposals with one important caveat. The response indicated future legislation is likely to "address potential AI-related harms, ensure public safety, and let us realize the transformative opportunities that the technology offers." However, the government will only legislate when it is "confident that it is the right thing to do."
UK regulator guidelines
- Data protection: In March 2023, the U.K. Information Commissioner's Office updated its Guidance on AI and Data Protection. In January 2024, it also launched a Consultation Series on Generative AI and Data Protection, which is scheduled to close 12 April.
- Competition and markets: In September 2023, the Competition and Markets Authority released its Initial Report on AI Foundation Models.
- Medicines and health care: In October 2023, the Medicines and Healthcare products Regulatory Agency published updated guidance on Software and AI as a Medical Device.
- Other: The Office of Gas and Electricity Markets and the Civil Aviation Authority are working on AI strategies to be published later in 2024. The Health and Safety Executive, the Equality and Human Rights Commission, Office of Communications, and the Financial Conduct Authority are also anticipated to release guidelines on AI use within their respective sectors in due course.
Other UK AI governmental/parliamentary initiatives
As exemplified by the following two initiatives, the U.K. government has honed its policy focus on AI safety.
First, it organized the first international AI Safety Summit in November 2023 at Bletchley Park, gathering representatives from industry, policy, academia and civil society. The summit resulted in the Bletchley Declaration on fostering international collaboration on safe frontier AI development, which was signed by representatives from over 25 territories, including China, the EU, the U.K. and the U.S.
Second, it set up an AI Safety Institute staffed mostly by technical experts with the mission of minimizing "surprise to the UK and humanity from rapid and unexpected advances in AI." The institute intends to achieve this "by developing the sociotechnical infrastructure needed to understand the risks of advanced AI and enable its governance."
Separately, in November 2023, Conservative Peer Lord Holmes of Richmond introduced a Private Member's Bill, the Artificial Intelligence (Regulation) Bill. This compact document advocates for the formation of a standalone AI regulator and the new role of an AI officer for organizations that develop, deploy or use AI.
Crucially, it is rare for Private Members' Bills to be passed into law. Therefore, they are often intended to provide constructive policy recommendations or apply legislative pressure.
Wider regulatory environment
While the U.K. does not have legislation specifically governing AI, various broader statutes and case law apply to the area. Some of the most impactful are highlighted in this section.
Data protection
From a data protection perspective, the U.K. legal system comprises the U.K. General Data Protection Regulation, the Data Protection Act 2018 and the Privacy and Electronic Communications (EC Directive) Regulations 2003 (SI 2003/2426). There is also, of course, proposed reform via the Data Protection and Digital Information Bill, which is still in draft form and pending legislative negotiations. In addition, the EU GDPR has extra-territorial effect and likely applies to U.K. entities that process personal data relating to EU individuals.
The use of AI systems raises many compliance questions under U.K. data protection law, from establishing the roles of the data processing entities to ensuring the accuracy of personal data inputs and outputs to adhering to profiling and automated decision-making requirements.
Intellectual property
In terms of intellectual property rights, the main types in the U.K. are registered and unregistered trademarks, patents, registered and unregistered designs, copyright, and trade secrets. The key U.K. IP statutes are the Patents Act 1977, Copyright, Designs and Patents Act 1988, and Trade Marks Act 1994.
Copyright questions are relevant to AI, given the training data may include copyright works, e.g., books, news, academic articles, web pages, photographs or paintings, and the AI system itself may create works that could potentially be protected under copyright.
In January 2023, Getty Images commenced U.K. court proceedings against Stability AI, claiming copyright infringement. Getty alleged Stability AI "scraped" millions of images from its websites without consent and used them unlawfully to train and develop its deep-learning AI model, thereby infringing Getty's copyright works.
Patent questions are also very relevant in this area, including whether an AI system can be considered an "inventor" for the purposes of the Patents Act 1977. In December 2023, the U.K. Supreme Court dismissed an appeal from Stephen Thaler, affirming the Comptroller-General of Patents, Designs and Trade Marks' decision that a machine, which embodies an AI system, could not be an inventor under the law.
Online safety
In October 2023, the Online Safety Act became law. It is intended to address two fundamental issues: tackling illegal and harmful online content, and protecting children online. It does so by imposing obligations, known under the law as "duties of care," on a sliding scale for a broad range of online entities, e.g., social media networks, search engines, video-sharing platforms, and marketplaces or listing providers.
The OSA's substantive obligations are pending secondary legislation, consultations and regulatory codes of practice, and so are not yet in force. That said, the law imposes extensive requirements that will impact AI systems, e.g., the monitoring and takedown of AI-generated content that could be illegal or harmful.
Employment
From an employment law perspective, the Equality Act 2010 prohibits discrimination by employers on the basis of any protected characteristics, such as age, disability, race or sex.
Due to the nature of its training data and other factors, unless mitigation steps are taken, some AI systems have the potential to exhibit biases. The use of such systems for recruiting decisions and/or performance management could therefore raise U.K. employment law compliance considerations.
Consumer protection
In terms of consumer protection, the U.K. has a patchwork of laws including the Consumer Rights Act 2015 and the Consumer Protection from Unfair Trading Regulations 2008 (SI 2008/1277). These interact with numerous AI use cases, e.g., the information or guidance provided by chatbots to consumers or the sales contract terms between an organization and consumer for AI-related products and services.
Product liability
From a product liability perspective, the key source of law is Part 1 of the Consumer Protection Act 1987. This implements the strict liability regime set out in the EU Product Liability Directive. In addition, individuals may have rights under the common law of tort and so complex issues are likely to arise regarding duties of care and liability assessments for defective AI systems.
Latest developments
Looking ahead, there will be a steady drumbeat of regulatory and policy action as part of the U.K. government's roadmap for implementing its approach to AI regulation. Amid that drumbeat are the following commitments and anticipated milestones:
Spring 2024:
- The U.K. government will establish a steering committee for a new central governmental function to support regulatory capabilities and coordination on AI governance. The steering committee will consist of representatives from the government and key regulators, including those that are members of the Digital Regulation Cooperation Forum, or DRCF.
- The U.K. government will launch targeted consultation on a cross-economy AI risk register and regulatory framework assessment.
- The DRCF AI and Digital Hub pilot will be launched. The pilot is intended to support AI innovators with queries concerning cross-regulatory AI and digital issues. Questions will be directed to the four DRCF member regulators through a single point of access and will receive tailored responses.
- The first International Report on the Science of AI Safety will be published.
- A call for views to obtain further input on securing AI models, including a potential code of practice for cybersecurity of AI based on National Cyber Security Centre guidelines, will be released.
During 2024:
- The U.K. government is phasing in a mandatory requirement for central government departments to use the Algorithmic Transparency Recording Standard.
By end of 2024:
- The U.K. government will publish an update on the voluntary responsibilities of developers of highly capable general-purpose AI systems, relating to AI safety and responsible capability scaling policies.
- The U.K. government will launch the AI Management Essentials scheme to set a minimum good practice standard for companies selling AI products and services.
By 30 April 2025:
- Key U.K. regulators will publish updates on their strategic approaches to AI.
Additionally, sharpened regulatory oversight and perhaps even enforcement related to AI governance are likely to shape the U.K. AI governance ecosystem.
Additional resources
- General AI resources
- Privacy and AI governance resources
Global AI Governance Law and Policy: Jurisdiction Overviews
The overview page for this series can be accessed here. The full series is additionally available here in PDF format.