This article is part of a five-part series co-sponsored by OneTrust. The full series can be accessed here.

Published: March 2024

Though the U.K. does not have any regulations specific to the governance of AI, it does have an AI Safety Institute and a variety of relevant principles-based soft law and policy initiatives, as well as binding regulation in other domains like data protection and online safety. Moreover, the development, integration and responsible governance of AI is a strategic priority across U.K. policymaking and regulatory capacity building.

History and context

The U.K. has long played an important role in the development of AI. British mathematician Ada Lovelace and computer scientist Alan Turing, the "father of theoretical computing," are widely regarded as inspiring much of the field's development. In the 1950s and '60s, the potential of AI generated enthusiasm and expectation, leading to the formation of several major AI research centers in the U.K. at the universities of Edinburgh, Sussex, Essex and Cambridge. Even today, the U.K. is regarded as a center of expertise and excellence in AI research and innovation.

Fast forward to September 2021, when the U.K. government's National AI Strategy announced a 10-year plan "to make Britain a global AI superpower." That plan set the stage for ongoing consideration of whether and how to regulate AI, noting, with emphasis, that AI is not currently unregulated because other applicable laws already govern it. Since 2018, the prevailing view in U.K. law and policymaking circles has been that "blanket AI-specific regulation, at this stage, would be inappropriate" and "existing sector-specific regulators are best placed to consider the impact on their sector of any subsequent regulation which may be needed."

A consequence of the U.K. leaving the EU is that the EU AI Act, soon to enter into force, does not directly apply in the U.K. as it does in the remaining 27 EU member states. Indeed, the EU AI Act has accelerated and amplified independent U.K. policy development on whether, how and why AI should be regulated in ways more targeted than the application of existing laws.

Tortoise Media's June 2023 Global AI Index, which benchmarks nations on their level of investment, innovation and implementation of AI, ranked the U.K. fourth, behind the U.S., China and Singapore; in 2022, the U.K. ranked third. Tortoise Media commented that the U.K. has an "edge in research and commercial investment."


Approach to regulation

As general context, there is no current or draft U.K. legislation that specifically governs AI. Instead, the U.K. government has focused its efforts on soft law initiatives, such as cross-sector regulatory guidelines, adopting an incremental, pro-innovation approach to AI regulation.

White paper on AI regulation and consultation response

In March 2023, the U.K. government published its white paper A Pro-Innovation Approach to AI Regulation for consultation, setting out policy proposals regarding future regulation.

Notably, the document does not define AI or an AI system but explains that the concepts are characterized by adaptivity and autonomy. It goes on to describe that the U.K.'s AI regulatory framework should be based on five cross-sectoral nonbinding principles: safety, security and robustness; appropriate transparency and explainability; fairness; accountability; and contestability and redress. Finally, the white paper does not propose the creation of a new AI regulator; instead, it advocates for the empowerment of existing regulators.

In February 2024, the U.K. government published its response to the white paper consultation, which largely reaffirmed its prior proposals with one important caveat. The response indicated that future legislation is likely to "address potential AI-related harms, ensure public safety, and let us realize the transformative opportunities that the technology offers." However, the government will legislate only when it is "confident that it is the right thing to do."

U.K. regulator guidelines

  • Data protection: In March 2023, the U.K. Information Commissioner's Office updated its Guidance on AI and Data Protection. In January 2024, it also launched a Consultation Series on Generative AI and Data Protection, which is scheduled to close 12 April 2024.
  • Competition and markets: In September 2023, the Competition and Markets Authority released its Initial Report on AI Foundation Models.
  • Medicines and health care: In October 2023, the Medicines and Healthcare products Regulatory Agency published updated guidance on Software and AI as a Medical Device.
  • Other: The Office of Gas and Electricity Markets and the Civil Aviation Authority are working on AI strategies to be published later in 2024. The Health and Safety Executive, the Equality and Human Rights Commission, Office of Communications, and the Financial Conduct Authority are also anticipated to release guidelines on AI use within their respective sectors in due course.

As shown above, U.K. regulators have actively prepared or published guidelines for their own sectors. There has also been cross-functional work on AI issues, for example through the Digital Regulation Cooperation Forum, which consists of the ICO, CMA, Ofcom and FCA and is responsible for ensuring greater regulatory cooperation on online issues.

Other U.K. AI governmental and parliamentary initiatives

As exemplified by the following two initiatives, the U.K. government has honed its policy focus on AI safety.

First, it organized the first international AI Safety Summit in November 2023 at Bletchley Park, gathering representatives from industry, policy, academia and civil society. The summit resulted in the Bletchley Declaration on fostering international collaboration on safe frontier AI development, which was signed by representatives from over 25 territories, including China, the EU, the U.K. and the U.S.

Second, it set up an AI Safety Institute staffed mostly by technical experts with the mission of minimizing "surprise to the UK and humanity from rapid and unexpected advances in AI." The institute intends to achieve this "by developing the sociotechnical infrastructure needed to understand the risks of advanced AI and enable its governance."

Separately, in November 2023, Conservative Peer Lord Holmes of Richmond introduced a Private Members' Bill, the Artificial Intelligence (Regulation) Bill. This compact bill advocates for the formation of a standalone AI regulator and a new AI officer role for organizations that develop, deploy or use AI.

Crucially, Private Members' Bills are rarely passed into law; they are often intended instead to provide constructive policy recommendations or to apply legislative pressure.


Wider regulatory environment

While the U.K. does not have legislation specifically governing AI, various broader statutes and bodies of case law apply in this area, most notably in the following domains:

  • Data protection
  • Intellectual property
  • Online safety
  • Employment
  • Consumer protection
  • Product liability


Latest developments

Much of the U.K.'s approach to AI regulation can be classified as "latest developments." Looking ahead, there will be a steady drumbeat of regulatory and policy action as part of the U.K. government's roadmap for implementing its approach to AI regulation. Amid that drumbeat are the following commitments and anticipated milestones:

Spring 2024:

  • The U.K. government will establish a steering committee for a new central governmental function to support regulatory capabilities and coordination on AI governance. The steering committee will consist of representatives from the government and key regulators, including those that are members of the DRCF.
  • The U.K. government will launch targeted consultation on a cross-economy AI risk register and regulatory framework assessment.
  • The DRCF AI and Digital Hub pilot will be launched. The pilot is intended to support AI innovators with queries concerning cross-regulatory AI and digital issues. Questions will be directed to the four DRCF member regulators through a single point of access and will receive tailored responses.
  • The first International Report on the Science of AI Safety will be published.
  • A call for views will be released to obtain further input on securing AI models, including a potential code of practice for the cybersecurity of AI based on National Cyber Security Centre guidelines.

During 2024:

  • The U.K. government is phasing in a mandatory requirement for central government departments to use the Algorithmic Transparency Recording Standard; a hypothetical sketch of such a transparency record follows below.
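
The standard asks departments to publish structured, two-tier records about the algorithmic tools they use: a short, plain-language summary for the general public and a more detailed tier for specialist audiences. As a rough illustration only, the following Python sketch shows the kind of record a department might assemble; every field name, value and URL here is a hypothetical assumption for illustration, not the standard's actual schema, which is published by the U.K. government.

    # A minimal, hypothetical sketch of an algorithmic transparency record,
    # loosely modeled on the ATRS's two-tier structure. All field names and
    # values below are illustrative assumptions, not the standard's schema.
    import json

    record = {
        "tier_1": {  # short, plain-language summary for the general public
            "name": "Claims triage assistant",
            "description": "Ranks incoming claims for manual review.",
            "website_url": "https://example.gov.uk/transparency/claims-triage",
            "contact_email": "transparency@example.gov.uk",
        },
        "tier_2": {  # more detailed information for specialist audiences
            "owner": "Example Department, Digital Services",
            "model_type": "Gradient-boosted decision trees",
            "data_sources": ["Historical claims data, 2018-2023"],
            "human_oversight": "Automated rankings are reviewed by caseworkers.",
            "risks_and_mitigations": "Tested for disparate impact before rollout.",
        },
    }

    # A department could publish the record as JSON alongside the deployed tool.
    print(json.dumps(record, indent=2))

In practice, departments complete the official template rather than hand-rolling a record like this; the sketch is intended only to convey the shape of the disclosure the standard requires.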

By end of 2024:

  • The U.K. government will publish an update on the voluntary responsibilities of developers of highly capable, general-purpose AI systems, relating to AI safety and responsible capability scaling policies.
  • The U.K. government will launch the AI Management Essentials scheme to set a minimum good practice standard for companies selling AI products and services.

By 30 April 2025:

  • Key U.K. regulators will publish updates on their strategic approaches to AI.

Additionally, sharpened regulatory oversight and perhaps even enforcement related to AI governance are likely to shape the U.K. AI governance ecosystem.


Additional resources


Global AI Governance Law and Policy: Jurisdiction Overviews

The overview page for the full series can be accessed here.

Coming Soon

  • Part 3: EU
  • Part 4: Canada
  • Part 5: US

