The U.K. government is using an unusual format to seek feedback on which laws are standing in the way of artificial intelligence growth and adoption in the country.
The AI Growth Labs proposal aims to create sandboxes where companies can test their products in a safe regulatory environment and receive feedback on how to come into compliance before going to market. But instead of testing how those products fit into current regulations, the sandboxes would test the rules themselves to see which might be hampering innovation, according to the press release.
The feedback from those sandboxes could result in "time-limited, targeted modifications to specified sectoral regulations, if they are hindering AI adoption," according to the call for proposals. Those modifications would come through secondary legislation, a mechanism that allows the government to put forward legal changes with limited input from Parliament. The effort comes as the U.K. studies how to improve its economy, with AI adoption seen as a key pillar of that goal.
"To deliver national renewal, we need to overhaul the old approaches which have stifled enterprise and held back our innovators," Technology Secretary Liz Kendall said in a statement. "We want to remove the needless red tape that slows progress so we can drive growth and modernize the public services people rely on every day. This isn’t about cutting corners — it's about fast-tracking responsible innovations that will improve lives and deliver real benefits."
Eduardo Ustaran, AIGP, CIPP/E, the global co-lead of law firm Hogan Lovells' Privacy and Cybersecurity practice, said the measure is unique because the government does not appear to be looking to introduce new primary legislation, but rather to test existing laws and see which do not work. The sandboxes would focus on areas like health care, professional services, transport and advanced manufacturing.
"The pitch from the government is, 'We already have some laws. The laws are there to do their job, and until we see a real need to come up with something new, we should just rely on the existing laws that should be technologically neutral and still be able to handle the challenges of AI,'" he said.
Ustaran also said the labs should send a message to regulators that the government wants to understand what they can do, within their existing powers, to lessen the regulatory burden.
"Regulators are now being tested from the outset in terms of what they think they can contribute to this and looking at it from the point of view of how to make progress, not how to make things more difficult," he said.
The proposal is part of the U.K. government's overall Plan for Change, which vows to "Drive innovation, investment and the adoption of technology to seize the opportunities of a future economy, from AI to net zero." The call for evidence takes the stance that "Lots of our regulation is technology neutral and can still apply effectively – but not all," and that many laws "needlessly assume activity will be undertaken by a person not AI."
To study how the rules should change, the government said it would monitor sandbox participants closely under live market conditions. It aims to maintain public trust and safety by ensuring any legislation stemming from the effort would include "red lines," rules that would not be changed. The government named consumer protections, safety provisions, fundamental rights, workers' protections and intellectual property rights as red lines, but said it would seek feedback on what other categories should be considered.
But the government should be especially cautious about which regulations it seeks to stop enforcing or to remove, said Eleonor Duhs, CIPP/E, a partner and head of law firm Bates Wells' data and privacy division who served as the U.K. government's lead negotiator on the General Data Protection Regulation.
She said the approach of "switching off" enacted legislation is part of a bigger trend in Parliament, citing a 2021 House of Lords report that examined an imbalance of power between the legislative body and the government driven by maneuvers such as skeleton laws and disguised legislation.
These maneuvers, the report found, mean "it is increasingly difficult for Parliament to understand what legislation will mean in practice and to challenge its potential consequences on people affected by it in their daily lives."
Duhs said any attempt not to enforce laws targeted in the sandboxes could run up against problems with the U.K.'s constitution.
"So firstly, they turn it off during the sandbox, and then they might decide to turn it off permanently via secondary legislation, which doesn't get very much parliamentary scrutiny at all," she said.
It could also hurt the U.K.'s economic standing, Duhs said. The country's data protection law requires a human in the loop when a significant or legal decision is being made about a person. The U.K. also depends on international data adequacy arrangements, notably with the European Union, that underpin its trading relationship with the bloc; any changes that take the country out of compliance could damage it in the long run, Duhs said.
"I think it's really important to consider the fact that, yes, AI may bring economic benefits, but turning off standards which are agreed at an international level could actually hurt the economy through a slightly different route," she said.
Ustaran, with Hogan Lovells, said there is plenty of room for the government to decide what action it wants to take, and that the consultation gives people of all views a chance to weigh in on what AI law should look like.
"Ultimately," he said, "they'll need to consider whether they want to push the pro-innovation agenda almost to the extreme, or do they want to be a little bit more guarded?"
Caitlin Andrews is a staff writer for the IAPP.
