Clashing opinions between the U.K.'s government and its upper legislative chamber on what artificial intelligence regulation should look like underscore the challenges facing regulators in a post-Brexit world.
The U.K. government made clear 6 Feb. that it is courting innovation, announcing new funding and promising "more agile AI regulation" to come. The U.K. Department for Science, Innovation and Technology announced more than 100 million GBP to "support regulators and advance research and innovation on AI" while indicating the potential for "introducing future targeted, binding requirements for most advanced general-purpose AI systems." The lion's share of the money will go toward creating nine new research hubs and a joint U.K.-U.S. research venture. Approximately 10 million GBP will go toward bolstering the skill sets of the regulators currently overseeing AI governance.
While small compared to the investment in research and innovation, the money awarded to regulators will build on prior efforts from agencies such as the Office of Communications, the Information Commissioner's Office and the Competition and Markets Authority, according to the announcement.
The latest plan stems from consultations on a white paper the U.K. put out in March 2023 that generated comments from more than 200 companies, academics, nonprofits and industry groups.
Secretary of State for Science, Innovation and Technology Michelle Donelan described the U.K.'s "innovative approach" as one that recognizes "AI's potential to transform our public services and the economy for the better."
"AI is moving fast, but we have shown that humans can move just as fast. By taking an agile, sector-specific approach, we have begun to grip the risks immediately, which in turn is paving the way for the UK to become one of the first countries in the world to reap the benefits of AI safely," Donelan added.
Differing approaches
Last year's DSIT white paper suggested AI risks have not yet "matured" enough to warrant legislative action. Instead, it argued the U.K.'s regulatory system is flexible enough to respond to risks when they emerge. Legislation will be considered if voluntary measures are deemed insufficient, according to the foreword.
A report from the U.K. House of Lords Communications and Digital Committee, issued 2 Feb., took a different stance. Members urged the government not to get caught up in "improbable" risks, pointing instead to issues such as cyberattacks, disinformation and copyright infringement by large language model developers as areas where immediate action should be taken.
"These issues will be of huge significance over the coming years and we expect the Government to act on the concerns we have raised and take the steps necessary to make the most of the opportunities in front of us," said House of Lords Member Tina Stowell, who chairs the committee.
Despite the differences, Dani Dhiman, the AI policy lead for the trade association TechUK, said alignment on innovation and safety from the government and lawmakers is a positive first step.
"This will be welcomed by industry, and it will mean arguments over which model we want to use to regulate can be put aside, allowing us to focus on delivering the clarity businesses and consumers are calling for so they can confidently adopt new AI-powered products and services," she said.
But Ada Lovelace Institute Associate Director Michael Birtwistle described the government's approach as "all eyes, no hands," saying the regulatory funding and the powers given to regulators fell short. Relying on voluntary best-practice commitments from companies such as Google, Microsoft and Meta, all of which signed pledges at the Bletchley Park AI summit last fall, is insufficient to ensure accountability, he said.
"We shouldn't be waiting for companies to stop cooperating or for a Post Office-style scandal to equip the Government and regulators to react," Birtwistle said. "Ministers should look to capitalise on the momentum of the last year and bring forward binding legislation to prevent and react effectively to the risks of AI as soon as possible."
Building on prior efforts
The U.K. does not yet have hard and fast regulations covering AI technology itself. While other jurisdictions, notably the EU and the U.S., move toward a more direct response, the DSIT said "future targeted, binding requirements for most advanced general-purpose AI systems" could be considered down the line.
Still, regulators have existing laws around privacy and copyright to leverage in their efforts and already have laid the groundwork for AI enforcement. The agencies targeted for the 10 million GBP funding will be required to update the government on their AI enforcement strategies by 30 April.
Those regulators have offered support for the plan outlined in the 2023 white paper. How they deploy the additional funding remains to be seen.
An ICO spokesperson said the agency looked forward to working with its counterparts to "ensure effective, coherent regulation of AI across the economy," but did not offer details on how it might improve regulation.
The office has published guidance on AI and data protection and launched a consultation on how data protection law should apply to the technology. Commissioner John Edwards also said late last year his agency will not tolerate AI businesses that do not comply with existing data protection laws.
"2024 cannot be the year that consumers lose trust in AI," he said.
In a report published September 2023, the CMA outlined a series of principles to guide its efforts around foundation model regulation, including making sure data is easily accessible and consumers are aware of products' capabilities. It promised to update its plan in early 2024.
Ofcom also did not offer comment, but Group Director of Strategy and Research Yih-Choung Teh told Parliament's Communications and Digital Committee last year that the white paper's approach toward AI transparency, fairness and security somewhat aligned with Ofcom's regulatory scope, though there were limits to how those principles interact with the agency's current functions.
Political change could lead to further action down the road. The Labour Party is poised to take power in this election year, and its leadership has promised to force greater transparency from AI companies, according to the Guardian.