As U.K. regulators navigate the accelerating use of agentic artificial intelligence across business sectors, stakeholders in government, industry and civil society are left wondering what a potential enforcement regime might look like.

The U.K. Digital Regulation Cooperation Forum launched its Thematic Innovation Hub series on agentic AI, seeking to begin a transparency initiative around the regulators' collective work. The series builds on the DRCF's AI and Digital Hub pilot, which gathered organizational feedback on enforcement efforts to help shape future guidance.

"Right now, we're in a listening and learning phase, and we're analyzing a wide range of those perspectives to really understand what the opportunities, challenges and what practical support innovators need most," DRCF Head of Innovation and Governance Sarah Meyers said. "Agentic AI represents a fundamental shift in how systems act autonomously and interact with humans. Getting this right means striking the right balance between innovation, trust and accountability."

The state of play

The DRCF serves as a collaboration among the Competition and Markets Authority, Financial Conduct Authority, Information Commissioner's Office and the Office of Communications, allowing for a holistic and harmonized approach to digital enforcement. So far, the regulators have carved out their own paths for addressing agentic AI.

Ofcom previously released its strategic plan for AI, leaning on existing regulations while ensuring organizations have the opportunity to experiment with technologies via its SONIC Labs. The plan also works to mitigate risks associated with underage users' interactions with AI technology through the application of the Online Safety Act.

In addition to Ofcom's efforts, the ICO brought an enforcement action over Snap's chatbot and translated that action into guidance on how to ensure AI training practices comply with the Data Protection Act and the U.K. General Data Protection Regulation.

According to Meyers, stakeholder responses to the DRCF's previous AI pilot found organizations had a "preference for an outcome-focused and risk-based" regulatory approach that builds on existing U.K. laws and regulations. The DRCF also found organizations favored enforcement that "avoids agentic-specific primary legislation and targets real-world impact rather than the technology, but with greater clarity about how current and forthcoming rules apply to agentic AI and how different regulators will coordinate," she said.

University of Essex Professor Lorna McGregor highlighted the DRCF's approach of using existing laws, noting regulators should focus on how current regulations match emerging technologies.

"You've got to be clear about how the law works. But in my engagement with a lot of civil society groups … there's a lack of trust because of a perceived lack of enforcement," McGregor said. "It looks like people are doing nothing. It looks like the government's doing nothing. So with a stance that says, 'let's use existing law, let's be clear, let's be proportionate,' we have to be effective as well."

Stakeholders warned a patchwork of enforcement decisions could disrupt innovation, though they suggested regulatory sandboxes could allow for a broader look into AI tools, human oversight requirements and red-teaming standards.

Centre for Information Policy Leadership Privacy and Data Policy Manager Kathleen McGrath highlighted the EU's mandatory coordinated regulatory sandboxes under the EU AI Act. McGrath noted there is space in the digital landscape to "look at the broader context of other regulators, so your financial regulators, and obviously your online safety and competition. … And I think that's somewhere where the DRCF could do some really great leadership in terms of experimentation."

DRCF's next steps

The DRCF will release more information detailing the results of its call for views in January, giving organizations a clearer picture of regulators' priorities and the areas where they expect additional diligence. A broader horizon-scanning report examining agentic AI's long-term regulatory implications is also planned for 2026.

"There is clearly a shared ambition to create a regulatory environment that supports responsible innovation, drives economic growth, ensures the UK remains a leader in the digital economy, while at the same time, of course, promoting trust and protecting people, and that indeed is consistent with the DRCF's three-year vision," DRCF CEO Kate Jones said.

The Thematic Innovation Hub will continue hosting sessions throughout the year, allowing regulators to refine their guidance as emerging risks and organizational concerns become clearer. The DRCF emphasized that continuous engagement will be central to building an enforcement model that keeps pace with autonomous systems.

Lexie White is a staff writer for the IAPP.