
The Privacy Advisor | UK digital regulators discuss interagency enforcement, AI governance coordination


As the U.K. sets out to develop artificial intelligence regulations, as well as pending legislation for online safety, data security and privacy, a key question is what form its regulatory scheme will take to account for legislative changes and technological advances driven by AI.

U.K. Information Commissioner's Office Executive Director of Regulatory Risk Stephen Almond said both the proposed Online Safety Bill and the Data Protection and Digital Information Bill, if passed, will present digital regulators with new privacy challenges, requiring further enforcement coordination via the U.K. Digital Regulation Cooperation Forum.

"We're at the cusp of the new online safety regime being brought into law in the U.K., which has a variety of intersections with data protection law, and indeed, many synergies (as) privacy is an important element of safety online," Almond said during a 31 Aug. LinkedIn Live discussion moderated by IAPP Research and Insights Director Joe Jones. "Also, I'd be lying if I said there weren’t tensions as well. For example, actually achieving greater safety online might require you to collect more information about people in order to protect them than you might have done prior to the introduction of this regime."

As jurisdictions around the world begin to formulate regulations for AI, the U.K. is committed to charting a different course with its digital regulatory regimes, particularly compared to its counterparts in the EU.

Where the EU is pursuing standalone legislation in the form of its AI Act, DRCF Chief Executive Kate Jones, also speaking during the LinkedIn Live session, said British digital regulators — including the Competition and Markets Authority, Office of Communications, Financial Conduct Authority and ICO — are up to the task of regulating AI through a sector-by-sector approach. The DRCF's coordination work with each of those agencies will support that approach, she said.

"One of the things that I think has had to be created in this sphere is a better way of linking the key regulators on digital together, and that's what the DRCF does," Kate Jones said. "So, you're seeing all of our member regulators each upskill, each gain new powers for digital markets."

A harmonized effort for AI regulation

Following the March publication of the U.K. government’s white paper "A pro-innovation approach to AI regulation," Kate Jones said the DRCF is pursuing an AI governance policy that empowers the four digital regulators under her agency’s umbrella to issue guidance for how AI can be legally used in each of their domains. Part of the DRCF’s response to the government's white paper called for the creation of an AI sandbox for developers.

"DRCF regulators have competence within their respective spheres, and they are looking at how that applies to uses of AI within their respective spheres," Kate Jones said.

The DRCF, Kate Jones said, has been "actively engaging" with the U.K. government as it formulates its overall AI strategy. She said the agency's main efforts have included developing a common "understanding and definitions of key principles of AI governance" across the four regulators, further building regulators' capabilities related to AI, and partnering with AI developers in a joint research project.

With the looming pervasiveness of AI in many aspects of life, Almond said he believes the U.K. is pursuing the correct course of action by dividing AI regulation among its existing digital regulatory agencies. He said the "principles-based" approach of the U.K.'s privacy and data protection regime lends itself to each regulator taking ownership of AI uses in its respective purview, rather than establishing a single AI regulator.

"Those principles mean that, actually, we are able to be remarkably agile in applying them to different and novel technological contexts," Almond said. "If we turn our minds back to some of the transformative applications of the past … you can see how they had transformative cross-economy applications, and there were people clamoring at those points in time to try and compress all of the implications of those transformative developments in technology into one single regulatory regime."

"When we're looking at something like AI, so much of what we need to think about is context-specific," he continued.

ICO outlines AI strategy

Almond said the ICO began its AI work in the mid-2010s. Part of the agency's work has entailed issuing guidance for how data protection laws and the concept of fairness are applied to AI to prevent bias and discrimination, he said. Additionally, the ICO has created its AI risk management toolkit to help governance practitioners identify any data protection risks associated with a given system.

Almond said the ICO is also focused on launching its innovation advice service, through which organizations will be able to submit questions about how data protection laws apply to various aspects of the products they are developing. He also said organizations looking for more direct guidance from the ICO are welcome to participate in the ICO's regulatory sandbox to test their products from a data protection perspective.

Almond said the ICO's next step is to work in concert with the other agencies comprising the DRCF to issue AI-related guidance with a comprehensive, multi-agency stance.

"What we’re doing right now is building up from those foundations of understanding of where our regulatory position is, and (now asking ourselves) how do we join up (with the other regulators)," Almond said.

