The EU Artificial Intelligence Act’s complex governance framework means oversight officials will have to work hard to ensure the regulation’s success, a senior official at the European Data Protection Supervisor said during a breakout session at the IAPP AI Governance Global 2024 in Brussels, Belgium.
Secretary General Leonardo Cervera-Navas opined that the governance aspect of the regulation was one of its weakest points, a product of the desire to be inclusive and give member states a role in regulating AI. While he indicated the creation of the AI Office will allow the European Commission to speak with a single voice, he said cooperation with stakeholders at all levels, within and outside the EU, is needed as the AI Act's enforcement begins.
"Governance is a big issue," he said. "And that's why we really need to be very, very ambitious in terms of cooperation. So here we have to go beyond nice words and intentions; we really need to set up clear mechanisms and governance structures."
Cervera-Navas' remarks came during a discussion with U.K. and U.S. regulators about the international landscape of AI regulation, a field which so far has seen only one major framework pass. The act is likely to be published in the Official Journal of the EU sometime this summer, setting in motion an enforcement timeline beginning six months from publication. Speakers said cooperation at the domestic and international level will be critical as the world decides who should take the lead on AI governance and how that work will interact with other data governance principles.
The EDPS is doing its part to prepare EU institutions for AI governance best practices. The office released guidance Monday on privacy considerations for institutions around the development and use of generative AI. In a statement, EDPS Wojciech Wiewiórowski described the guidelines as "a first step towards more extensive recommendations in response to the evolving landscape."
The question of who handles governance work at the national level was left up to EU member states, and Cervera-Navas said he anticipates some states will lean on data protection authorities because of their existing knowledge. But he argued the work of data protection and AI governance should be considered separate duties.
"Data protection authorities are kind of constitutional bodies because we are upholding fundamental rights," he said. Whereas, "Our role as AI authorities is more similar to market surveillance. … You have to talk to them in the language they speak."
In the U.S., the conversation on AI has largely unfolded through the lens of the White House's executive order on AI, which set in motion guidance frameworks from multiple agencies and a hiring spree of AI officers within the government. U.S. AI Safety Institute Director Elizabeth Kelly told AIGG attendees she considers the management of that plan in buckets: innovation, gathering information on large language models, and how AI is applied within the government.
While many agencies are involved in that work, Kelly said it is critical to tailor involvement to those with the specific skills needed in AI governance.
"We have existing authorities, and we need to make sure that they are fit for the purpose of regulating this new technology," she said.
Part of the governance challenge is that many people are looking at governance either through an ethics and human rights lens or an economic one, said Kate Jones, the CEO of the Digital Regulation Cooperation Forum. This means "the groundwork still isn’t settled" around governance despite more than 50 countries putting forward national strategies, she said.
"It's absolutely essential to bring those different perspectives together," she said. "None of them can have a monopoly on AI governance, and it's really important that they work together."
But Jones argued a disconnect remains in some industries around the need to understand the role of privacy in AI governance.
"I can go to a FinTech discussion and people there have relatively little awareness of data protection discussions and perhaps no awareness of these kinds of discussions that are happening," she said. "I think that what companies need to do is to make sure that they are joining up the dots internally, because I'm not sure yet that is necessarily happening."
Kelly said there is also a divergence between how much money is being spent on AI development and how much on its safety. She said she was "heartened" by strides companies are making to improve safety, but that the divide illustrates the need for global cooperation among regulators and for sharing resources, "So we're not trying to duplicate or compete, but can be working together to really advance AI safety, because keeping up with the innovation is going to be a huge challenge for any government bodies."
Part of the anxiety facing regulators is ensuring companies take the data protection elements of the AI Act seriously rather than treating it as "window dressing," as some did with the EU General Data Protection Regulation, Cervera-Navas said.
Microsoft Chief Privacy Officer Julie Brill, the moderator of the discussion, said it is then up to regulators to make that message clear, because data protection is going to be a "hundredfold" more important with AI.
"You really need to make sure that data access controls are really strong, that your security controls are really strong," she said.