Despite the passage of a major artificial intelligence framework in the EU, the head of Meta’s global affairs warned there is a risk of fragmentation in AI policies unless global entities can find a regulatory through line.
Nick Clegg told Center for Democracy & Technology President and CEO Alexandra Reeve Givens during the closing general session of the IAPP AI Governance Global 2024 that there is a tendency among global regulators to see the passage of a law as a measure of success alone, referring to the celebration from some stakeholders of the passage of the EU AI Act.
But Clegg said the real test of the act's success is yet to come.
"The test of success is what is the content of it? And is it applied in a way that is consistent with the reality, rather than the sort of sometimes misperception, of the technology? Can it be applied evenly across the world?" he said.
The AI Act is heralded as the first major AI policy to put in place definitive bans around some uses and guardrails around high-risk uses. Many wonder if it will become the standard setter for AI, as the EU General Data Protection Regulation was for privacy and data protection. Clegg's comments highlight the concerns among some of the world's biggest AI developers, who continue to develop their technologies at a breakneck pace while regulators haggle over the rules.
Clegg, a former U.K. deputy prime minister, said he is encouraged to see regulators responding more quickly to AI than they did to other industries, such as social media, noting there can be a time lag between when a technology is developed and when politicians begin governing it. But he also indicated politicians are under pressure to respond to fears and anxieties, something he observed last year as commentators rushed to ascribe intentions to algorithms.
"These systems are not human," Clegg said. "They are highly synthetic, sophisticated, versatile, data-hungry, pattern-recognition prediction systems. They don't know anything about the world."
Matt Brittin, president of Google's EMEA business and operations, took a different tack on the state of AI regulatory efforts. He said that, while Google supports regulation, it is not going to wait for policies to fall into place as it develops its products. He said there are many near- and long-term risks associated with AI, but the greatest is "missing the moment."
"We are still in the very early stages of AI and it brings a whole array of challenges, a whole array of risks. But it is also, I believe, the opportunity of the century, and that's why it's so important to come together around this," he said.
It is still too early to understand AI's total capacity to change the world, according to Brittin. He recalled a conversation with an Italian data regulator who compared a fork to technology, noting it could be used to eat spaghetti — or stab someone in the hand.
"And I think the point was, we don't ban forks, but we have consequences for their misuse," Brittin said.
Brittin added it is important for Google to put its policies out there and share its vision to help better understand AI. Google recently published a policy paper regarding privacy in generative AI models, touching on how to embed protections in model training and grounding. He argued generative AI trained correctly can protect privacy by helping to simplify compliance.
"We don't think that privacy and AI need to be in tension, they can actually be in partnership if we get this right," Brittin said.
In contrast, EU AI Act co-rapporteur Dragoș Tudorache said the landmark AI Act came at the right time, as AI technology was gaining mainstream application. Those rules and standards do not hamper innovation, he said, but give industry a guideline to work around and a sense of direction.
Now it is time for regulators and deployers alike to ensure the AI Act works alongside other governance frameworks.
"It is a joint responsibility that we now have to make sure this model works, to prove to everyone that this (is the) form of governance for this technology," Tudorache said. "The institutions that we’re creating, the standards we’re going to develop, can deliver those objectives we had at the very beginning: To protect society and encourage innovation."