Discussions on the need to establish a governance framework for artificial intelligence took off following the public release of ChatGPT last November, which showed the world the impressive pace at which large language models, and generative AI more broadly, are progressing.
The breakneck speed of AI development prompted some business leaders and technologists to call for a six-month pause on the training of powerful models. However, many have written off the initiative as an attempt by Elon Musk to play catch-up.
Another statement, signed by AI luminaries like OpenAI CEO Sam Altman and Google DeepMind CEO Demis Hassabis, went as far as to warn that AI could pose a risk of human extinction on a par with pandemics and nuclear war.
These initiatives were welcomed somewhat skeptically, not only because they were signed by the very people in the driving seat of this technological revolution, but also because they tend to shift attention to long-term risks when AI can already cause severe harm today.
Still, many initiatives for managing this incredibly complex technology have begun to flourish. As a result, there is a new competition running in parallel to the AI arms race: the race to AI governance.
A European perspective
At the legislative level, the EU is ahead of the curve. The AI Act, the world's first comprehensive AI law initially proposed in 2021, is now entering its final phase, with a consolidated text expected by the end of the year.
"The most important part of the AI Act is what is not in there," said Andrea Renda, Senior Research Fellow and Head of the CEPS Unit on Global Governance, Regulation, Innovation and the Digital Economy. Renda was one of the experts who contributed the most to shaping the legislation behind the scenes.
At the heart of the EU rulebook are two categories: AI applications that pose an unacceptable risk, which are outright banned, and those that entail significant risk for people, which must comply with a strict regime.
"These categories are written in the sand. The technology is moving too fast," Renda added, stressing the key challenge is to set up a governance system that can update the list of unacceptable and high risk over time in a transparent, multistakeholder and expert-driven manner.
However, it might take a couple of years before the AI Act is in place, although the provisions on foundation models and generative AI systems might be brought forward.
That is why the European Commission launched the AI Pact, an initiative to work with AI developers who want to comply with some of the upcoming regulation's requirements ahead of time.
Regulators in action
At the same time, European regulators have started positioning themselves to become enforcers of the AI rulebook, with data protection authorities particularly active.
Italy's data protection authority, the Garante, was the first to temporarily suspend ChatGPT over suspected data protection violations; the service was reinstated a few weeks later after OpenAI deployed some improvements.
Italy's forging ahead prompted the European Data Protection Board to set up a task force on ChatGPT. However, the collaboration has proved slow, and authorities are now waiting for OpenAI to clarify how its system works.
France's DPA, the Commission nationale de l'informatique et des libertés, also took the lead, publishing an action plan on generative AI in May that covers how to develop these AI models while protecting personal data. The CNIL is also in dialogue with technology companies like Hugging Face to support their compliance with the EU General Data Protection Regulation.
"We need clear legally binding rules to govern these systems. And we need regulators that are willing to enforce those rules. Law on the books often differs from the law in practice. The best laws in the world are useless if we don't enforce them," said Sandra Wachter, a professor of technology and regulation at the Oxford Internet Institute.
"To be successful, we need good enforcement both on a local and global level," Wachter added, stressing that "technology does not care about geography, and so it will be very important that we work on international collaboration to offer seamless protection of fundamental rights."
International collaboration on this sensitive topic is already under way, at least as far as privacy regulators are concerned. Just last week, DPAs from the G7 countries gathered in Tokyo to set out a shared vision of the crucial privacy concerns related to generative AI.
International initiatives
A plethora of AI governance initiatives is flourishing at the international level. Some of these might provide a vehicle for the EU to establish the AI Act as the global benchmark, first and foremost with the Council of Europe convention on artificial intelligence, human rights, democracy and the rule of law.
The first international treaty on AI could become binding for over 50 countries if all the participants and observers decide to sign it. Still, a push from the U.S. to exclude private companies from the convention might severely limit its impact.
This outcome would be somewhat ironic since Brussels invested significantly in building a joint AI roadmap as part of the EU-US Trade and Technology Council to develop a joint terminology of key concepts and risk management methodologies.
It was at the latest trans-Atlantic summit in Sweden at the end of May that European Commissioner for Competition Margrethe Vestager proposed a voluntary code of conduct for generative AI to build international alignment among like-minded countries.
The code of conduct is meant to build on the so-called "Hiroshima Process," based on a joint declaration announced by G7 leaders following their April summit. It calls for a common approach to AI governance, copyright protection, transparency, addressing disinformation and promoting responsible use of this technology.
"From a global standpoint where very few countries have regulations under preparation, the Hiroshima process could have great value to establish collaborations on governance mechanisms," said Kris Shrishak, a technology fellow at the Irish Council for Civil Liberties.
The U.K. is also trying to play its part, with Prime Minister Rishi Sunak floating the idea of creating a CERN for AI to conduct international research projects, as well as a global AI regulatory body modelled on the International Atomic Energy Agency.
Vestager welcomes the fact that many conversations on AI are happening in parallel. "AI of this category is something no one can monopolise," she said. "We are discussing things ranging from new recommendation systems to human extinction. It's quite a broad set of risks."
The Chinese question
Not yet invited to the table: Beijing.
In April, China's internet regulator released a proposal for regulating generative AI systems, which includes consumer protection provisions such as transparency requirements and prohibition of extensive profiling, while also introducing elements of state control.
According to Matt O'Shaughnessy, visiting fellow in the Technology and International Affairs Program at the Washington-based think tank the Carnegie Endowment for International Peace, the Chinese legislation suggests that guardrails preventing misuses that undermine public trust in AI might benefit long-term strategic competitiveness.
At a conference in Beijing earlier this month, OpenAI CEO Sam Altman said "with the emergence of the increasingly powerful AI systems, the stakes for global cooperation have never been higher," stressing that China should play a key role in shaping the safety rules for AI.
Still, when asked whether she saw China joining the AI Code of Conduct, Vestager said she would be happy if that was eventually the case but "would be happier if the drafting happened in a context where we share the same values."