Government officials from the EU, Singapore and the U.S. made clear at the World Economic Forum's 2024 annual meeting that there is agreement on the need to have international alignment around how artificial intelligence is developed and used. How those regulations develop, however, is still an open question as countries look to balance innovation and risks.
The discussion was one of many on the WEF agenda about the risks, responsibilities and possibilities around how AI is affecting society. The converging and diverging views among panelists exemplified the challenges governments and regulators face as they try to manage an industry with a largely unwritten future.
"People decide what kind of autonomy and agency to give these systems. People decide what applications to use them for," said Arati Prabhakar, director of the White House Office of Science and Technology Policy. "These are the most human technologies you can possibly imagine — if we're gonna get governance right, we have to keep our eyes on the people, not just the technology."
Setting a standard
With some countries taking proactive steps to build regulation, panelists wrestled with how far international agreement on AI standards could go in such a competitive market.
There is an opportunity for the EU AI Act, a major proposal approaching finalization, to set the tone for AI regulation much as the EU General Data Protection Regulation, adopted in 2016, did for privacy, said Vera Jourová, European Commission Vice-President for Values and Transparency. That is not where the work ended, she said.
"We were not just passively waiting in Brussels, waiting for the others to copy and paste … we were in frequent dialogue with the United States, with many others, explaining what we have with the GDPR and what might follow," Jourová said.
"I think there's promising space for international cooperation," she added, pointing to efforts such as the G7 code of conduct on AI and a joint partnership with UNESCO.
Prabhakar said AI technology does not "stop at the borders" and that policy discussions in China, the EU, the U.S. and beyond could all have implications. But while Prabhakar said there may be international accord on some subjects, economic competition means there will inherently be differences.
"I think we just need to be clear that we will all compete economically, there will be geopolitical and strategic competition that's based on AI," she said. "Both of those will happen at the same time."
Jourová and Prabhakar's comments come as their governments are in disagreement on how to address the private sector. The U.S. seeks to remove the private sector from what could be the Council of Europe's international treaty on AI, according to Euractiv. An observer country in CoE discussions, the U.S. says companies should be exempt from the treaty unless their countries decide to include the private sector — something the European Commission is against.
Those decisions could influence where technology companies choose to invest, giving them what some fear is an outsized level of influence. United Nations Secretary General António Guterres recently warned, "Powerful tech companies are already pursuing profits with a reckless disregard for human rights, personal privacy, and social impact."
For countries like Singapore, the conversation is a little different, according to Singapore Minister for Communications and Information Josephine Teo. It has had a framework on AI since 2019 and released a draft update in January that focuses more on the accountability and content provenance elements of generative AI.
The country can independently focus on building up its AI landscape — building infrastructure to support the development and use of the technology, as well as creating the workforce to use it — alongside the greater conversation around regulation.
However, Teo added, there will be many attempts to hash out what the AI risks are and how best to deal with them. Once those discussions converge, smaller countries like Singapore will be able to use the blueprint built by others to inform their own regulations.
"We can't have rules that we made for AI developers and deploy those in Singapore only, because they do cross borders," she said. "It makes no sense, you know, for us to say, 'This set of rules applies here. If you're coming you must comply only with our rules.' There have to be international rules."
The industry factor
Companies have a role to play in crafting regulations, whether through simple consultation with policymakers or shaping policy through industry best practices. Microsoft, which joined the government officials for the WEF panel, has pursued both paths.
Microsoft gained significant traction in the AI field after investing heavily in OpenAI, the maker of ChatGPT, and was among a group of companies tapped to advise on the creation of the White House AI executive order, which largely served as a directive for U.S. government entities to begin creating guidance. The company's submissions to the executive order's stakeholder consultation came after Microsoft joined other Big Tech companies in a voluntary AI pledge set out by the Biden administration in July 2023.
Microsoft Vice Chair and President Brad Smith said during the WEF panel that regulators will likely diverge in some ways, but he argued that early efforts like the U.S. executive order, the EU AI Act and the creation of the United Nations AI advisory board show promise for a solid framework even in the absence of a single international standard.
"We have to recognize people actually care about a lot of the same things and even have some similar approaches to addressing them," he said.
Prabhakar added it would have been bad regulatory practice to just "sit in our offices and make up the answers" when crafting the AI executive order and noted civil servants, workers and smaller businesses were also consulted.
Still, "one thing that we have been completely clear about is the competition is part of how this technology is going to thrive. It's how we're going to solve the problems that we have ahead," Prabhakar said.
Teo focused more on whether there might be a spectrum of regulation going forward. There are some uses, like deepfakes, that she said demonstrate a clear harm to society and the individual, and deserve strong regulation.
Otherwise, regulators might be better served by voluntary guidelines and by observing the market to see what develops before imposing hard rules, she said. She suggested benchmarks on responsible uses of AI should fall under a lighter regulatory touch for now.
"These kinds of things are still very nascent. No one has answers just yet that are very clear, very demonstrable," Teo said.