More than two years ago, Italy's data protection authority, the Garante, hit Clearview AI with a 20 million-euro fine and demanded it delete all the personal data it had collected in Italy. But the fine was never collected and the data remains undeleted because of a lack of international agreement on enforcement, according to one of Italy's top data regulators.

Guido Scorza, a commissioner at the Garante, told attendees of the IAPP Global Privacy Summit 2025 that it was as if the enforcement had never happened at all, because there is no international convention ensuring an Italian decision can be enforced against a U.S. company. Clearview, a facial recognition company that amassed a large database of people's faces from the internet and sold it to law enforcement, "disappeared" and never responded to the Garante's notifications of the decision, Scorza said. The Garante determined its practices violated the EU General Data Protection Regulation.

The incident illuminates a key challenge facing data regulators, who are already tasked with enforcing privacy protections and increasingly at the center of AI governance efforts. While many countries have adopted, or are considering, AI regulations built on common risk determinations, and there is some agreement on interoperable technical standards, differences in local attitudes and legal landscapes can create barriers to ensuring those regulations have teeth, three regulators said.

"Here we are in front of a global phenomenon, and then we find in a very short amount of time, some global regulatory tools at least for the basis and the notification of the decision, are in my view completely useless," Scorza said. "Because, yes, we can adopt any kind of decision, but after that, we are not in the condition to enforce it."

Italy ran into a similar problem when it investigated DeepSeek, the Chinese generative artificial intelligence chatbot that states in its privacy notice that its data is stored in China. The Garante blocked it from operating in Italy after an initial investigation three months ago. Scorza said there was little the agency could do to enforce its decision after the company said it does not have to adhere to the GDPR.

Enforcement actions can face lengthy delays as businesses challenge decisions. The Irish Independent found in 2024 only a small fraction of the fines Ireland's Data Protection Commission had levied over the last five years had been collected.

Cross-border enforcement also has to take into account the local attitudes and regulatory landscapes of different areas, said Iagê Zendron Miola, the director of Brazil's data protection authority, the Autoridade Nacional de Proteção de Dados. Finding common principles can be difficult if one area takes a decidedly different approach to AI than another.

"We have to think about systems that interact and not to create the burden that makes it impossible for this technology that already highly operates on a transnational level and actually take places on this transnational level," he said.

Those challenges may stem from the fact that countries outside the EU, which adopted its comprehensive AI law last year, are still working out their own strategies. Brazil is currently charting its own AI regulatory path, having put in place a national digital plan and invested in the technology's infrastructure and development.

The country's version of the GDPR, the Lei Geral de Proteção de Dados, does not mention AI explicitly, but its data protection provisions and its right to review data used in making an automated decision crop up in other laws, Miola said. He noted a bill making its way through the Brazilian Legislature would also take a risk-based approach similar to the EU AI Act, although its exceptions to what is considered high-risk and its national governance model differ.

Those areas of synergy provide guideposts for cross-border regulatory agreement, Miola said. AI regulators can also look to antitrust and data protection collaborations for guidance.

"So there are risk-based approaches, the prohibition of certain unacceptable risks, stricter requirements for high risk applications. There's a focus on transparency and algorithmic impact assessment," Miola said. "So, I think there is common ground in these common features."

For smaller nations, it may not be practical to put in place bespoke regulations that clash with larger blocs' approaches. But they may be able to shape wider policy by building coalitions and influencing international best practices, said Jamin Tan, the Singapore Infocomm Media Development Authority's director of ecosystem development and engagement.

"As a small nation, we don't have the privilege of dictating, having a take it or leave it approach to global players," he said. "We have to understand what the players and the norms are and see how to help steer that in a constructive way."

While it's not practical to expect complete alignment, Tan said work done by the Organisation for Economic Co-operation and Development's AI expert working group and the UN's High-level Advisory Body on AI, both of which Singapore participates in, helps create international benchmarks others can look to.

"We've seen a common vocabulary emerge around fairness, robustness, explainability, interpretability, transparency, accountability and data security and so on," he said.

Singapore also works closely with AI safety institutes and regional partners, such as the Association of Southeast Asian Nations and the Forum of Small States, to drive policy. It has run joint exercises with Japan testing large language models to influence how model testing is conducted.

"We really spend a lot of time grappling with how to understand and interact with the global forces shaping AI development given the implications for society as a whole," Tan said. "AI is too important a field for a regulatory race to the bottom."

Caitlin Andrews is a staff writer for the IAPP.