France is set to host the AI Action Summit 10 Feb., bringing together international leaders to discuss AI governance at a time of global uncertainty.

China and the U.S. will be among the nearly 100 countries attending the summit, which will also host key private-sector AI leaders like Google and OpenAI. Discussions will focus on innovation and the economic growth of AI technology, and Reuters reports a set of nonbinding global AI principles could be unveiled after months of negotiations.

The discussions come amid a race for AI innovation that is fueling debates over AI governance and safety. Most recently, the surging popularity of China-based DeepSeek's AI chatbot has drawn the attention of global data protection authorities over the startup's handling of user data.

Among other concerns with DeepSeek are perceived surveillance risks. The U.S. recently banned government employees from using the chatbot on their devices, citing concerns over the company's potential ability to share user information with China. Australia and South Korea's governments followed suit with their own bans, with Australia citing an "unacceptable level of security risk."

While there is evidence of some common safety approaches related to national security, nations have yet to adopt a unified global approach to AI safety and governance.

Parts of the EU AI Act took effect 2 Feb., including the landmark legislation's provisions on prohibited AI practices. The European Commission offered complementary guidelines to help developers and users better understand how the act treats prohibited practices.

The Office of the Australian Information Commissioner aligns with its EU counterparts with regard to DeepSeek and other generative AI offerings. Privacy Commissioner of Australia Carly Kind indicated on LinkedIn that her office will examine the data protection concerns around DeepSeek, but will first assess "the right tool in our toolbox" to regulate "in a proportionate and responsive way to ensure the best privacy outcomes for Australia."

Kind also pointed to the need for a "cross-jurisdictional response" to AI matters given the technology "transcends borders."

"We support and watch on with deep interest in the regulatory initiatives launched by our colleagues in Italy, Belgium, South Korea and elsewhere, and are in regular communication with our peers on this and other issues," she added.

While it deals with national security concerns stemming from DeepSeek, the Trump administration is charting a path of AI innovation over safe development and use. In recent public remarks, Federal Trade Commission Commissioner Melissa Holyoak said the country has "a strong national security and economic interest in being first in AI technology," a markedly different tune from the agency than under its previous chair, Lina Khan.

Holyoak said the FTC's AI approach will move past the regulatory actions of former President Joe Biden, which included the now-rescinded executive order on safe and trustworthy AI. Rapid AI development is the agency's new goal, Holyoak said, adding the FTC should be "looking for ways to promote dynamic competition and innovation in AI, not hamper it by pursuing unclear regulations or misguided enforcement actions."

Former FTC Chair Lina Khan claimed in an op-ed for The New York Times that the FTC's previous enforcement aimed to bring in smaller companies to advance AI development while reducing unfair competition. While the U.S. is positioning itself as a top leader in AI, Khan said the best way for the U.S. to "stay ahead globally is by promoting competition at home."

Khan indicated the growth of DeepSeek and the subsequent U.S. response show a lack of competition in the U.S. AI marketplace, potentially leaving the country vulnerable to its Chinese competitors.

It is not yet fully clear how industry will respond, but Google made waves with a recent policy update removing its principles that limited harmful uses of its AI technologies, including for weapons and surveillance. Google DeepMind CEO Demis Hassabis and Senior Vice President for Technology and Society James Manyika said in a blog post the changes are due to the expansion of the technology and the "complex geopolitical landscape."

Hassabis and Manyika said the company believes governments and organizations "should work together to create AI that protects people, promotes global growth, and supports national security."

Lexie White is a staff writer for the IAPP.