The race to dominate the data and infrastructure that powers artificial intelligence is paired with a race to create law and policy that facilitates control over this technological revolution. The passage of the EU AI Act in 2024 signaled a high-water mark in comprehensive legislation governing AI, though a more recent trend has been to temper regulatory limits on the technology in the name of competition and innovation. As the updated Global AI Law and Policy Tracker demonstrates, many nations continue to debate impactful policies, testing new governance models as the risks and rewards of AI investments are revealed daily.

A steady stream of legislation

While the EU considers pausing implementation of part of its AI Act, other countries are progressing efforts to pass new AI legislation. For example, South Korea finalized its AI Framework Act in January 2025, which strengthens transparency and safety requirements and offers various promotional measures, such as support for research and development and for AI adoption and workforce preparation. Likewise, Japan enacted the AI Promotion Act in May 2025, a light-touch regulation that encourages companies to cooperate with government safety measures and empowers the government to publicly disclose the names of companies that use AI to violate human rights. Furthermore, China promulgated its AI Labeling Rules, which generally require service providers, as defined by China's existing suite of AI regulations, to add both explicit and implicit labels to AI-generated content.

The list of draft AI legislation is even more formidable. It includes Argentina's Bill on Personal Data Protection in AI Systems, which seeks to regulate the use of personal data to develop AI systems beyond the scope of Argentina's existing data protection law, and India's proposed Digital India Act, which aims to update India's regulatory regime vis-à-vis cyberspace and introduce provisions governing AI-generated content. Proposals also include Brazil's Bill no. 2338/2023, which would create a risk-based framework that imposes risk assessments and appeal mechanisms on high-risk systems, as well as Vietnam's Draft Law on AI, which emphasizes human-centrism, risk-based management, and regulatory distinctions depending on an entity's place in the AI supply chain.


There are also new laws that indirectly impact AI development. Australia amended its Privacy Act to regulate disclosures around automated decision-making, and the U.K.'s passage of the Data (Use and Access) Act amends the U.K. General Data Protection Regulation in ways intended to promote innovation and economic growth, such as clarifying when personal data can be used for scientific research and liberalizing the lawful bases available for automated decision-making.

The rise of AI hubs

While nations erect regulatory guardrails, many are simultaneously advancing policies that attract investment in AI development and infrastructure. Chile is a case in point, ranking first in one Latin American AI Index as it expands construction of data centers, lays additional subsea fiber-optic cables, and promotes local AI startups. On the opposite side of the continent, Brazil plans to invest USD4 billion in AI business projects, infrastructure, training initiatives, public service improvements and regulatory frameworks.

Many of the Gulf states are likewise intent on becoming AI hubs. For example, the United Arab Emirates hosts a growing startup and research community, as well as state-of-the-art supercomputing resources, including Stargate UAE, a partnership between the Emirates, the U.S. and OpenAI to build frontier-scale compute capacity in the UAE and enable AI tools across critical sectors. Additionally, Saudi Arabia intends to leverage its young and vibrant population and centralized governance ecosystem, hosting events and attracting investment to become a leading exporter of data and AI by 2030.

South Korea also seeks to be a center of AI innovation. The nation has launched the AI Open Innovation Hub, a national AI development support platform, and plans to build the world's highest-capacity AI data center. 

Deregulatory signals

Of course, deregulatory signals are also on the horizon. Top of mind is talk that the EU may postpone implementation of the AI Act, encapsulated in a Digital Omnibus on AI Regulation Proposal released by the European Commission in mid-November 2025. The proposal noted several implementation challenges, including delays in designating competent authorities, as well as a lack of harmonized standards for high-risk AI requirements and necessary guidance tools. Several changes to the act were proposed, including delayed entry into application of provisions governing high-risk AI systems to align enforcement with the availability of compliance tools. Other amendments include reduced documentation requirements for small to medium-sized enterprises and reinforced oversight powers for the AI Office regarding general-purpose AI models. While this proposal is just that, it does foreshadow future negotiations over the AI Act's rollout and regulatory bite. 

Furthermore, Australia's Productivity Commission issued a report that, in part, hedges against the over-regulation of AI. The report notes the chilling effect that burdensome regulation may have on investment and highlights the importance of pursuing regulatory goals at the lowest possible cost to innovation. Similarly, Canada's Competition Bureau published a report indicating that regulation specific to the AI sector can hinder innovation, impose burdens on growth and create barriers to entry for startups. 

The U.S. has also been at the forefront of the deregulatory trend, with one executive order from President Donald Trump that seeks to remove barriers to AI development and stimulate innovation, and another that works to unleash prosperity through deregulation. These executive orders foretold the administration's AI Action Plan, which aims to accelerate innovation and build infrastructure by, for example, removing red tape and onerous regulation. Moreover, a December 2025 executive order outlines further efforts by the White House to limit U.S. state AI laws. 

Governance by standards

Where binding legislation is lacking, operational and technical standards continue to fill the void while avoiding the pitfalls of traditional lawmaking. For example, Canada's government established the AI and Data Standardization Collaborative to develop standards based on tested, multistakeholder needs and ensure consistency across domestic and international frameworks. The collaborative advances the idea that standards make products and services safer while also fostering innovation.

The Australian Department of Industry, Science and Resources released the Voluntary AI Safety Standard, which comprises 10 guardrails for developing safe and responsible AI, including testing, transparency and accountability requirements. Balancing risk and reward, the standard aims to ensure reliable AI in high-risk settings as well as enable the flourishing of AI in low-risk settings.

Other nations working in this space include China, whose Standardization Administration released three standards to improve generative AI security, and India, whose Ministry of Electronics and Information Technology is working with industry stakeholders to develop various standards for such metrics as reliability, explainability and privacy. Likewise, Kenya's Bureau of Standards released a Draft IT AI Code of Practice to provide guidance for AI applications based on a common framework, and the U.K.'s AI Standards Hub is an initiative dedicated to global AI standardization. 

Questions over copyright

Use of copyrighted data to train AI systems remains a contentious legal issue. Recent developments on this front include a public consultation process initiated by Hong Kong's Commerce and Economic Development Bureau and Intellectual Property Department to assess potential updates to copyright laws to create an exception for computational data analysis and processing. Along these lines, the U.S. District Court for the Northern District of California found that training an AI model on copyrighted works likely qualifies as fair use. However, the court also found that storage of the same works in a central library only constitutes fair use if those works were legally obtained.

Less recently but nonetheless relevant, Japan amended its Copyright Act, which now permits the use of copyrighted works for AI development and training purposes, so long as the use is not intended to replicate the work's expressive content. Additionally, Israel's Ministry of Justice issued an opinion holding that the use of copyrighted material is permitted for machine learning purposes.

Ongoing international cooperation

Amid all this policymaking, international cooperation over AI governance continues. Singapore stands out as a leader in diplomacy, unveiling an initiative with the U.S. to create interoperability between the two nations' governance frameworks, and signing agreements with Australia and the EU AI Office to cooperate on AI safety and innovation. Similarly, Brazil's data protection authority, the Autoridade Nacional de Proteção de Dados, met with France's DPA, the Commission nationale de l'informatique et des libertés, to strengthen cooperation on AI, data protection and digital education; in addition, Brazil and Nigeria signed a memorandum to strengthen their collaboration on AI development and technology transfer. Furthermore, Canada joined several other G20 nations in drafting a set of principles to guide adoption of AI in the telecommunications industry, focusing on growth, security and societal benefits. Finally, the U.K. and Qatar committed to increased collaboration on AI research. 

Conclusion

Even as deregulatory trends arise, law and policy governing AI nonetheless continue to proliferate, if only in new and creative expressions. Some jurisdictions continue to debate comprehensive laws, others consider the efficacy of standards, while most remain convinced that international collaboration and diplomacy are integral to the successful governance of this disruptive technology. Follow the IAPP's Global AI Law and Policy Tracker to stay up to date with these evolving policy positions.

Will Simpson, AIGP, CIPP/US, is a Westin Fellow for the IAPP.