As 2024 comes to a close, the global landscape for artificial intelligence policy and regulation has seen remarkable advances since the Global AI Law and Policy Tracker was last updated in February 2024. Since the last update, the EU has passed landmark comprehensive AI legislation in the form of the AI Act and the U.S. has seen state regulations bloom alongside targeted executive action.
Elsewhere, Australia, the U.K. and several South American countries have continued to forge their own paths toward developing laws and policies on AI governance. Several multilateral agreements have come about, including the Council of Europe's Framework Convention on AI. The AI Summit in Seoul resulted in agreements for cooperation among AI Safety Institutes, although China has broken off from the group originally convened at Bletchley Park while remaining committed to multilateral cooperation.
On 1 Aug., the AI Act entered into force. The "Top 10 Operational Impacts of the EU AI Act" series examines the new law's requirements in greater detail.
The AI Act also spurred the creation of two different entities, the AI Office and the AI Board, both of which will be involved with implementing the new law. The increased regulatory burden from the AI Act will be felt by any organization doing business in the EU, and compliance measures range from giving notice when interacting with an AI-based system to outright bans on certain systems.
In the U.S., Colorado, Utah and California have passed comprehensive or cross-sectoral legislation on the use of AI, and similar bills are in committee in several other states. Additionally, Illinois passed legislation specific to the use of AI in employment and recruiting. The IAPP is tracking these developments via the roadmap for AI. The U.S. has also provided the private sector with an example of how AI might be regulated by detailing the federal government's restrictions on its own AI use through the Executive Order on the Safe, Secure, and Trustworthy Development and Use of AI, as well as through guidance from the National Institute of Standards and Technology and the Office of Personnel Management.
While the U.K. did not pass any major AI legislation before the abrupt change in government leadership, the new administration affirmed its intention to regulate AI in the King's Speech, though without proposing concrete legislation. In Australia, the government has taken a different approach to governing AI; the Department of Industry, Science and Resources released the Voluntary AI Safety Standard, which builds on previous efforts to support and promote consistency among best practices when developing AI. While not mandatory, the Australian standard consists of 10 guardrails, including testing, transparency and accountability requirements.
Several South American countries are advancing their efforts to govern and encourage AI domestically. Argentina's congress started debating new AI legislation, which could take a risk-based approach similar to the AI Act. Likewise, Chile introduced similar draft AI legislation.
Brazil introduced a comprehensive AI bill to its Senate, which is explained in greater detail in the tracker, along with the Argentine and Chilean efforts. Additionally, Brazil committed to investing BRL4 billion in domestic AI capabilities through its AI investment plan. While Europe and North America have received the bulk of the attention regarding AI governance, South America should be on most people's radar, as many of the countries in that region are committed to embracing and regulating AI.
The newest addition to the global AI policy tracker is Nigeria, which has been developing a national AI strategy since April 2024. In August 2024, Nigeria released a draft of the national AI strategy, which lays out its plan to embrace AI while working to ensure it is implemented ethically. This new section details relevant information to AI governance and related fields, such as the relevant authorities and laws and policies. Nigeria has long been active in multilateral agreements on the use of AI, having adopted UNESCO's Recommendation on the Ethics of AI and participated in the 2023 U.K. AI Summit.
Australia is continuing to provide guidance in lieu of regulation with its AI Impact Navigator, which should help organizations assess and report on the impact of their proposed AI systems. The EU AI Office started drafting the General-Purpose AI Code of Practice, with the first workshop held 23 Oct. 2024. Participants, including general-purpose AI providers, discussed systemic risk assessments, technical mitigation and governance, as well as transparency and copyright-related rules. The final version of the code is expected in April 2025 and should support the implementation of the AI Act's provisions for general-purpose AI.
Back in the U.S., Sen. Ed Markey, D-Mass., introduced the AI Civil Rights Act. The bill aims to "put strict guardrails on companies' use of algorithms for consequential decisions, ensure algorithms are tested before and after deployment, help eliminate and prevent bias, and renew Americans' faith in the accuracy and fairness of complex algorithms." The U.S. and the Association of Southeast Asian Nations have released a joint statement on promoting safe, secure and trustworthy AI, marking another agreement on the future direction of AI governance.
Standing out among multilateral initiatives that concluded in 2024, the Council of Europe has received early commitments from several nations to uphold its Framework Convention on AI and human rights, democracy and the rule of law. This marks another multilateral agreement that many of the larger economies, including the U.K., EU and U.S., have agreed to participate in, alongside existing multilateral agreements such as the Organisation for Economic Co-operation and Development's AI Principles.
The convention lays out a standard for participating countries to ensure the use of AI does not interfere with fundamental freedoms and the enjoyment of human rights. Additionally, several governance requirements are included, such as transparency and notice requirements, impact assessments, and the ability for individuals to lodge complaints to relevant authorities.
Coming up, there will be a continuation of AI summits, which originated at Bletchley Park in 2023 and spurred the creation of and cooperation between national AI Safety Institutes. On 20-21 Nov., the International Network of AI Safety Institutes will meet for the first time in San Francisco. Members attending include Australia, Canada, the EU, France, Japan, Kenya, South Korea, Singapore, the U.K. and the U.S. The goal of the meeting is to begin technical collaboration before the February 2025 AI Action Summit in Paris.
The pace of AI lawmaking and policymaking picked up in 2024, a trend that is likely to continue in 2025. The EU led the way this year, and other regions that have begun this process will likely follow. Meanwhile, multilateral agreements help to shape and standardize these efforts, giving organizations a good baseline for what compliance requirements will look like across many jurisdictions in the coming years.
Richard Sentinella is the AI governance research fellow at the IAPP.