Global AI Law and Policy Tracker
This tracker identifies AI legislative and policy developments in a subset of jurisdictions.
Last updated: May 2025
Countries worldwide are designing and implementing AI governance legislation and policies to keep pace with the velocity and variety of proliferating AI-powered technologies. Efforts include comprehensive legislation, focused legislation for specific use cases, national AI strategies or policies, and voluntary guidelines and standards. There is no standard approach to bringing AI under state regulation; however, common patterns can be observed. Given the transformative nature of AI technology, the challenge for jurisdictions is to balance innovation against the regulation of risks. Governance of AI therefore often, if not always, begins with a jurisdiction rolling out a national strategy or ethics policy rather than legislating from the outset.
This pattern is evident throughout this tracker, which is compiled into a chart, map and directory with information specific to covered jurisdictions. In June 2025, the IAPP published this article providing insight on the latest global AI law and policy trends.
This resource is updated with valuable input from the global community of AI governance professionals, and we continue to welcome feedback and insights from this community. If you are aware of relevant information missing from the tracker, please share it with us at research@iapp.org.
The IAPP additionally hosts an Artificial Intelligence topic page, which regularly updates with the latest AI news and resources.
Global AI law and policy chart

This chart tracks AI legislation and policy, authorities, laws and policies in parallel professions, and adds commentary on the wider AI context in covered jurisdictions. These initiatives are under deliberation, or about to be, in countries across six continents, speaking to the global importance of AI. Given the rapid and widespread policymaking in this space, however, the tracker does not include every AI initiative in every jurisdiction on every continent.
A web-based directory version of the chart is available here.
Global AI law and policy map
This map shows which jurisdictions are in focus and covered by this tracker. It does not represent the extent to which jurisdictions around the world are active on AI governance legislation.
Global AI law and policy directory
This directory tracks AI legislation and policy, authorities, laws and policies in parallel professions, and adds commentary on the wider AI context in covered jurisdictions. Information in this directory is available as a chart here.
Argentina
Specific AI governance law or policy
Argentina has undertaken several AI policy initiatives. It has developed a draft National AI Plan to help facilitate the use and development of AI in the country.
Under Resolution 2/2023, Argentina released recommendations for trustworthy and reliable AI directed to the public sector. Argentina's Public Information Access Agency released a Guide to Responsible AI. This document focuses primarily on three points: monitoring global and domestic developments in AI governance, promoting cooperation between public and private entities, and improving AI literacy in key areas, such as transparency and data protection.
In August 2024, Argentina's congress started debating legislation to regulate the use of AI. It is expected to be modeled after the EU AI Act, which uses a risk-based approach to define obligations for providers and deployers of AI-based systems.
Relevant authorities
- National Big Data Observatory
- Ministry of Science, Technology and Productive Innovation
- National Committee for Ethics in Science and Technology
- Undersecretariat of Information and Communication Technologies
- Agency of Access to Public Information
- National Securities Commission
Other relevant laws and policies
- National Cybersecurity Strategy [In force.]
- Personal Data Protection Act [Draft update to original law.]
- Law 27,699 for the Protection of Individuals with respect to Automatic Processing of Personal Data [In force.]
- Central Bank Communication A 7724 [In force.]
- Provision 18/2015 Guide to Good Privacy Practices for Application Development [In force.]
Wider AI context
- Argentina is a party to the Organisation for Economic Co-operation and Development's AI principles. See the OECD's Policy Observatory.
- Argentina adopted UNESCO's Recommendation on the Ethics of AI.
- See Argentina's Digital Agenda 2030.
- See Argentina's Fintech Innovation Hub.
- Argentina's data protection authority, the Agency of Access to Public Information, published Resolution No. 161/23, which created the Transparency and Protection of Personal Data Program in the use of AI.
- The president's chief of staff also issued Administrative Decision No. 750/2023, creating the Interministerial Roundtable on AI.
- Argentina was the only G20 nation not to sign onto the Statement on Regulation of AI.
- Argentina plans to become a regional AI hub, including by adding nuclear power to meet demand for new AI data centers.
Australia
Specific AI governance law or policy
An Australia-first AI plan to boost capability will be developed with the aim of growing investment, strengthening AI capabilities, boosting AI skills and securing economic resilience.
In August 2024, the Australian Department of Industry, Science and Resources released the Voluntary AI Safety Standard. This standard builds on the 2023 discussion paper "Safe and Responsible AI in Australia" to support and promote consistency among best practices when developing AI. While not mandatory, the standard consists of 10 guardrails, including testing, transparency and accountability requirements. In October 2024, Australia released the AI Impact Navigator, a framework for companies to assess and measure impact and outcomes of their AI systems.
In September 2024, Australia's Digital Transformation Agency released its policy for the responsible use of AI in government. In this document, the government recognizes the potential benefits of AI and notes the public expects the government to use the technology safely and responsibly. According to the policy, government agencies must adopt several governance measures, such as naming an accountable official.
In November 2024, Australia released a committee report, recommending “new, whole-of-economy, dedicated legislation to regulate high-risk uses of AI” that would mandate guardrails and shore up existing legislation to ensure worker and creative license holders’ rights.
Australia reformed its Privacy Act, which will provide greater transparency for individuals affected by automated decisions. See the IAPP's article on the new privacy reform's top operations impacts.
Relevant authorities
- Department of Industry, Science and Resources
- Commonwealth Scientific and Industrial Research Organisation
- Office of the eSafety Commissioner
- Office of the Australian Information Commissioner
- Competition and Consumer Commission
- National AI Centre's Responsible AI Network
- National Science and Technology Council
Other relevant laws and policies
- Patents Act [In force.]
- Copyright Act [In force.]
- Privacy Act [In force.]
- Data Availability and Transparency Act [In force.]
- Consumer Data Right [In force.]
- Competition and Consumer Act [In force.]
- Compliance and Enforcement Policy for the Consumer Data Right
Australia was one of the first countries in the world to adopt AI ethics principles, which are part of a robust ethics framework.
Wider AI context
- Australia is a party to the OECD's AI principles. See the OECD's Policy Observatory.
- Australia participated in the 2023 U.K. AI Summit, which led to the Bletchley Declaration.
- Australia adopted UNESCO's Recommendation on the Ethics of AI.
- See Australia's 2025 Digital Transformation Strategy.
- The government announced it will set up an advisory body of industry and academic experts to help it devise a legislative framework around "high risk" AI applications.
- The Human Technology Institute at the University of Technology Sydney recently released The State of AI Governance in Australia.
- See the National Science and Technology Council's Rapid Response Information Report on generative AI.
- In March 2020, the government released the AI Standards Roadmap: Making Australia's Voice Heard. This separate roadmap was developed by Standards Australia and commissioned by the Australian Department of Industry, Science, Energy and Resources. The roadmap's primary goal is to "ensure Australia can effectively influence AI standards development globally."
- Australia banned DeepSeek on all federal government devices.
- Australia and Singapore signed a memorandum of understanding to deepen cooperation on AI.
- Australia signed the AI Action Summit joint statement on Inclusive and Sustainable Artificial Intelligence for People and the Planet.
Bangladesh
Specific AI governance law or policy
Bangladesh is looking to advance its AI policies and has published a National AI Strategy for 2019-2024. The strategy includes:
- Creating strategy and development roadmaps.
- Overcoming challenges with the use of AI.
- Leveraging AI for social and economic growth, and more.
Relevant authorities
Other relevant laws and policies
- Digital Security Act [In force.]
- Constitution of the People's Republic of Bangladesh [In force.]
- Right to Information Act [In force.]
- Copyright Act [In force.]
- Telecommunications Act [In force.]
Wider AI context
- Bangladesh adopted UNESCO's Recommendation on the Ethics of AI.
- See Digital Bangladesh.
Brazil
Specific AI governance law or policy
Brazil published an AI Strategy, as well as a summary. The strategy proposes to finance research projects that apply ethical solutions, establish technical requirements that advance ethical applications, develop techniques to mitigate algorithmic bias, create parameters around human intervention where automated decisions may create high-risk situations, and implement codes of conduct to encourage traceability and safeguard legal rights. Brazil also aims to encourage data sharing in line with its data protection law, the LGPD, create an AI observatory for measuring impact, and disseminate open-source code for identifying discriminatory trends.
Brazil's Senate has approved Bill 2338/2023, a comprehensive AI bill that emphasizes human rights and creates a civil liability regime for AI developers. The lower chamber will next review the bill. The proposed AI bill would:
- Prohibit certain "excessive risk" systems.
- Establish a regulatory body to enforce the law.
- Create civil liability for AI providers.
- Require reporting obligations for significant security incidents.
- Guarantee various individual rights, such as explanation, nondiscrimination, rectification of identified biases and due process mechanisms.
In July 2023, the country's DPA, the Autoridade Nacional de Proteção de Dados, published a Preliminary Analysis of Bill No. 2338/2023, which provides for the use of AI in Brazil. Further, the ANPD has now published its final opinion on Bill 2338/2023.
Relevant authorities
Other relevant laws and policies
- General Data Protection Act [In force.]
- Civil Rights Framework for the Internet [In force.]
- Consumer Protection Code [In force.]
Wider AI context
- Brazil is a party to the OECD's AI principles. See the OECD's Policy Observatory and article on Brazil's path to responsible AI.
- Brazil participated in the 2023 U.K. AI Summit, which led to the Bletchley Declaration.
- Brazil adopted UNESCO's Recommendation on the Ethics of AI.
- See Brazil's Digital Transformation Strategy.
- The ANPD entered into a technical cooperation agreement with the Development Bank of Latin America "to develop an experimental regulatory tool" for AI-related innovation.
- Brazil committed to investing USD4 billion in domestic AI capabilities through its AI investment plan.
- The ANPD and France's DPA, the Commission nationale de l'informatique et des libertés, met to strengthen international cooperation on data protection, artificial intelligence and digital education.
Canada
Specific AI governance law or policy
Canada’s AI and Data Act failed to proceed through the House of Commons.
Canada published a code of practice for generative AI development and use in anticipation of, and to support compliance with, the then-proposed AI and Data Act.
The country also issued a Directive on Automated Decision-Making, which imposes several requirements on the federal government's use of automated decision-making systems.
Canada launched its AI Safety Institute in November 2024. The CAISI's stated goals are to direct AI research and government-directed projects on AI safety, furthering Canada's role in global AI safety initiatives.
Canada’s Competition Bureau released a report in January containing its findings from a public consultation on how AI will affect competition in Canada. The report found that AI’s rapid growth can create a host of opportunities, but risks and concerns over anti-competitive conduct come with them.
Relevant authorities
Other relevant laws and policies
- Personal Information Protection and Electronic Documents Act [In force.]
- Privacy Act [In force.]
- Consumer Product Safety Act [In force.]
- Food and Drugs Act [In force.]
- Motor Vehicle Safety Act [In force.]
- Bank Act [In force.]
- Human Rights Act [In force.]
- Criminal Code [In force.]
- Quebec's Law 25: An Act to modernize legislative provisions as regards the protection of personal information [In force.]
- Genetic Non-Discrimination Act [In force.]
Wider AI context
- Canada is a party to the OECD's AI principles. See the OECD's Policy Observatory.
- Canada also participated in the 2023 U.K. AI Summit, which led to the Bletchley Declaration.
- As part of the G7, Canada endorsed the 11 Hiroshima Process International Guiding Principles for Advanced AI systems.
- Canada also adopted UNESCO's Recommendation on the Ethics of AI.
- According to its AI Strategy, by 2030 Canada plans to achieve an AI ecosystem founded on scientific excellence, exceptional training and talent pools, public-private collaboration, and commitment to AI technologies which produce positive social, economic and environmental change for people and the planet. In achieving these goals, Canada has established three AI institutes: Amii in Edmonton, Mila in Montreal, and the Vector Institute in Toronto.
- The House of Commons' Standing Committee on Industry, Science and Technology issued a report for various AI recommendations in 2019.
- In September, a collaborative effort between researchers from Canada, the U.K. and U.S. explored the risks and benefits of utilizing AI in the nuclear industry.
- Canada joined Australia, Japan, the U.K. and U.S. in drafting a set of principles to guide adoption of AI in the telecommunications industry. These principles focused on AI growth, security and overall societal benefits.
- Canada was one of many nations to sign a Statement on Inclusive and Sustainable Artificial Intelligence for People and the Planet, which emphasized the nation’s prioritization of ethical and equitable innovation for AI.
- The Ministry of Innovation, Science and Industry released a guide for managers of AI systems to support implementation of the voluntary code of conduct.
- Canada signed the AI Action Summit joint statement on Inclusive and Sustainable Artificial Intelligence for People and the Planet.
Chile
Specific AI governance law or policy
In October 2021, Chile published its first National Policy and Action Plan on AI. The country's former Minister of Science, Technology, Knowledge and Innovation, Andrés Couve, explained the policy is built on the following:
- Development of enabling factors.
- Use and development of AI technology.
- Aspects of ethics and safety.
In May 2024, Chile introduced draft AI legislation that promotes AI while ensuring human rights. The risk-based legislation also promotes self-regulation.
Relevant authorities
Other relevant laws and policies
- Digital Economy Partnership Agreement [In force.]
- Political Constitution of the Republic of Chile [In force.]
- Law No. 19,628 on the Protection of Private Life [In force.]
- Law No. 20,285 on the Transparency of Public Functions and Access to Information on Public Administration [In force.]
- Law 21,180 on Digital Transformation of the State [In force.]
- Industrial Property Law No. 19,039 [In force.]
- Law No. 17,336 on Intellectual Property [In force.]
- Fintech Law [In force.]
- Personal Data Protection Bill No. 11,144-07 [Draft.]
Wider AI context
- Chile is a party to the OECD's AI principles. See the OECD's Policy Observatory.
- Chile participated in the 2023 U.K. AI Summit, which led to the Bletchley Declaration.
- Chile also adopted UNESCO's Recommendation on the Ethics of AI.
- See Chile's 2035 Digital Transformation Strategy.
- In 2023, Chile hosted the first Latin American and Caribbean Ministerial and High Level Summit on the Ethics of AI, with support from UNESCO and CAF.
- The Inter-American Development Bank supported the Chilean government's project to develop new transport technology applications, specifically focusing on big data and autonomous vehicles.
- On 11 Feb. 2025, Chile endorsed the Paris Charter on Artificial Intelligence. The charter recognized principles of openness and collaboration, accountability and transparency in AI governance.
- Chile signed the AI Action Summit joint statement on Inclusive and Sustainable Artificial Intelligence for People and the Planet.
China
Specific AI governance law or policy
China has been proactive in adopting legislation and regulations around the use of AI, with several national measures in place. Currently, the laws, regulations and policies governing AI in China are specific to AI use cases. These include:
- Algorithmic Recommendation Management Provisions [In force.]
- Interim Measures for the Management of Generative AI Services [In force.]
- Deep Synthesis Management Provisions [In force.]
- AI guidelines and summary of regulations [In force.]
- Scientific and Technological Ethics Regulation [In force.]
- Next Generation AI Development Plan [In force.]
China established an AI standards committee, drawing members from industry, such as Baidu, Alibaba and Tencent.
Relevant authorities
Other relevant laws and policies
- Cybersecurity Law [In force.]
- Data Security Law [In force.]
- Personal Information Protection Law [In force.]
- Shenzhen Special Economic Zone AI Industry Promotion Regulation [In force.]
Wider AI context
- China is a party to the G20 AI Principles, which are drawn from the OECD's AI principles. See the OECD's Policy Observatory.
- China participated in the 2023 U.K. AI Summit, which led to the Bletchley Declaration.
- China also adopted UNESCO's Recommendation on the Ethics of AI.
- See China's AI development plan.
- See the Ministry of Science and Technology's 2021 AI governance document on ethical norms for AI use.
- China led a successful UN resolution on AI.
- In July 2024, China released the Shanghai Declaration on Global AI Governance, which calls for global cooperation in developing AI "while ensuring safety, reliability, controllability and fairness in the process, and encourage leveraging AI technologies to empower the development of human society."
- In September 2024, China released the AI Safety Governance Framework as part of its Global AI Governance Initiative. This framework lays out China's objectives for international cooperation on AI governance as well as its view on the risks AI poses to safety.
- China's AI development depends on imported advanced-computing chips, to which the U.S. has sought to deny access while also building up its own domestic advanced chip production capacity. Most recently, TSMC complied with an export stop order from the U.S. Department of Commerce after its chips were found in Huawei's AI processor. Relatedly, China launched an investigation into Nvidia, the chipmaker powering many of the most advanced AI models, citing anti-monopoly laws. DeepSeek nevertheless developed a cutting-edge model that benchmarks with OpenAI's most advanced models, and it did so with limited access to imported chips and at a much lower cost.
- Former U.S. President Joe Biden and President Xi Jinping met in November 2024 to discuss the need to address and mitigate risks around the use of AI.
- In March 2025, the Cyberspace Administration of China issued guidance on identifying AI-generated synthetic content and, together with the Ministry of Public Security, guidance on Security Management Measures for Facial Recognition Technology Applications.
- China signed the AI Action Summit joint statement on Inclusive and Sustainable Artificial Intelligence for People and the Planet.
Colombia
Specific AI governance law or policy
Colombia has various policies addressing AI governance.
Relevant authorities
Other relevant laws and policies
- Personal Data Protection Law [In force.]
- Habeas Data Law, Law 1266 amended by Law 2157 of 2021 [In force.]
- Decree 338 [In force.]
Wider AI context
- Colombia is a party to the OECD's AI principles. See the OECD's Policy Observatory.
- Colombia also adopted UNESCO's Recommendation on the Ethics of AI.
- Colombia published an Ethical Framework that reiterates best practices, suggestions and recommendations on how best to integrate ethical principles into the use of AI in projects, primarily for the benefit of public sector entities.
- An AI Task Force was created in partnership with the CAF to bolster AI progress.
Egypt
Specific AI governance law or policy
Egypt's National AI Strategy focuses on four pillars:
- AI for government.
- AI for development.
- Capacity building.
- International activities.
The country's other initiatives include an AI roadmap and Charter for Responsible AI.
In January 2025, Egypt’s Ministry of Communications and Information Technology released the second edition of its National AI Strategy, which outlines the nation’s projected strategic intent for the development and implementation of AI over the next five years.
Relevant authorities
Other relevant laws and policies
Wider AI context
- Egypt is a party to the OECD's AI principles. See the OECD's Policy Observatory.
- Egypt also adopted UNESCO's Recommendation on the Ethics of AI.
- Egypt chaired several meetings for the Arab AI Working Group, which allows representatives from Arab countries to discuss AI strategies. See the group's chair election, second meeting and third meeting.
- See the Applied Innovation Center.
- The Senate Education Committee stressed the urgency of issuing a document to evaluate the ethics and control of AI in Egypt.
European Union
Specific AI governance law or policy
On 1 Aug. 2024, the EU AI Act entered into force. Various dates for compliance will apply in the coming years, with the first applications of the act in early 2025 and the last at the end of 2030. On 2 Feb. 2025, the first set of obligations came into force.
In brief, the act:
- Creates harmonized rules for placing AI on the EU market.
- Applies to the EU and any third-country providers and deployers that place AI systems on the EU market.
- Centers around a risk-based approach.
- Prohibits use of certain AI systems and provides specific requirements for high-risk systems.
- Creates harmonized transparency rules for certain AI systems.
Currently, the following obligations are in place:
- Organizations must ensure employees have a sufficient level of AI literacy.
- The prohibition on certain AI systems is in force, although enforcement or penalties have not yet begun.
- Guidance on the definition for AI systems has been published.
- The EU published the third draft of the General-Purpose AI Code of Practice in consultation with independent experts and stakeholders.
The EU committed to investing 200 billion euros in artificial intelligence through the InvestAI initiative, with 20 billion euros earmarked for AI gigafactories. The first wave of AI factories was designated in December 2024 and the second wave in March 2025. This initiative was later expanded upon in the Commission's AI Continent Action Plan, which seeks to strengthen the EU's AI capabilities. As part of this plan, the Commission will open an AI Act Service Desk to ensure smooth implementation of the AI Act.
The IAPP and its partners have worked diligently to analyze the EU AI Act and its implications for organizations. For more insight, check out the IAPP series on the top 10 operations impacts of the EU AI Act and the EU AI Act: 101 chart.
Relevant authorities
- EU AI Office
- EU AI Board
- European Data Protection Board
- Special Committee on AI in a Digital Age
- EDPB's ChatGPT Task Force
- Member states must establish or designate at least one authority for AI Act notifications and surveillance by 2 Aug. 2025.
- Member state AI authorities, for example:
- Member state DPAs, for example:
- France's Commission nationale de l'informatique et des libertés
- Germany's Federal Commissioner for Data Protection and Freedom of Information
- Italy's Garante
- Spain's Agencia Española de Protección de Datos
- Belgium's DPA
- Poland's Urząd Ochrony Danych Osobowych
- Austria's DPA
- Hungary's National Authority for Data Protection and Freedom of Information
Other relevant laws and policies
- General Data Protection Regulation [In force.]
- Digital Services Act [In force.]
- Digital Markets Act [In force.]
- AI Liability Directive [Dropped.]
- EU Cyber Resilience Act [In force.]
- Ethics guidelines for trustworthy AI [In force.]
- New Product Liability Directive [Draft.]
Wider AI context
- The EU is a party to the OECD's AI principles. See the OECD's Policy Observatory.
- The EU participated in the 2023 U.K. AI Summit, which led to the Bletchley Declaration.
- Although not an enumerated member of the G7, the EU endorsed the 11 Hiroshima Process International Guiding Principles for Advanced AI systems.
- The EU also adopted UNESCO's Recommendation on the Ethics of AI.
- See the EU's approach and timeline for AI development.
- Member states and the European Commission worked to create a Coordinated Plan on AI in 2018. This plan includes a table showcasing how 23 of 27 EU countries, as well as Norway and Switzerland, have progressed with their national strategies. The coordinated plan, updated in 2021, builds on the original 2018 plan.
- In January 2024, the European Commission decided to establish an EU AI Office, to "ensure the development and coordination of AI policy at European level, as well as supervise the implementation and enforcement of the forthcoming AI Act."
- Along with the U.S., the U.K. and Israel, the EU has signed onto the Council of Europe's Framework Convention on AI and human rights, democracy and the rule of law.
- Some EU member states have national AI strategies, many of which emphasize research, training and labor preparedness, as well as multistakeholder and international collaboration. For example, France's national AI strategy lays out three main objectives:
- Improve the AI education and training ecosystem.
- Establish an open data policy for implementing AI applications and pooling assets.
- Develop an ethical framework for fair and transparent use of AI.
- The EU AI Office and Singapore's AI Safety Institute signed a cooperation agreement.
- DeepSeek has been investigated by several European authorities.
- The EU signed the AI Action Summit joint statement on Inclusive and Sustainable Artificial Intelligence for People and the Planet.
India
Specific AI governance law or policy
A proposed Digital India Act would replace the IT Act of 2000 and regulate high-risk AI systems. The Indian government has advocated for a robust, citizen-centric and inclusive "AI for all" environment. A task force has been established to make recommendations on ethical, legal and societal issues related to AI, and to establish an AI regulatory authority.
According to its National Strategy for AI, India hopes to become what it calls an "AI garage" for emerging and developing economies, where scalable solutions can be easily implemented and designed for global deployment.
India's government is open to new AI regulation but wants to achieve consensus first. To this end, India released a report on AI governance guidelines, pointing to recommendations for a future regulatory framework. At the same time, India is developing standards for organizations to adhere to that lay out expectations for reliability, explainability, transparency, privacy and security. In November 2024, India released the Developer's Playbook for Responsible AI in India, detailing the government's unified industry framework for AI risk identification and mitigation.
Relevant authorities
Other relevant laws and policies
- Information Technology Act [In force.]
- The Information Technology Rules [In force.]
- Competition Act [In force.]
- Motor Vehicles Act [In force.]
- Digital Personal Data Protection Act [In force.]
- Copyright Act [In force.]
- National e-Governance Plan [In force.]
Wider AI context
- India is a party to the G20 AI Principles, which are drawn from the OECD's AI principles. See the OECD's Policy Observatory.
- India participated in the 2023 U.K. AI Summit, which led to the Bletchley Declaration.
- India also adopted UNESCO's Recommendation on the Ethics of AI.
- NITI Aayog, the government's public policy think tank, launched the AI Research, Analytics and knowledge Assimilation platform to elaborate on AI requirements in India.
- See India AI, an umbrella program of the Ministry of Electronics and Information Technology.
- India signed the AI Action Summit joint statement on Inclusive and Sustainable Artificial Intelligence for People and the Planet.
Indonesia
Specific AI governance law or policy
In 2020, Indonesia released the National Strategy on AI as part of AI Towards Indonesia's Vision 2045. The strategy outlines the following five national priorities where AI is anticipated to have the biggest impact:
- Health services.
- Bureaucratic reform.
- Education and research.
- Food security.
- Mobility and smart cities.
Further, Indonesia released a Circular on AI Ethics. While not binding, it provides a reference point for formulating and establishing internal company policies for Indonesia's AI industry. Since issuing the circular, the Ministry of Communication and Informatics committed to preparing specific regulations regarding AI use and development.
The Indonesian government announced in January 2025 that it was developing comprehensive AI legislation. Officials have previously indicated that the country's AI utilization strategy focuses on five key areas: health, bureaucratic reform, education, urban development and food security.
Relevant authorities
-
expand_more
Other relevant laws and policies
- Law No. 27 of 2022 on Personal Data Protection [In force.]
- Electronic Information Law [In force.]
- Article 40 of Law No. 36 of 1999 regarding Telecommunications [In force.]
- Law No. 14 of 2008 on Public Information Transparency [In force.]
- Copyright Act [In force.]
Wider AI context
- Indonesia is a party to the G20 AI Principles, which are drawn from the OECD's AI principles. See the OECD's Policy Observatory.
- Indonesia participated in the 2023 U.K. AI Summit, which led to the Bletchley Declaration.
- Indonesia also adopted UNESCO's Recommendation on the Ethics of AI.
- See Indonesia's roadmap for industry, Making Indonesia 4.0.
- Indonesia’s Ministry of Communications and Informatics has partnered with UNESCO and completed an AI Readiness Assessment.
- Indonesia signed the AI Action Summit joint statement on Inclusive and Sustainable Artificial Intelligence for People and the Planet.
Israel
Specific AI governance law or policy
Based on its policy for regulation and ethics in AI, Israel intends to develop a uniform risk-management tool, establish a governmental knowledge and coordination center, and maintain involvement in international regulation and standardization. In general, voluntary standardization, sector-based self-regulation and modular experimentation tools, e.g., sandboxes, will be favored over a lateral framework. The following resources are available for policy guidance:
- Israeli AI Regulation and Policy White Paper: A First Glance.
- Harnessing Innovation: Israeli Perspectives on AI Ethics and Governance.
- Policy on AI Regulation and Ethics.
In September 2024, Israel’s Ministry of Innovation, Science and Technology called for experts to assist in AI policy development.
In November 2024, the Israeli government released a report for public comment on the use of AI in the private sector. The report advocates for development of AI for use in finance and a flexible approach to its regulation.
In April 2025, Israel’s Privacy Protection Authority released draft guidance for AI governance and privacy. This document summarizes obligations when using personal data in the context of AI tools.
Relevant authorities
Other relevant laws and policies
- Basic Law: Human Dignity and Liberty [In force.]
- Privacy Protection Law [In force.]
- Data Security Regulation [In force.]
- Credit Data Law [In force.]
- Copyright Act [In force.]
Wider AI context
- Israel is a party to the OECD's AI principles. See the OECD's Policy Observatory.
- Israel participated in the 2023 U.K. AI Summit, which led to the Bletchley Declaration.
- Israel's Ministry of Justice issued an opinion that machine learning will fall under the fair-use provision in the country's Copyright Act.
- Along with the U.S., the U.K. and the EU, Israel has signed onto the Council of Europe's Framework Convention on AI and human rights, democracy and the rule of law.
- Israel also contributed to a Joint Statement on Data Scraping with the Canadian government.
Japan
Specific AI governance law or policy
In 2022, Japan released a National AI Strategy. Japan promotes the notion of "agile governance," whereby the government provides nonbinding guidance and defers to the private sector's voluntary efforts to self-regulate. To this effect, Japan's government approved a new draft bill in February 2025. The bill takes a light touch on regulation and seeks to further AI innovation by requiring companies to cooperate with government safety measures. Further, when a company's use of AI involves a violation of human rights, the law permits the Japanese government to publicly name the companies involved. As of April 2025, the bill has passed the lower chamber and awaits a vote in the upper chamber.
The following white papers have been issued for policy guidance:
- AI Governance in Japan Ver. 1.1.
- Governance Guidelines for Implementation of AI Principles.
- AI Utilization Guidelines, an initiative for implementing the OECD AI Principles.
In 2023, the AI Strategy Council released draft AI Operator Guidelines, which clarify how operators should develop, provide and use AI.
In May 2024, Japan introduced draft legislation that would require various disclosures by developers and safeguard human rights.
In September 2024, Japan’s AI Safety Institute released two policies governing AI usage and development. A Guide to Red Teaming educates developers on adversarial techniques they can use to improve the safety of their AI models. The Guide to Evaluation Perspectives provides basic concepts for developers to use when conducting AI Safety Evaluations. These documents were drafted as a part of Japan’s Hiroshima AI Process initiative.
In October 2024, the Japan Fair Trade Commission released a Request for Information about the rapidly evolving generative AI market. The JFTC is seeking information from businesses involved in three market layers: infrastructure, such as computing resources; models, meaning developers of generative AI models; and applications, meaning services using generative AI.
In December 2024, the Office of the Prime Minister's AI Strategy Council released an interim report detailing its targets for artificial intelligence research and development policy. The council highlighted the importance of balancing opportunities with risks, identifying four intervention points: support for research and development initiatives; application of law and policy; promotion of effective risk management procedures; and cooperation with international initiatives.
Relevant authorities
Other relevant laws and policies
- Improving Transparency and Fairness of Digital Platforms Act [In force.]
- Financial Instruments and Exchange Act [In force.]
- Protection of Personal Information Act [In force.]
- Antimonopoly Act [In force.]
- Product Liability Act [In force.]
- Copyright Law [In force.]
Wider AI context
- Japan is a party to the OECD's AI principles. See the OECD's Policy Observatory.
- Japan participated in the 2023 U.K. AI Summit, which led to the Bletchley Declaration.
- As part of the G7, Japan endorsed the 11 Hiroshima Process International Guiding Principles for Advanced AI systems.
- Japan also adopted UNESCO's Recommendation on the Ethics of AI.
- The Social Principles of Human-Centric AI, drafted by the Council for Social Principles of Human-Centric AI, describe AI's role in Japan's "Society 5.0" and advocate that AI should be human-centric; promote education and literacy; protect privacy; ensure security; maintain fair competition; ensure fairness, accountability and transparency; and promote collaborative innovation.
- Minister of Education, Culture, Sports, Science and Technology Keiko Nagaoka declared the country's copyright laws cannot be enforced on materials used in AI training datasets.
- Japan's Ministry of Economy, Trade and Industry introduced the Contract Guidelines for AI and Data Use to assist parties involved in AI business transactions.
- See the Draft AI Research and Development Guidelines for International Discussions.
- Japan signed the AI Action Summit joint statement on Inclusive and Sustainable Artificial Intelligence for People and the Planet.
Mauritius
Specific AI governance law or policy
Mauritius published an AI Strategy. The strategy examines in depth the benefits and challenges of AI, specifically how AI impacts the country's various industries, and sets out a clear vision for the development of AI.
Other initiatives from the Mauritius government include:
Relevant authorities
Other relevant laws and policies
- Financial Services (Robotic and AI Enabled Advisory Services) Rules [In force.]
- Data Protection Act [In force.]
- National Cyber Security Strategy [In force.]
- Cybersecurity and Cybercrime Act [In force.]
- Industrial Property Act [In force.]
- Copyright Act [In force.]
- Protection against Unfair Practices (Industrial Property Rights) Act [In force.]
Wider AI context
- Mauritius also adopted UNESCO's Recommendation on the Ethics of AI.
- See the Digital Mauritius 2030 strategic plan.
- In 2019, the Minister of Technology, Communication and Innovation officially opened the workshop, Leading Innovation in Business and Government Services through AI, which is organized by the Mauritius Research and Innovation Council.
New Zealand
Specific AI governance law or policy
Many New Zealand government agencies are signatories to the Algorithm Charter, which sets out a series of ethical commitments around the development and use of algorithms. The charter provides a risk matrix to assess the likelihood and impact of algorithmic applications. The New Zealand government generally prioritizes trustworthy and human-centric AI development.
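To illustrate how a likelihood/impact risk matrix of the kind the Algorithm Charter describes can work, here is a minimal sketch. The level labels, scoring and thresholds below are hypothetical illustrations, not values taken from the charter itself.

```python
# Hypothetical sketch of a likelihood/impact risk matrix; the labels and
# thresholds are illustrative assumptions, not the Algorithm Charter's own.
LEVELS = ["low", "medium", "high"]

def risk_rating(likelihood: str, impact: str) -> str:
    """Combine a likelihood level and an impact level into an overall rating.

    Each level is scored 0-2 by its position in LEVELS; the two scores are
    summed and bucketed into an overall low/medium/high rating.
    """
    score = LEVELS.index(likelihood) + LEVELS.index(impact)
    if score >= 3:
        return "high"
    if score == 2:
        return "medium"
    return "low"

# A high-likelihood, high-impact algorithmic application rates "high".
print(risk_rating("high", "high"))
```

An agency could use such a rating to decide which charter commitments, such as peer review or public consultation, apply to a given algorithm.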
Although there is no comprehensive AI regulation in New Zealand, the current Privacy Act 2020 applies to the use of AI systems in the country. The Office of the Privacy Commissioner issued guidance on compliance with privacy law when using generative AI tools, as well as a summary. Further, the Office of the Privacy Commissioner published the Privacy Commissioner's expectations around generative AI in June 2023.
In July 2024, New Zealand's Ministry of Business, Innovation and Employment released a cabinet paper that outlines its approach to AI regulation. In it, the ministry noted, "we need to state our support for increased uptake of AI in New Zealand and be clear that we will take a light-touch, proportionate and risk-based approach to AI regulation."
The Law, Society and Ethics Working Group published the Trustworthy AI in Aotearoa guiding principles, designed to provide direction for AI stakeholders.
In December, New Zealand’s Privacy Commissioner announced its intention to issue regulation on biometrics, along with compliance guidance indicating the regulation may impact some AI use cases.
Relevant authorities
Other relevant laws and policies
- Privacy Act [In force.]
- Bill of Rights Act [In force.]
- Treaty of Waitangi [In force.]
- Human Rights Act [In force.]
- Māori Data Sovereignty Principles
- Māori Data Governance Model
Wider AI context
- New Zealand is a party to the OECD's AI principles. See the OECD's Policy Observatory.
- New Zealand also adopted UNESCO's Recommendation on the Ethics of AI.
- The New Zealand government released AI cornerstones, which will inform an eventual national AI strategy.
- See the AI Forum of New Zealand.
- "An example of governance for AI in health services from Aotearoa New Zealand," published on nature.com, has been recognized for its approach in the health sector, particularly in prioritizing the voice of Māori.
- The Office of the Privacy Commissioner is currently conducting consultation on a Biometrics Privacy Code of Practice under the Privacy Act to regulate the use of biometric technologies. If enacted, that code of practice will have the force of law under the Privacy Act.
- The Department of Internal Affairs published initial advice on Generative AI in the public service.
- In February 2025, New Zealand Internal Affairs released guidance for the responsible use of generative AI in the private sector. The document focuses on safety, privacy and accountability for generative AI implementations.
- New Zealand signed the AI Action Summit joint statement on Inclusive and Sustainable Artificial Intelligence for People and the Planet.
Nigeria
Specific AI governance law or policy
In April 2024, Nigeria hosted a workshop to devise a national AI strategy, where Minister of Communications, Innovation and Digital Economy Bosun Tijani stated the country's goal is to become a key player in the global regulation and development of AI.
In August 2024, the country released its draft national AI strategy, which recognizes the benefits and risks of widespread adoption of AI. Nigeria plans to address the ethical issues of using AI while embracing it as a driver of socioeconomic growth.
In November, the Nigerian House of Representatives introduced a bill to regulate AI usage and control in Nigeria. According to commentators, this is the third such bill introduced in the House, prompting legislators to call for harmonization.
Relevant authorities
Other relevant laws and policies
Wider AI context
- Nigeria adopted UNESCO's Recommendation on the Ethics of AI.
- Nigeria participated in the 2023 U.K. AI Summit, which led to the Bletchley Declaration.
- In 2020, the Nigerian Communications Commission released a research paper on the ethical and societal impacts of AI.
- In October 2024, the Director General of Nigeria’s National Information Technology Development Agency called for improvements to local capacity, including talent initiatives and support for research.
- Nigeria joined the countries endorsing the Paris Charter on Artificial Intelligence in February 2025.
- Nigeria signed the AI Action Summit joint statement on Inclusive and Sustainable Artificial Intelligence for People and the Planet.
Peru
Specific AI governance law or policy
Peru has drafted legislation around the use of AI, including law 3814, which would promote the use of AI "in favor of the economic and social development of the country." The law includes the following principles:
- Risk-based security standards.
- Multi-stakeholder approach.
- Internet governance.
- Digital society.
- AI privacy.
Peru also developed a National AI Strategy that aids in the promotion, development and adoption of AI in the country. The first draft includes a roadmap, goals, definitions and external context examples to further develop the strategy.
Relevant authorities
- Secretariat of Government and Digital Transformation
- Presidency of the Council of Ministers
- National Directorate of Intelligence
- Superintendence of Banking, Insurance and Pension Fund Administration
- Ministry of Justice and Human Rights
- National Authority for the Protection of Personal Data
- National Authority for Transparency, Access to Public Information and Protection of Personal Data
Other relevant laws and policies
- Supreme Decree No. 157-2021-PCM [In force.]
- Supreme Decree No. 003-2013-JUS [In force.]
- Personal Data Protection Law No. 29733 [In force.]
- Law of Transparency and Access to Public Information [In force.]
- Finance Regulation for Information Security and Cybersecurity [In force.]
- Cyber Defense Law No. 30999 [In force.]
- Law 30096 on Computer Crime [In force.]
- Financial sector Cybersecurity Framework [In force.]
- Copyright Law, Legislative Decree 822 [In force.]
Wider AI context
- Peru is a party to the OECD's AI principles. See the OECD's Policy Observatory.
- Peru also adopted UNESCO's Recommendation on the Ethics of AI.
Saudi Arabia
Specific AI governance law or policy
Saudi Arabia has a National Strategy on Data and AI, which provides a welcoming, flexible and stable regulatory framework, including incentive schemes, to attract AI companies, investors and talent. According to the strategy, Saudi Arabia aspires to be one of the leading economies utilizing and exporting data and AI after 2030, leveraging its "young and vibrant population" and "unique centralized ecosystem." The country hopes to attract outside investment by hosting global AI events and applying its influence as a tech hub within the Middle East.
Relevant authorities
Other relevant laws and policies
- Personal Data Protection Law [In force.]
- Data Management and Personal Data Protection Standards [In force.]
- Children and Incompetents' Data Protection Policy [In force.]
- Data Classification Policy [In force.]
- General Rules for the Transfer of Personal Data outside the Geographical Borders of the Kingdom [In force.]
- Data Sharing Policy [In force.]
- Freedom of Information Policy [In force.]
- Open Data Policy [In force.]
Wider AI context
- Saudi Arabia is a party to the G20 AI Principles, which are drawn from the OECD's AI principles. See the OECD's Policy Observatory.
- Saudi Arabia also adopted UNESCO's Recommendation on the Ethics of AI.
- The government of Saudi Arabia in collaboration with the Saudi Data and AI Authority signed a memorandum of understanding to create an AI center dedicated to the energy segment.
- In September 2024, the Saudi Data and Artificial Intelligence Authority released for public comment a set of guidelines for users, regulators and consumers of deepfake technology.
- In September 2024, the SDAIA signed a memorandum of understanding with the OECD to strengthen AI incident monitoring in the Middle East by enabling OECD monitoring tools to track data in Arabic. This measure ensures that Arabic-speaking nations can effectively leverage OECD’s AI monitoring tools.
- Later in September, the SDAIA partnered with Microsoft to improve the availability of the SDAIA’s Arabic LLM on Microsoft Azure and improve SDAIA’s local talent initiatives.
Singapore
Specific AI governance law or policy
Singapore, through its Personal Data Protection Commission and AI Verify Foundation, developed voluntary governance frameworks and initiatives for ethical AI deployment, data management and sectoral implementation, including:
- Model AI Governance Framework for Generative AI.
- Model AI Governance Framework.
- National AI Programmes in Government and Finance.
- Veritas Initiative, an implementation framework for AI governance in the financial sector.
- AI Verify, a governance testing toolkit.
- IPOS International, part of the Intellectual Property Office of Singapore that realizes customized IP solutions.
- Proposed Advisory Guidelines on Use of Personal Data in AI Recommendation and Decision Systems.
- Principles to Promote Fairness, Ethics, Accountability and Transparency in the Use of AI and Data Analytics in Singapore's Financial Sector.
- Implementation and Self-Assessment Guide for Organizations, a companion to the Model AI Governance Framework.
- The Monetary Authority of Singapore conducted a thematic review of bank AI model risk management practices in 2024, culminating in a paper with good practices for AI and generative AI model risk management.
- The AI Verify Foundation is starting a Global AI Assurance Pilot to understand emerging norms and best practices around the technical testing, or "assurance," of generative AI applications.
Relevant authorities
Other relevant laws and policies
- Personal Data Protection Act [In force.]
- Computer Misuse Act [In force.]
- Copyright Act [In force.]
- Patents Act [In force.]
- Competition Act [In force.]
- Cybersecurity Act [In force.]
- Protection from Online Falsehoods and Manipulation Act [In force.]
- Road Traffic Act [In force.]
- The Digital Economy Partnership Agreement [In force.]
Wider AI context
- Singapore is a party to the OECD's AI principles. See the OECD's Policy Observatory.
- Singapore participated in the 2023 U.K. AI Summit, which led to the Bletchley Declaration.
- Singapore also adopted UNESCO's Recommendation on the Ethics of AI.
- Based on Singapore's National AI Strategy, the city-state aims to be a global hub for AI, thereby generating economic gains and improving lives. A key tenet in Singapore's AI policy is that its citizens understand AI tech and its workforce attains the necessary competencies to participate in an AI economy.
- A "crosswalk" linking Singapore's AI Verify with the U.S. National Institute of Standards and Technology's AI Risk Management Framework was unveiled at the inaugural U.S.-Singapore Dialogue on Critical and Emerging Technologies.
- See the Primer to the Model AI Governance Framework.
- See the Trusted Data Sharing Framework.
- See the Guide to Job Redesign in the Age of AI.
- Complementing the Model Framework and ISAGO are two volumes of a Compendium of Use Cases that show "how local and international organisations across different sectors and sizes implemented or aligned their AI governance practices with all sections of the Model Framework."
- Singapore has signed a variety of agreements to cooperate on AI innovation and safety with Australia and the EU AI Office.
- Singapore signed the AI Action Summit joint statement on Inclusive and Sustainable Artificial Intelligence for People and the Planet.
South Korea
Specific AI governance law or policy
The South Korean legislature passed the AI Basic Act and plans to expand regulatory guidance in 2025. As the second comprehensive national AI law to be enacted, the legislation has been analyzed through comparison with the EU AI Act.
While there are some similarities, such as provisions for greater transparency, prior notification to users, labeling of generative AI outputs and specific measures for high-risk AI systems, the law differs in important respects, especially in the types of AI systems targeted and its blanket obligations that apply regardless of an entity's place in the AI value chain. Read the IAPP's full analysis.
South Korea has numerous policy initiatives regarding AI and technology under its National Strategy for AI, including the AI Research and Development Strategy, the Data Industry Activation Strategy, and the System Semiconductor Strategy. The nation intends to leverage its high education level, widespread acceptance of new technology and preeminent IT infrastructure to implement these initiatives.
Additionally, in August 2023, the Personal Information Protection Commission published guidance for the safe use of personal information in the age of AI.
Relevant authorities
Other relevant laws and policies
- Personal Information Protection Act [In force.]
- Monopoly Regulation and Fair Trade Act [In force.]
- Copyright Act [In force.]
- Protection and Use of Location Information Act [In force.]
- Consumer Protection in Electronic Commerce Act [In force.]
- Promotion and Communications Network Utilization and Information Protection Act [In force.]
- Credit Information Use and Protection Act [In force.]
- Product Liability Act [In force.]
Wider AI context
- South Korea adopted UNESCO's Recommendation on the Ethics of AI.
- The Digital New Deal was created by the South Korean government to promote both educational and industrial efforts on AI opportunities.
- See the AI Open Innovation Hub.
- South Korea published the Artificial Intelligence Privacy Risk Assessment and Management Model draft to provide guidance for companies looking to develop AI and a guide to synthetic data.
- South Korea has committed to building out its AI infrastructure, including the world's highest-capacity AI data center.
- South Korea has blocked DeepSeek in various ministries.
- South Korea signed the AI Action Summit joint statement on Inclusive and Sustainable Artificial Intelligence for People and the Planet.
Taiwan
Specific AI governance law or policy
Taiwan has embraced a holistic approach to developing an AI environment. The government released a draft of its AI Basic Act, which prioritizes innovation and technological advancement. The act largely follows a risk-based approach to regulating AI, while also emphasizing data protection, consumer rights and transparency requirements. The following resources have been issued for policy guidance:
- National Science and Technology Council's policy discussing AI Innovation.
- AI Taiwan Action Plan.
- AI Taiwan Action Plan 2.0.
- 2022 AI-Readiness Assessment Report.
Relevant authorities
- Fair Trade Commission
- NSTC, previously the Ministry of Science and Technology
- Ministry of Health and Welfare
- Executive Yuan of Taiwan
- Ministry of Digital Affairs
- Industrial Technology Research Institute
- Taiwan AI Center of Excellence
Other relevant laws and policies
- Personal Data Protection Act [In force.]
- Fair Trade Act [In force.]
- Cybersecurity Management Act [In force.]
- Company Act [In force.]
- Child and Youth Sexual Exploitation Prevention Act [In force.]
- Copyright Act [In force.]
- Patent Act [In force.]
- Freedom of Government Information Law [In force.]
- Financial Technology Development and Innovative Experimentation Act [In force.]
- FinTech Regulatory Sandbox Guidance
- MoST AI Technology Research and Development Guidelines
- Guidelines on the use of Generative AI [Draft.]
Wider AI context
- See the Digital Nation and Innovative Economic Development Program.
- See the 5+2 Industrial Innovation Plan.
- See Smart Taiwan 2030.
- See Taiwan AI Labs.
- See the country's Forward-looking Infrastructure Development Program.
- See the Unmanned Vehicle Technology Innovation Sandbox.
- In November 2024, the U.S. Department of Commerce ordered Taiwan Semiconductor Manufacturing to cease shipping microchips with certain specifications often used in AI applications after finding one such chip in a Huawei AI processor.
- Taiwan was one of many countries to ban the use of DeepSeek over its cross-border transfer of data to China, which violates the Ministry of Digital Affairs' guidelines for the safe use of generative AI.
United Arab Emirates
Specific AI governance law or policy
In 2017, the UAE became the first country to establish an AI ministry. The nation's Council for AI and Blockchain will oversee policies that promote an AI-friendly ecosystem, advance AI research and accelerate collaboration between public and private sectors. The UAE is poised to become a hub for AI research, collaboration, innovation and education per its National Strategy for AI. The following resources have been issued for policy guidance:
- National Program for AI.
- AI Ethics Principles and Guidelines.
- Generative AI guide.
- AI coding license.
- AI System Ethics Self-Assessment Tool.
- AI Adoption Guideline in Government Services.
- The Dubai International Financial Centre's Regulation 10 on Processing Personal Data Through Autonomous and Semi-Autonomous Systems [In force.]
In October 2024, the UAE’s cabinet approved its International Stance on Artificial Intelligence Policy. The policy highlights five priorities, emphasizing involvement in international AI initiatives for the safe and ethical development of AI technologies.
Relevant authorities
Other relevant laws and policies
- Personal Data Protection Law [In force.]
- Central Bank Rulebook [In force.]
- Federal Decree Law on Combating Rumours and Cybercrimes [In force.]
- Penal Code [In force.]
- Federal Law concerning the Regulation of Competition [In force.]
- Federal Law on Consumer Protection [In force.]
- Federal Decree Law on Copyrights and Neighbouring Rights [In force.]
- Health Data Law [In force.]
- Federal Law on the Regulation and Protection of Industrial Property Rights [In force.]
- ADGM's Data Protection Regulations 2021 [In force.]
- Federal Law on the Civil Transactions Law of the United Arab Emirates State [In force.]
- Minister of AI, Digital Economy and Remote Work Applications Office's AI Ethics Principles and Guidelines
Wider AI context
- The UAE participated in the 2023 U.K. AI Summit, which led to the Bletchley Declaration.
- The UAE also adopted UNESCO's Recommendation on the Ethics of AI.
- Abu Dhabi hosts a growing startup community, advanced machine-learning facilities and educational institutions, like Mohamed bin Zayed University, which teamed up with IBM to open the AI Center of Excellence, in addition to a new supercomputing resource for complex algorithms and large datasets. With this infrastructure in place, the UAE hopes to deploy AI in priority sectors such as energy and transportation.
- The National Program for AI published a Deepfake Guide in 2021.
- The UAE AI and Robotics Award for Good aims to "encourage research and applications of innovative solutions in (AI) and robotics to meet existing challenges in the categories of health, education and social services."
- See the country's Guidelines for Financial Institutions adopting Enabling Technologies.
- See the AI Hardware Infrastructure Report.
- As part of a larger agreement of cooperation on AI, France and the UAE agreed to invest 30 to 50 billion euros in the construction of a data center in France.
- In September 2024, the Biden administration met with the UAE’s National Security advisor to develop principles of cooperation between the two nations. A statement from the White House emphasized the importance of deepening ties to “fully realize the benefits of AI and technology."
- UAE signed the AI Action Summit joint statement on Inclusive and Sustainable Artificial Intelligence for People and the Planet.
United Kingdom
Specific AI governance law or policy
The U.K. has delayed its plans to regulate AI. The proposed bill was due in December 2024 but is now not expected before summer. A draft AI regulation from the previous session, however, was reintroduced in the House of Lords. This follows a broader trend in Europe of focusing less on regulation while maintaining a competitive edge in AI. Currently, the U.K. relies on existing sectoral laws to impose guardrails on AI systems. The following resources are available for policy guidance:
- A pro-innovation approach to AI regulation.
- Algorithmic Transparency Recording Standard Hub.
- AI Standards Hub, a new U.K. initiative dedicated to the evolving and international field of standardization for AI technologies.
- Guide to using AI in the public sector by the U.K. government.
- The Government Digital Service and the Office for AI's guide on understanding AI ethics and safety.
- The Centre for Data Ethics and Innovation's AI Governance research report.
- Guidance on the AI auditing framework from the Information Commissioner's Office.
- ICO and Alan Turing Institute's Explaining decisions made with AI.
- The U.K. released the AI Playbook, offering guidance to departments and public sector organizations for the safe and effective use of AI.
- Data (Use and Access) Bill clarifies use of data in models and has specific provisions for automated decision-making.
- The U.K. Government released an AI Opportunities Action Plan, signaling its intent to support AI development domestically.
- The U.K. seeks to build a frontier model champion to compete with foreign companies, like OpenAI.
Relevant authorities
Other relevant laws and policies
- Equality Act [In force.]
- U.K. General Data Protection Regulations and Data Protection Act [In force.]
- Consumer Protection Act [In force.]
- Financial Services and Markets Act [In force.]
- Consumer Rights Act [In force.]
- National Security and Investment Act [In force.]
- Copyright, Designs and Patents Act [In force.]
- Advanced Research and Invention Agency Act [In force.]
- National Cyber Security Centre's Assessing intelligent tools for cyber security [In force.]
- Artificial Intelligence (Regulation) Bill [Draft.]
Wider AI context
- The U.K. is a party to the OECD's AI principles. See the OECD's Policy Observatory.
- In 2023, the country hosted the AI Summit, which led to the Bletchley Declaration.
- The U.K. also adopted UNESCO's Recommendation on the Ethics of AI.
- As part of the G7, the U.K. endorsed the 11 Hiroshima Process International Guiding Principles for Advanced AI systems.
- The U.K.'s National AI Strategy includes specific action items such as launching a national AI research and insights program, developing a diverse AI workforce, enabling better data availability, creating a national strategy for AI in health and social care, applying AI systems to climate change mitigation, piloting an AI standards hub to coordinate with global AI standardization, and developing a cross-government standard for algorithmic transparency.
- The Centre for Data Ethics and Innovation published a Roadmap to an Effective AI Assurance Ecosystem, which is also part of the National AI Strategy. Further, the CDEI created an AI Assurance Guide as a companion to the roadmap.
- See the U.K. AI Safety Institute.
- The U.K., along with the U.S., Israel and the EU, has signed the Council of Europe's Framework Convention on AI and human rights, democracy and the rule of law.
- The U.K. is probing DeepSeek for its security implications.
- Qatar and the U.K. will increase collaboration on AI research.
United States
Specific AI governance law or policy
The U.S. has been active in providing guidance to government organizations and private businesses while introducing legislation targeting specific issues, such as deepfakes and discrimination. President Donald Trump quickly revoked the previous executive order on AI and signed a new executive order detailing his administration's policy on AI. The new order seeks to remove barriers to AI development and stimulate innovation. Shortly thereafter, the new administration unveiled Stargate, a USD 500 billion private endeavor. Other actions taken by the Trump administration include the issuance of two Office of Management and Budget memoranda, replacing earlier Biden administration-era memoranda: one addresses federal use of AI and the other AI procurement. Later, President Trump signed an executive order on AI education and workforce development.
The U.S. has also been active in many of the multilateral agreements on AI, for example by signing the Council of Europe's Framework Convention and promoting rulemaking at the U.N. At the state level, several bills have been enacted, such as the Colorado AI Act, along with bills that regulate AI in specific sectors, such as House Bill 3733 in Illinois. While not an exhaustive list, the following federal laws and policies could place a compliance or regulatory burden on private businesses:
- Acts and bills:
- AI Training Act [In force.]
- National AI Initiative Act (Division E, Sec. 5001) [In force.]
- TAKE IT DOWN Act [Passed.]
- Create AI Act [Draft.]
- NO FAKES Act of 2025 [Draft.]
- Nonbinding frameworks:
- Government initiatives:
- Voluntary Commitments from Leading AI Companies to Manage the Risks Posed by AI
- TTC Joint Roadmap on Evaluation and Measurement Tools for Trustworthy AI and Risk Management
- Congressional AI effort of Sen. Charles E. Schumer, D-N.Y.
- National Security Commission on AI
- Bipartisan legislative framework for AI announced by U.S. Sens. Richard Blumenthal, D-Conn., and Josh Hawley, R-Mo.
- The House joins the Senate in publishing a task force report on artificial intelligence
- AI in Government Act (Division U, Sec. 101) [In force.]
Relevant authorities
Other relevant laws and policies
- FTC Act, Section 5 [In force.]
- Fair Credit Reporting Act [In force.]
- Equal Credit Opportunity Act [In force.]
- Title VII of the Civil Rights Act [In force.]
- Americans with Disabilities Act [In force.]
- Age Discrimination in Employment Act [In force.]
- Fair Housing Act [In force.]
- Genetic Information Nondiscrimination Act [In force.]
Wider AI context
- The U.S. is a party to the OECD's AI principles. See the OECD's Policy Observatory.
- The U.S. participated in the 2023 U.K. AI Safety Summit, which led to the Bletchley Declaration.
- The U.S. also adopted UNESCO's Recommendation on the Ethics of AI.
- As part of the G7, the U.S. endorsed the 11 Hiroshima Process International Guiding Principles for Advanced AI systems.
- In general, the U.S. approach to AI governance has been slow and incremental, seeking to preserve civil and human rights for Americans throughout AI deployment and to mobilize international collaboration that upholds democratic values and mutual advancement.
- See the U.S. AI Safety Institute.
- The U.S. Senate Committee on the Judiciary's Subcommittee on Privacy, Technology and the Law held a hearing on AI legislation.
- The Bipartisan Senate Working Group on AI, led by Sen. Chuck Schumer, D-N.Y., has released a roadmap for AI policy. This document highlights the need to ensure enforcement of existing rules, tackle current threats not covered by legislation, such as the use of deepfakes in elections, prepare for long-term threats of AI use, and create a federal privacy legal framework.
- A "crosswalk" linking Singapore IMDA's AI Verify initiative with the U.S. NIST's AI Risk Management Framework was unveiled at the inaugural U.S.-Singapore Dialogue on Critical and Emerging Technologies.
- The U.S., along with the U.K., Israel and the EU, has signed the Council of Europe's Framework Convention on AI and human rights, democracy and the rule of law.
- The U.S. has sought to decouple its AI efforts from China while looking to stymie China's AI industry by restricting the chips available for Chinese companies to import. It was a surprise for U.S. companies when DeepSeek, a Chinese foundation model developer, released a model competitive with those from top U.S. developers, raising questions about whether the export restrictions were effective. The U.S. Select Committee on the CCP released a report examining these questions, which could prompt changes to the AI Diffusion Rule that curbs access to AI chips.
- State efforts to restrict or regulate AI are in full force, including actions by state attorneys general, such as Texas' Ken Paxton, against DeepSeek.
Additional AI resources