Global AI Governance Law and Policy: United Arab Emirates
This article, part of a series co-sponsored by HCLTech, analyzes the laws, policies, and broader contextual history and developments relevant to AI governance in the United Arab Emirates. The full series can be accessed here.
Published: December 2025
The United Arab Emirates has steadily built a national framework for artificial intelligence over the past decade with a focus on integrating AI into government services, economic planning, and infrastructure. In 2017, the UAE launched its first AI strategy and appointed a Minister of State for Artificial Intelligence, Digital Economy, and Remote Work Applications, initiating a government-led effort to explore how AI could be applied across public services and national development.
Since then, the UAE has expanded its efforts through the UAE National Strategy for Artificial Intelligence 2031, which outlines goals for integrating AI into sectors such as health care, education, and transportation. It is supported by initiatives that build technical capacity, attract investment, and promote responsible innovation.
The UAE's level of investment — both internally and via strategic bilateral agreements with global players and governments, such as the U.S.-UAE AI Acceleration Partnership — demonstrates the government's commitment to AI. It positions AI not merely as a computing resource but as a critical national asset.
While there is currently no dedicated AI legislation in the UAE, other than one provision of the Dubai International Financial Center Data Protection Law discussed below, the government has introduced various mechanisms to manage the development and use of AI. These include ethical guidelines, sector-specific standards, and institutional bodies such as the Artificial Intelligence and Advanced Technology Council. In parallel, the UAE has intensified its engagement in cross-border AI standards harmonization efforts, actively contributing to international forums such as ISO/IEC JTC 1/SC 42 Artificial Intelligence and cooperating with organizations like the Organisation for Economic Co-operation and Development and UNESCO on AI ethics and governance frameworks. This reflects the nation's strategic intent to align domestic AI governance with evolving global norms and reinforce trust in responsible AI adoption.
The UAE's approach to AI governance remains policy-driven with legal oversight primarily addressed through broader frameworks such as data protection and cybersecurity. Governance continues to evolve through a combination of strategic initiatives, institutional development, and non-binding guidance while formal regulatory structures specific to AI are still taking shape.
From a governance perspective, the UAE's approach represents a hybrid regulatory model — one that blends state-led strategic direction with decentralized implementation. This model is increasingly viewed as a "sandbox state" approach, where policy frameworks precede and inform eventual statutory codification. This contrasts with the EU's prescriptive AI Act and the U.S.'s sectoral self-regulation, underscoring the UAE's pragmatic and innovation-friendly stance.
The UAE's engagement with AI began in 2017 when the federal government launched the UAE Strategy for Artificial Intelligence and appointed the world's first minister of state for AI. This marked a shift in national policy toward embedding AI into public administration and economic planning. The strategy was framed as part of the broader UAE Centennial 2071 vision, which aims to position the country as a global leader in innovation and advanced technology. It also built upon the UAE's smart government and city initiatives, particularly the Smart Dubai 2021 Strategy, which laid the foundational digital infrastructure and governance frameworks that later enabled large-scale AI implementation across public services and urban ecosystems.
Early moves to establish a minister of AI preceded similar appointments in most OECD jurisdictions and positioned the country among the first nations to institutionalize AI governance at the cabinet level. This foresight reflects an understanding that AI policy cannot be confined to digital transformation portfolios alone and must be integrated with national competitiveness, education, and industrial policy.
The country's initial strategy focused on improving government performance, reducing costs, and enhancing service delivery through AI. It identified priority sectors, including transportation, health care, education, energy and space, and introduced mechanisms such as the UAE Council for Artificial Intelligence and Blockchain to oversee implementation. The strategy also called for the development of AI capabilities within government entities, including the appointment of a chief AI officer across ministries and federal bodies.
Over time, the UAE has introduced AI readiness and maturity assessment frameworks for federal and local entities, requiring ministries and agencies to track their AI adoption performance using defined key performance indicators aligned with the Artificial Intelligence Strategy 2031. This systematic approach ensures measurable progress toward embedding AI into decision-making, operations, and citizen services.
The UAE expanded its ambitions through the Artificial Intelligence Strategy 2031, which set out eight strategic objectives. These include building a reputation as an AI destination, developing a fertile ecosystem for AI innovation, integrating AI into customer-facing services, and ensuring strong governance and ethical oversight. The strategy also emphasized the importance of attracting talent, supporting research, and creating the infrastructure needed to support AI deployment. These objectives are directly aligned with the UAE's broader national competitiveness strategy and Digital Economy Vision, reinforcing how AI acts as a key lever for gross domestic product diversification beyond hydrocarbons and for advancing national productivity and sustainability goals.
To support these goals, the UAE launched several initiatives, including the establishment of the National Program for Artificial Intelligence, the Mohamed bin Zayed University of Artificial Intelligence, and partnerships with international technology firms and academic institutions. The MBZUAI has played a pivotal role in nurturing indigenous AI talent and advancing applied research in frontier areas, such as large language models, climate-tech AI, and sustainable computing, strengthening the UAE's sovereign capability and positioning it as a regional thought leader in responsible AI innovation. These efforts have been supplemented by targeted investments in AI infrastructure, such as data centers and cloud platforms, and the development of open-source AI models like Falcon 3.
In 2024, the UAE strengthened its institutional framework by establishing the Artificial Intelligence and Advanced Technology Council in Abu Dhabi. The council is tasked with developing policies and strategies related to AI research, infrastructure, and investment. It plays a coordinating role across government and industry and is intended to support Abu Dhabi's positioning as a regional hub for advanced technology. The council also reflects the UAE's growing engagement in international AI governance, including participation in multilateral forums and global standard-setting initiatives.
A notable policy theme emerging through these frameworks is the UAE's emphasis on "human-centric AI," ensuring that technological advancement remains aligned with ethical, transparent, and inclusive development. This approach resonates with global responsible AI principles, embedding fairness, accountability, privacy, and security at the core of the nation's AI transformation journey.
Although the UAE has not enacted a dedicated AI law, it has issued ethical guidelines and charters to guide responsible development. The UAE Charter for the Development and Use of AI, released in 2024, sets out twelve principles covering human oversight, data privacy, transparency, and fairness. They inform future regulatory developments and guide both public and private sector actors.
Legal oversight of AI is currently limited and primarily addressed through broader regulatory frameworks that intersect with AI-related activities. Federal Decree-Law No. 45 of 2021 Regarding the Protection of Personal Data includes provisions relevant to automated processing. The Dubai International Financial Centre has amended its Data Protection Law to address AI-related transparency, governance and accountability. Although these measures form part of broader legal frameworks, they do not constitute standalone, comprehensive AI legislation like the EU AI Act.
From a comparative law standpoint, the UAE's incremental legislative layering — where AI governance is diffused through data, cyber, and consumer protection regimes — illustrates a functionally distributed regulatory model. Such a model may risk regulatory fragmentation, especially in cross-sectoral AI use cases, such as generative AI in finance or health care, that fall between jurisdictional mandates.
The country's approach to AI governance has therefore been shaped more by policy frameworks and institutional developments than by legal regulation. It reflects a model focused on enabling innovation while gradually building the structures needed to address emerging risks and regulatory challenges.
At present, the UAE does not have dedicated legislation exclusively governing AI. Oversight falls across various government agencies and bodies, such as the Artificial Intelligence Office of the Ministry of Health and Prevention for the health care sector, the Telecommunications and Digital Government Regulatory Authority, and the Ministry of State for Artificial Intelligence, Digital Economy and Remote Work Applications Office, also known as the Ministry of AI. Additionally, the UAE Data Office is empowered to investigate data breaches linked to AI systems; the UAE Council for Artificial Intelligence has been established as a specialized committee to strengthen the governance and coordination of AI initiatives across government entities.
Furthermore, as AI becomes more embedded in decision-making and automation, the concept of AI liability is emerging, raising important questions about how existing tort, contract, and product safety laws may be interpreted to address harms or errors arising from autonomous systems. The country's evolving regulatory approach will likely continue to refine these accountability mechanisms, balancing innovation with the protection of individual and societal interests.
In practice, many UAE organizations navigate a mosaic of overlapping obligations — data privacy under the PDPL, content regulation under media laws, and cyber risk under Federal Law No. 34 of 2021. For corporate compliance officers, this necessitates an AI compliance-by-design mindset, embedding algorithmic audit trails, explainability logs, and ethical impact assessments from procurement to deployment. Such internal governance measures are likely to form the baseline for any forthcoming federal AI law.
Personal Data Protection Law
The UAE Personal Data Protection Law is the country's first comprehensive federal data protection framework, enacted in 2022 to regulate the collection, processing, and storage of personal data across the private sector. It is modeled on the EU General Data Protection Regulation, incorporating core principles such as lawfulness, fairness, transparency, purpose limitation, data minimization and accountability.
Although the PDPL does not contain standalone provisions dedicated to AI, it has implications for AI systems, particularly those involving automated decision-making and personal data processing. Article 18 of the PDPL addresses automated processing, granting data subjects the right to object to decisions made solely through automated means and to request human intervention. The law also includes broader data subject rights, such as access, correction, and erasure, that apply to AI-driven systems handling personal data. For example, Article 15 allows individuals to request the correction or deletion of inaccurate or unlawfully processed data, which may arise in the context of machine learning models trained on personal datasets.
Additionally, the PDPL requires organizations to conduct an assessment of the impact of personal data protection in cases where processing is likely to result in a high risk to the privacy and confidentiality of individuals. This obligation is particularly relevant to AI systems that involve profiling, large-scale processing of sensitive data, or decisions with legal or similarly significant effects.
At the time of publication, the PDPL is in force but not yet enforceable. Enforcement will begin following the expiry of a six-month grace period granted to organizations once the implementing regulations are published. These regulations, which are expected to provide further clarity on compliance obligations, have not yet been released. Additionally, the UAE Data Office, designated as the federal data protection regulator, is not yet fully operational.
Consumer Protection Law
Federal Law No. 15 of 2020 on Consumer Protection, as amended, together with its implementing regulations, restricts businesses' use of collected consumer data to the execution of the relevant transaction. Any additional use requires informed consumer consent, including use of the data for profiling, analysis, pattern recognition and predictive purposes, all areas where AI may play a role.
Cybercrime Law
Federal Law No. 34 of 2021 on Countering Rumours and Cybercrime includes various broad provisions that define offenses relating to the use of computing systems and information networks. These offenses include invasion of privacy and unauthorized access to records and information. An article within the law criminalizes processing activities such as acquisition, disclosure and modification of personal data without authorization. To the extent that AI models are trained on data from various sources, it is important to consider the provenance of such data and the legal bases on which it is legitimately shared and processed to avoid inadvertent exposure under the Cybercrime Law.
From a policy-risk perspective, the Cybercrime Law acts as an implicit AI accountability statute, given its broad language around data misuse, defamation, and misinformation. The absence of intent thresholds in several provisions means that developers and deployers of generative AI tools face strict exposure, even where harm is unintentional.
The Cybercrime Law also criminalizes the online publication of media content that infringes the country's media laws, spreads "fake news" including the use of bots to disseminate inaccurate information, harms the interests of the state or attacks foreign states, and contains misleading advertising. The use of generative AI in creative processes or for amplifying messages can pose serious criminal liability risks if not carefully managed.
Defamatory content is a criminal as well as a civil offense, and the truth of a statement is not necessarily a defense to a defamation allegation. The test is not whether the statement is true, but whether it exposes the target to contempt and ridicule and whether it was reasonable to make the statement. On that basis, any generative AI tool that creates images or depictions of real or identifiable persons, or describes them, risks committing defamation if not moderated.
Financial Free Zones
The UAE's legal system is composed of federal laws that apply nationwide; emirate-level legislation, such as in Dubai and Abu Dhabi; and regulations issued by various free zones. Among these are the DIFC and the Abu Dhabi Global Market, two financial free zones with independent legal frameworks including their own data protection regimes. These jurisdictions are excluded from the scope of the PDPL and operate under separate supervisory authorities.
The DIFC has introduced targeted provisions within its data protection framework to address the use of AI and autonomous systems. Regulation 10 of the DIFC Data Protection Regulations governs the use of autonomous and semi-autonomous systems in personal data processing, including AI, and sets out specific obligations for organizations deploying such technologies.
Regulation 10 establishes a set of general design principles for AI systems, requiring that they be developed and operated in accordance with ethical, fair, transparent, secure, and accountable standards. Systems must be able to process personal data only for purposes that are either human-defined or human-approved or for purposes defined by the system itself, strictly within human-defined constraints. These principles are intended to mitigate bias, ensure explainability, protect data confidentiality, and establish clear lines of responsibility.
The DIFC's Regulation 10 is one of the first subnational AI governance instruments globally, predating comparable measures in most G20 jurisdictions. Its principles echo the OECD's 2019 AI Recommendation and the ISO/IEC 42001 AI management system standard, suggesting the DIFC's intent to align with international regulatory interoperability frameworks.
The regulation also imposes detailed notice and transparency requirements. Where autonomous systems are used in applications or services that process personal data, deployers and operators must provide clear and explicit notice to users at the point of initial use. This notice must describe the nature of the system, including whether it operates independently of human direction, and explain its impact on individual rights.
Entities engaging in high-risk processing must appoint an autonomous systems officer, with responsibilities comparable to those of a data protection officer, and ensure that their system complies with any audit and certification requirements established by the DIFC commissioner of data protection.
In support of these obligations, a certification framework has been developed to assist organizations in demonstrating compliance. This framework is intended to align with international standards and provide a basis for future audit and certification requirements that may be established by the DIFC commissioner of data protection.
Health care sector
Emirate-level authorities have taken steps to develop AI-specific policies in the health care sector. The Dubai Health Authority issued a policy framework for the use of AI in health care services, outlining requirements for transparency, accountability, and patient safety in AI-enabled systems used across Dubai's health infrastructure. Similarly, the Abu Dhabi Department of Health published its Policy on the Use of Artificial Intelligence in the Healthcare Sector in 2018. The policy applies to all licensed health care providers, insurers, researchers, and pharmaceutical manufacturers operating in Abu Dhabi and sets out principles for responsible AI adoption, including risk management, data governance, and patient safety.
Importantly, both frameworks anticipate the evolution of AI oversight mechanisms, including potential certification, validation, and audit requirements for AI-based medical devices and software-as-a-medical-device solutions. These forward-looking provisions align with global regulatory trends seen under frameworks such as the U.S. Food and Drug Administration's AI/machine learning-based medical device guidance and the EU Medical Device Regulation, underscoring the UAE's intent to harmonize its health care AI governance with international best practices.
The health care sector shows how the UAE is turning AI policy into practical regulation. Abu Dhabi and Dubai increasingly align AI approval with existing medical-device and data-exchange processes, creating an implicit layer of oversight without new legislation. This integration, comparable in spirit to the EU's Medical Device Regulation, allows innovation to progress while maintaining patient-safety assurance and regulatory accountability.
The UAE Charter for the Development and Use of Artificial Intelligence
The UAE Charter for the Development and Use of Artificial Intelligence, issued in June 2024 by the minister of AI, serves as a cornerstone for responsible AI governance in the country. It aligns closely with the UAE Strategy for Artificial Intelligence 2031, emphasizing human well-being, safety, privacy, and transparency in the design and deployment of AI systems. The charter articulates twelve guiding principles to ensure ethical and inclusive AI implementation grounded in robust governance, accountability, and compliance with both international and domestic laws.
While the charter is not a binding legal instrument, it plays a strategic role as the ethical foundation for all sectoral AI regulatory initiatives in the UAE. Much like the EU AI Act's horizontal risk-based model, it provides a unifying ethical and governance framework that ensures coherence and consistency across diverse, highly regulated sectors, including banking, finance, public administration, media, mobility, and health care. This approach reinforces the UAE's commitment to responsible, human-centric AI while enabling innovation across the national digital ecosystem.
The charter's soft-law nature is equally significant. By remaining flexible rather than prescriptive, it enables adaptive regulatory evolution without stifling innovation — a concept increasingly endorsed in the Gulf Cooperation Council's emerging AI legal culture.
Smart Dubai AI Ethics Principles
In 2019, the Smart Dubai Office, now part of Digital Dubai, released a set of AI Ethics Principles and Guidelines to support the responsible development and deployment of AI across the emirate. These principles are intended to guide public and private sector organizations in ensuring that AI systems are designed and used in ways that promote fairness, accountability, transparency, and human benefit. The framework includes commitments to mitigate bias, ensure explainability, and uphold data security and individual rights. The guidelines emphasize that individuals should be able to challenge significant automated decisions and that accountability for AI outcomes must rest with human actors, not the systems themselves. The initiative also introduced a self-assessment tool to help organizations evaluate the ethical performance of their AI systems.
Whitepaper on the Responsible Metaverse Self-Governance Framework
The UAE's AI office, working with the Dubai Department of Economy and Tourism, has released a whitepaper on the Responsible Metaverse Self-Governance Framework, which lays out nine self-regulatory principles to help shape the metaverse's ethical and responsible growth. While it does not directly regulate AI, the whitepaper treats AI as a key building block of the metaverse and weaves AI governance considerations into the broader framework.
While there is no single comprehensive agentic AI law, the existing framework — particularly Regulation 10 of the DIFC Data Protection Regulations, the UAE Charter for the Development and Use of Artificial Intelligence and the establishment of the minister of AI — reflects the UAE's emerging approach to regulating autonomous AI systems. Sector-specific policies such as the emirate health authorities' policies on AI in health care and Law No. 9 of 2023 Regulating the Operation of Autonomous Vehicles in the Emirate of Dubai appear to reinforce this evolving framework, requiring AI systems, including those with autonomous or agentic capabilities, to operate within legal and ethical boundaries.
Importantly, the UAE is beginning to differentiate between autonomous and agentic AI in its policy discussions. Autonomous AI refers to self-operating systems functioning within predefined parameters, while agentic AI denotes systems capable of adaptive decision-making, learning, and acting on inferred intent. To manage the risks and governance challenges posed by such advanced models, the country is increasingly leveraging regulatory sandboxes, such as the Abu Dhabi Global Market's digital sandbox, to enable controlled experimentation and supervised validation of agentic AI applications in sectors like finance, logistics, and mobility.
The UAE's approach to AI governance is evolving from foundational frameworks to more tangible regulatory mechanisms. Since launching the UAE Charter for the Development and Use of Artificial Intelligence, the country established the world's first AI-enabled Regulatory Intelligence Office within the Cabinet in April 2025. The office connects federal and local laws with judicial rulings, executive procedures, and public services through a centralized AI system. This system monitors the real-world impact of laws and suggests updates based on large-scale data analysis. Officials describe this as a shift toward AI-driven regulation, aimed at accelerating legislative processes and improving responsiveness. While ambitious, experts emphasize the need for human oversight and safeguards against bias and reliability risks.
This innovation positions the UAE at the frontier of AI for regulation, not merely the regulation of AI. However, such use of AI in rule-making triggers complex jurisprudential questions about delegated cognition. For example, to what extent can predictive analytics influence legislative drafting without eroding democratic legitimacy? The legal tradition will likely evolve mechanisms to ensure human interpretive supremacy within algorithm-assisted governance.
Additionally, the Dubai State of AI Report, published in April 2025, outlines the city's commitment to shaping international AI governance through global forums and initiatives like the Dubai AI Acceleration Taskforce, which invites groups to co-develop frameworks.
Moreover, the Dubai Centre for Artificial Intelligence has introduced the "Dubai AI Seal," a verification system designed to accelerate the growth of the emirate's AI industry. Through this initiative, legally operating AI businesses of any size can submit their details via an online application process. Each application is assessed by the DCAI team using the Dubai AI Business Activity Classification System.
Approved businesses receive a personalized Dubai AI Seal, which includes a tier ranking and a unique serial number at no cost. The seal features six tiers that reflect the level of economic contribution: S, A, B, C, D, and E. Tier S represents the highest impact on Dubai's AI economy. The program aims to strengthen business credibility, protect public and private entities from irrelevant suppliers and AI-washing, and streamline access to trusted AI providers in Dubai.
The UAE Media Council also illustrates practical AI integration through its agreement with Presight to launch the Unified Media AI and Analytics Platform, designed to assess and regulate media content prior to publication. The council has highlighted risks associated with AI misuse, warning that improper use of image generation tools violates media content standards. Depicting national symbols or public figures without official approval is unlawful; AI-generated content that spreads misinformation, hate speech, or defamation, or that undermines societal values, constitutes a media offense. Penalties range from Dhs100,000 (approximately USD27,000) to Dhs1 million (approximately USD272,000), depending on severity.
As the UAE continues to build on its Artificial Intelligence Strategy 2031, the focus is shifting from strategic planning to operational execution, emphasizing the integration of AI into public services, infrastructure expansion, and the institutionalization of governance mechanisms that ensure responsible deployment. The next phase is expected to prioritize workforce development, cross-sector collaboration, and international partnerships to consolidate the country's global leadership in AI innovation.
At GITEX Global 2025, the Ministry of Human Resources and Emiratisation unveiled Eye, an AI-powered system to automate work permit processing. Leveraging intelligent document verification for passports and academic credentials, the system minimizes manual intervention and accelerates approvals — an example of how AI agents are being operationalized within core government functions to enhance efficiency and reduce costs across the labor ecosystem.
Looking ahead, the UAE's AI agenda includes expanding sovereign digital infrastructure, exemplified by the Stargate supercomputing cluster in Abu Dhabi, which will host large-scale national AI models and bolster computational capacity. The country is also heavily investing in upskilling programs, strategic partnerships, and the operationalization of responsible AI frameworks through initiatives such as the UAE Charter for the Development and Use of Artificial Intelligence and emirate-level ethical guidelines.
As AI becomes increasingly embedded in governance, commerce, and social systems, the UAE's next steps will likely involve refining regulatory coherence, institutionalizing ethical accountability, and ensuring alignment with evolving international AI governance norms. Key emerging priorities include modular AI legislation targeting high-risk systems, the Gulf Cooperation Council's Guiding Manual for the Ethics of AI Use, human capital and regulator upskilling, and cross-border data governance alignment with the GDPR and Asia-Pacific Economic Cooperation standards.
The UAE's AI journey reflects a deliberate evolution from policy vision to structured governance. By embedding responsible AI principles across institutional frameworks and advancing measurable AI maturity models, the country is setting global benchmarks for trustworthy, human-centric AI. Its future trajectory points toward a hybrid model, balancing innovation with robust ethical oversight, interoperability with global AI regulations, and transparency through governance audits and digital assurance. The UAE's growing participation in international AI forums, such as the World Government Summit and UNESCO's Policy Dialogue on AI Governance, reinforces its commitment to shaping global AI norms.
Ultimately, the UAE's approach is transforming AI governance into an enabler of trust, economic diversification and responsible digital transformation, creating a resilient foundation for an inclusive, safe, and globally respected AI-driven economy.
Articles in this series, co-sponsored by HCLTech, dive into the laws, policies, broader contextual history and developments relevant to AI governance across different jurisdictions. The selected jurisdictions are a small but important snapshot of distinct approaches to AI governance regulation in key global markets.
Each article provides a breakdown of the key sources and instruments that govern the strategic, technological and compliance landscape for AI governance in the jurisdiction through voluntary frameworks, sectoral initiatives or comprehensive legislative approaches.
Global AI Governance Law and Policy
Jurisdiction Overviews 2025
The overview page for this series can be accessed here.