Global AI Governance Law and Policy: US
This article, part of a series co-sponsored by HCLTech, analyzes the laws, policies, and broader contextual history and developments relevant to AI governance in the United States. The full series can be accessed here.
Published: September 2025
The U.S. lacks an omnibus federal law that specifically targets artificial intelligence governance. A market-driven approach of self-regulation has traditionally been preferred over government intervention when addressing emerging privacy, civil rights and antitrust risks, reflecting an effort to foster competitive innovation.
As such, federal involvement in AI policy has come mainly through agency guidance interpreting existing statutes in the context of AI use. Additionally, executive orders issued by the last several presidential administrations have directed federal government policy and practice on AI governance, catalyzing a series of agency regulations focused on government use of AI.
The U.S. established the Center for AI Standards and Innovation, housed within the National Institute of Standards and Technology and aided by a consortium of over 280 AI stakeholders who support its mission.
Numerous states have proposed and, in some cases, enacted AI laws. Colorado was the first to enact comprehensive state-level AI regulation, focused on algorithmic discrimination. California has enacted a series of laws addressing several of the key concerns that have arisen since the advent of AI. Federal agencies, including the Federal Trade Commission, have made clear that their existing legal authorities extend to the use of new technologies, including AI.
The formal inception of AI as a field of academic research can be traced to Dartmouth College in Hanover, New Hampshire. In 1955, a group of scientists and mathematicians proposed a summer workshop, held there the following year, to test the idea that "every aspect of learning or any other feature of intelligence can be so precisely described that a machine can be made to simulate it."
Several broad strategic drivers guide the U.S.'s approach to federally regulating AI. At a national policy level, Congress and, to some extent, the current administration’s agencies have deliberately taken a light-touch and business-friendly approach.
This is founded on three key motivations. The first is a desire to see U.S. companies retain and expand their global AI leadership, particularly in competition with China. The second is the view that governmental involvement stifles innovation, development and deployment. The third is a philosophical belief that market-driven solutions are better suited than government intervention to identifying and addressing market concerns.
The AI Action Plan released in July 2025 seeks to advance these inclinations, implementing policies that accelerate AI innovation in the U.S. by dismantling regulatory obstacles, building American AI infrastructure with leaner permitting and funding incentives to foster construction and skills training, and leading in international AI diplomacy and security by promoting AI exports to allies as a default and prioritizing military and cybersecurity AI innovation for rapid government adoption.
Tortoise Media's September 2024 Global AI Index ranked the U.S. first in the world for its AI talent, infrastructure, research and development, and commercial investment. The U.S. earns the silver medal in two metrics: it lags slightly behind Italy in the operating environment category, which measures AI-related public opinion, labor mobility and treatment in legislative proceedings, and only Saudi Arabia has publicly announced more government spending on AI. Since the report's publication, however, attitudes in the public and private sectors have shifted significantly. Lawmakers are working to develop strategies around emerging AI technologies in ways that keep the U.S. at the forefront of AI development and deployment.
The U.S. federal approach to regulating AI has primarily come from actions taken by the executive and legislative branches, supplemented by increasingly active state-level initiatives. The executive branch has focused on two primary strategies: the promulgation of guidelines and standards through federal agencies, and reliance on industry self-regulation, including regulatory sandboxes, to foster flexible and innovative development.
For the most part, Congress has relied on existing legislation to adapt to the new challenges AI poses. This includes integrating AI concepts and applications into existing laws, such as civil rights, consumer protection, and antitrust, and bridging gaps as they go, rather than enacting an entirely new regulatory framework. However, states enacting their own AI legislation create a statutory patchwork of varying cross-jurisdictional rules and regulations for the private sector to navigate.
Executive Actions
On 23 July 2025, the Trump administration released America’s AI Action Plan, a broad policy document focused on fostering U.S. AI development and innovation. This plan builds on multiple previously issued AI-related executive orders, the first of which came out in 2019 during the first Trump administration.
The plan lays out the Trump administration’s vision for how the U.S. can win the global AI race, such as building energy infrastructure to power new data centers and supply chains necessary to run computationally intense models. The Trump administration sees AI as an economic engine; the website for the AI Action Plan states that "whoever has the largest AI ecosystem will set the global standards and reap broad economic and security benefits." While the plan lays out the government’s vision for AI’s economic impact and the support it needs, it is relatively limited on regulatory governance. Many of the executive orders signed by the president expand and clarify the administration’s vision for AI.
During his first administration, Trump signed Executive Order 13859, Maintaining American Leadership in Artificial Intelligence. This executive order highlights AI’s importance to national security, the economy, and public trust and establishes the American AI Initiative to guide policy. The plan focuses on driving research and development, improving access to federal data and computing resources, developing technical standards, training the workforce, promoting international and intersectoral cooperation, and protecting U.S. advantages from foreign threats. It tasks federal agencies with prioritizing AI in budgets, research, education and regulation, all under coordinated oversight.
In December 2020, Trump signed Executive Order 13960, Promoting the Use of Trustworthy Artificial Intelligence in the Federal Government. It encourages federal agencies to use AI "to improve Government operations and services in a manner that fosters public trust, builds confidence in AI, protects our Nation’s values, and remains consistent with all applicable laws, including those related to privacy, civil rights, and civil liberties." The executive order promotes responsible use through principles like accuracy and resiliency, and it tasks leadership in bodies such as the Office of Management and Budget and the Federal Chief Information Officers Council with developing guidance, criteria and use plans for AI.
Former President Joe Biden also signed several executive orders relating to AI, reflecting a preference for greater regulatory oversight, including requirements for safety testing and reporting. Executive Order 14141, Advancing United States Leadership in Artificial Intelligence Infrastructure, directs the Department of Defense and Department of Energy to identify sites on federal land on which to build AI data centers.
At the beginning of his second term, Trump signed broader executive orders containing policies and directives pertinent to AI. These include Executive Order 14148, Initial Rescissions of Harmful Executive Orders and Actions, which retracts several of the prior administration’s executive orders with the aim of reducing regulatory burdens across sectors.
Trump later signed Executive Order 14275, Restoring Common Sense to Federal Procurement, which significantly reduces the Federal Acquisition Regulation with the goal of making procurement more efficient. Executive Order 14277, Advancing Artificial Intelligence Education for American Youth, creates a task force on AI education, creates a presidential AI challenge to encourage student adoption of AI, provides for AI training and professional development for teachers, and instructs the secretary of labor to develop AI-related registered apprenticeships.
On 23 January 2025, Trump signed Executive Order 14179, Removing Barriers to American Leadership in Artificial Intelligence, which called for members of several agencies to create the AI Action Plan. On 23 July, the day the AI Action Plan was released, Trump signed additional executive orders that put some of the plan’s key points into action.
Executive Order 14318, Accelerating Federal Permitting of Data Center Infrastructure, aims to streamline environmental permitting for AI data centers by simplifying steps and removing rules in the process. Executive Order 14320, Promoting the Export of the American AI Technology Stack, creates the American AI Exports Program to promote export of "full-stack AI technology packages," which include all of the hardware and software necessary to deploy AI from start to finish, like graphics cards, the model itself and training data.
The OMB
The OMB issued two AI memos in response to Executive Order 14179. According to a White House fact sheet, the first memo, Accelerating Federal Use of AI through Innovation, Governance, and Public Trust, "gives agencies the tools necessary to embrace AI innovation, while maintaining strong protections for Americans’ privacy, civil rights, and civil liberties." It instructs federal agencies to increase their use of AI to innovate, "cut down on bureaucratic bottlenecks," and make the government run smoothly and efficiently while implementing risk management practices. To guide how the government acquires AI, the second memo, Driving Efficient Acquisition of Artificial Intelligence in Government, emphasizes prioritizing competition and American-made AI systems.
The FTC
At the U.S. AI Summit 2025 in June, FTC Commissioner Melissa Holyoak delivered a keynote speech about the rapid developments in the AI space and how they present novel antitrust enforcement challenges. The first step in determining if a company has a monopoly in a particular market is to define what the market is. She noted that AI is "a technology that is projected to be both a critical input to and potentially a competitor with almost every firm in the economy."
Holyoak further noted that AI's widespread presence in nearly all markets makes it difficult to determine which companies or products it competes with. She raised concerns that AI companies are drawing users and developers to their platforms with low-cost or free access; once those users are locked into a provider's infrastructure, the companies could raise prices. Holyoak emphasized that the FTC and the Department of Justice do not make proactive regulations, but instead intervene after a violation, enforce the laws, and issue guidance on how other companies can avoid breaking the law in the future.
In early 2025, the FTC released a preliminary staff report on large AI partnerships and investments, which provides insights on the "corporate partnerships and investments" that connect cloud computing companies with AI companies. The report finds that these partnerships could impact competition by, for example, attracting a large portion of AI talent and giving them access to "sensitive technical and business information that may be unavailable to others."
The FTC has filed numerous complaints and reached settlements with companies that allegedly use AI in deceptive ways. For example, in fall 2024 the FTC announced Operation AI Comply, an enforcement sweep against companies it claims misused "AI hype" to defraud consumers. These include several "passive income" e-commerce operations, like Ascend Ecom, that allegedly sold AI-powered software, inventory and placement in online marketplaces, promising to help consumers earn tens of thousands of dollars a month in passive income that never materialized. The FTC has settled with Ascend and others, including FBA Machine and its owner Bratislav Rosenfeld. The settlements typically bar the company or person permanently from operating any similar business.
Other enforcement actions similarly focus on companies misrepresenting the capabilities of their AI. DoNotPay, a company promising to replace human lawyers with AI, faced a complaint and consent order prohibiting it from stating or implying that its products operate like a human lawyer. Another, Rytr, offered a service that let consumers generate unlimited product reviews that allegedly misled online shoppers; it has since been barred from selling any similar service.
While the U.S. lacks a comprehensive law designed to regulate AI, Congress has been active on the AI front. It has introduced targeted AI-related bills, including the NO FAKES Act of 2024, and passed a raft of legislation, including the AI Training Act, the National AI Initiative Act of 2020, the AI in Government Act of 2020 and the TAKE IT DOWN Act of 2025. Several of these measures were enacted as smaller components of larger appropriations bills, but their presence remains noteworthy. Their scope mirrors executive branch actions designed to facilitate AI adoption within the federal government and coordinate its application among federal agencies.
The NO FAKES Act, first introduced in 2024 and recently re-introduced on 9 April 2025 as S.1367, seeks to protect the voice and visual likeness of individuals from unauthorized digitally generated recreations, such as through the use of generative AI. The law, which would preempt state legislation in the same area, would require internet gatekeepers to remove unauthorized recreations or replicas of audiovisual works, images or sound recordings.
The AI Training Act requires the director of the OMB to create an AI training program for employees of executive agencies. The National AI Initiative Act of 2020, included within a larger budget law, creates the National AI Initiative Office, which oversees and implements the U.S. national AI strategy. The AI in Government Act of 2020, also within a budget law, creates the AI Center of Excellence, which facilitates AI adoption in the federal government. The TAKE IT DOWN Act of 2025 prohibits the online publication of nonconsensual intimate visual depictions, including computer-generated images, requiring online platforms to remove them within 48 hours of notification.
The federal approach contrasts sharply with state-level initiatives, as demonstrated by Congress's consideration of a preemption provision in the One Big Beautiful Bill Act. As originally drafted, the act included a 10-year moratorium on enforcement of state-level AI legislation, further indicating the federal preference for self-regulation, but the Senate removed the provision by a vote of 99-1. The moratorium would have targeted laws that impose AI-specific duties on developers and deployers, including model registration, risk assessments, watermarking and disclosure rules, audits, and private rights of action.
Through the Senate's AI Insight Forum and bipartisan framework on AI legislation and the House of Representatives' bipartisan Task Force on AI, members of Congress have continued to explore how the legislature should address the promises and challenges of AI. The proposals have ranged from establishing a licensing regime administered by an independent oversight body to holding AI companies liable for privacy and civil rights harms via enforcement and private rights of action. They additionally call for mandatory disclosures by AI developers regarding the training data, limitations, accuracy and safety of their models.
States have moved to propose and implement comprehensive legislation to fill the gaps where the federal government has elected to exercise restraint or abstain entirely. The consequence has been a mosaic of differing and overlapping rules and regulations with varying scopes, thresholds and limitations.
Colorado’s Artificial Intelligence Act, enacted in May 2024, represents the most comprehensive state-level AI regulation to date. Initially slated to take effect 1 Feb. 2026, its effective date has been pushed back to 30 June 2026, pending governor approval. The law requires developers and deployers of high-risk AI systems to implement risk management practices and conduct impact assessments to prevent algorithmic discrimination in consequential decisions affecting housing, employment, education, health care and other critical areas.
Other states, like California and New York, have taken a sectoral approach rather than a comprehensive one to AI regulation, targeting specific industries rather than having an umbrella regulatory scheme. In 2024, California Governor Gavin Newsom signed several legislative packages around AI, defining “artificial intelligence” (California Assembly Bill 2885) and addressing many of the risks arising from its use. For example, California lawmakers sought to ensure transparency through measures such as watermarking (SB 942) and the obligation for developers to publish documentation on training data for AI systems made available publicly on the internet.
The distribution of certain AI creations was criminalized, such as nonconsensual intimate or deepfake images (SB-926 and SB-981) and child sexual abuse materials (AB-1831 and SB-1381). California also took steps to protect the acting profession and political transparency, obligating the entertainment industry to obtain consent from actors or their estates to replicate their image (AB-2602 and AB-1836). Bills also passed requiring the disclosure of AI-generated content in political advertisements during election periods (AB-2355 and AB-2839). Also enacted were a series of consumer protection laws requiring the disclosure of AI-generated voices used for robocalls (AB-2905) and health care communications (AB-3030).
Continuing the sectoral approach, in January 2025, New York enacted legislation amending its existing General Business Law to impose safety regulations on AI companions, systems that simulate ongoing human-like interactions. In January 2024, New York legislators passed legislation requiring state agencies to assess and oversee their own AI use and restricting automated decision-making without human oversight. At press time, New York is considering more expansive legislation, such as the RAISE Act, which would regulate “frontier AI models,” establishing safeguards, reporting and disclosure obligations, and other requirements for large developers of such models.
Illinois has also targeted worker protection, enacting House Bill 3773 in August 2024, which amends the Illinois Human Rights Act to regulate AI use in employment decisions. Effective 1 Jan. 2026, the law requires employers to provide notice when using AI for hiring, promotions or terminations and prohibits AI systems that discriminate based on protected characteristics.
The scope of state action is becoming extensive. In 2024, 700 AI legislative proposals were introduced: 45 states, Puerto Rico, Washington, D.C., and the U.S. Virgin Islands introduced AI bills, and 31 states, Puerto Rico and the U.S. Virgin Islands enacted legislation or adopted resolutions. Such proactive legislation is not limited to the state level, as local municipalities have weighed in as well. For instance, New York City's Local Law 144, which took effect in 2023, requires bias audits for AI tools used in employment decisions.
In line with the U.S.'s long history of favoring a self-regulatory approach to industry, informal commitments have been a key policy tool in its regulatory approach to AI. In July 2023, Amazon, Google, Meta, Microsoft and several other AI companies convened at the White House and pledged their voluntary commitment to principles around the safety, security and trust of AI. These principles include ensuring products are safe before introducing them onto the market and prioritizing investments in cybersecurity and security-risk safeguards.
NIST's AI Risk Management Framework
Perhaps the strongest example of the U.S.'s approach to AI regulation within the paradigm of industry self-regulation is NIST’s AI Risk Management Framework, released in January 2023. The AI Risk Management Framework aims to serve as "a resource to the organizations designing, developing, deploying or using AI systems to help manage the many risks of AI." To facilitate implementation of the framework, NIST subsequently launched the Trustworthy and Responsible AI Resource Center, which provides operational resources, including a knowledge base, use cases, events and training.
NTIA's AI Accountability Policy
The National Telecommunications and Information Administration's Artificial Intelligence Accountability Policy also falls into the self-regulation category. The report provides guidance and recommendations for AI developers and deployers to establish, enhance and use accountability inputs to provide assurance to external stakeholders.
The autonomous nature of agentic AI, used as automated tools for project and operations management, creates unique regulatory challenges, particularly around accountability and liability. Traditional regulatory frameworks struggle to address potentially harmful agentic AI decisions and actions because these models can coordinate and manage multiple tasks across varying functions at once, more efficiently than humans can.
This raises questions about human oversight requirements and responsibility chains. At both the federal and state levels, the U.S. does not currently have legislation specifically targeting agentic AI as a technology. Sector-specific legislation will likely apply to AI agents, especially as they are used in highly regulated industries such as finance, insurance, medicine and employment, and state laws that apply to AI in these areas will likely reach agentic systems in practice as well. U.S. agencies working on AI standards and regulation will likely include considerations for agentic AI, such as when NIST revises the AI Risk Management Framework.
This section covers regulatory actions and discussions from before the implementation of the AI Action Plan, which promises to pivot towards a more limited, market-driven approach to AI oversight. The material here remains relevant as context and record, but it reflects a different regulatory climate than the one shaping policy today.
Intellectual property
In the realm of intellectual property, efforts undertaken by the U.S. Patent and Trademark Office have centered on incentivizing innovation and inclusivity within AI and emerging technologies. The AI and Emerging Technologies Partnership program brings the USPTO together with the AI and emerging technologies communities from academia, industry, government and civil society. The partnership hosts listening sessions and provides public symposia and guidance at the intersection of AI and intellectual property.
The thorny copyright law and policy issues raised by AI have been on the radar of the U.S. Copyright Office for several years. Since its AI initiative launched in 2023, the office has held numerous public listening sessions and webinars. The Copyright Office also issued a notice of inquiry on copyright and AI to inform its future guidance.
Several important cases have been decided or are pending in the courts. Regarding AI and the fair use doctrine, Dow Jones & Company and NYP Holdings — publishers of the Wall Street Journal and New York Post — sued Perplexity AI for copyright infringement and trademark violations. The lawsuit claims Perplexity scraped copyrighted material without authorization, which the news outlets say harms their advertising and subscription revenues. The outlets also claim the results from Perplexity’s system often include the exact text of the original news articles or attribute false information to the publications. The outcome of this case could set legal precedent regarding fair use and the application of copyright law to generative AI systems.
In a lawsuit brought against Anthropic by a group of authors, the court ruled that, although purchased books can be used to train AI models, pirated books may not, as their use does not fall under the fair use doctrine. The opinion likens machine learning models digesting books to a young author learning to write by reading books to improve their technique. Absent clarity from Congress about the use of copyrighted materials to train AI, these lawsuits are likely to have a significant impact on the applicability of the fair use doctrine. If courts decide the unlicensed use of copyrighted materials infringes copyright, companies will have to pay large fees for the infringement, retrain their models on datasets of licensed materials, or both.
Employment
The Equal Employment Opportunity Commission's AI and Algorithmic Fairness Initiative was launched to ensure AI, machine learning and other emerging technologies comply with federal civil rights law. Through the initiative, the EEOC provided the public with information and guidance on the use of AI in making job decisions for people with disabilities, mitigating discrimination and bias in automated systems, and assessing the adverse impacts of the technology in employment decisions.
One of the first landmark cases to demonstrate the complexity of applying traditional anti-discrimination regulatory frameworks to algorithmic decision-making was EEOC v. iTutorGroup. The EEOC successfully sued iTutorGroup, a Chinese company hiring U.S.-based tutors to provide English language tutoring to adult students in China. The company used an automated hiring system that would reject female applicants aged 55 or older and male applicants aged 60 or older. The case resulted in a USD365,000 settlement, establishing liability obligations for employers using AI tools for employment purposes.
The second significant case, Mobley v. Workday, in litigation as of press time, is a class-action lawsuit originally filed by a worker who claims he was discriminated against based on race by Workday’s AI-powered job applicant screening system. The case is particularly significant because it addresses not just the employer, but the liability of the vendor providing the AI tools. While the judge did not find intentional discrimination by the software provider, she did not rule out that the software had a discriminatory effect on applicants and allowed the lawsuit to go forward. If the applicants succeed against Workday, it could substantially increase the duty of care for human resources software providers that use AI in the hiring process.
These cases highlight a fundamental tension in AI regulation for employment protections. AI can perpetuate or amplify historical and systemic biases previously practiced by an organization, whether known or unrealized, as a consequence of the training data it receives.
AI in legal practice
AI technology presents a unique challenge for the legal profession, raising questions about ethical obligations around attorney-client privilege and confidentiality. The efficiency that AI offers, in areas such as mergers and acquisitions and litigation, is becoming more prevalent and too difficult to ignore. However, the issues go beyond just the complexities of general AI governance.
Attorney-client privilege traditionally protects confidential communication between lawyers and clients. Some matters that are extremely complex for a person — often because heavy document loads require extensive time and effort to review and correlate — can be processed and coordinated more efficiently by AI systems and tools. The result is attractive cost savings for the client, so the allure of such tools is high.
AI systems, however, are not currently air-gapped and often require transmitting client information to third-party servers or cloud services. This is especially true for generative AI systems like ChatGPT or Claude. Using them can potentially breach privilege under the current rules, requiring legal professionals to weigh the efficiency gains AI provides against the risk of inadvertent disclosure of privileged information.
Client information entered into AI prompts may be processed and used to train future models. In other words, the information is not stored in isolation; it becomes integrated into the AI’s knowledge base and reasoning pattern. This raises the risk of unconscious application of insights gained from one client by the AI system when advising another client on matters involving competing interests.
For instance, AI excels at pattern recognition across large data sets. By processing information from multiple firms, proprietary legal strategies for deal structures and risk assessment could be unconsciously applied for the benefit of one client to the detriment of another simply because the firms involved use the same AI system. An AI system could use this information to recognize industry trends, negotiation strategies or legal vulnerabilities that should have remained compartmentalized, then unwittingly apply them to the benefit of one party in competition with another, which represents an unfair intelligence transfer. The system could also combine privileged litigation strategy shared by one firm with public court filings and inadvertently reveal adverse parties' tactical approaches to trial or settlement positions.
Continuing the patchwork approach in the U.S., several state bar associations have issued guidance on AI use in legal practice. For example:
- The Florida Bar issued Ethics Opinion 24-1, which states lawyers may use generative AI if they obtain a client's consent before using it with confidential information, investigate the AI system's security measures and retention policies to ensure privilege is maintained, and maintain direct oversight of all AI-generated work product for reliability and accuracy.
- The State Bar of California issued written guidance for lawyers, requiring anonymization of client information when using AI systems, a diligent security review of the AI systems used in consultation with IT professionals, review of terms of use to ensure client information is not used for training, and oversight of AI-generated work product for reliability and accuracy.
- The New York City Bar Association issued Formal Opinion 2024-5, providing guidance similar to that offered by California.
Until agentic AI can be properly and economically air-gapped, the concerns of privilege and conflict will remain for attorneys.
In February 2025, the G7 countries created a voluntary AI reporting framework "to encourage transparency and accountability among organizations developing advanced AI systems." The framework came out of the Hiroshima AI Process, a G7 collaboration to provide low-friction tools that can scale without binding regulation. The reporting framework invites developers of advanced systems to publish standardized reports tied to the HAIP code of conduct.
In his remarks at the Paris AI Action Summit on 11 February 2025, Vice President JD Vance urged countries to avoid "excessive regulation" and emphasized U.S. ambitions for AI growth; the U.S. and U.K. subsequently declined to sign the summit declaration focused on "inclusive and sustainable artificial intelligence."
In parallel, NIST’s Center for AI Standards and Innovation is coordinating technical work through a 280-plus member consortium on testing and standards and has cooperation agreements with leading model developers to support safety research. In January 2025, NIST and its Center for AI Standards and Innovation hosted a workshop for AI experts to "provide a comprehensive taxonomy" of agentic AI tools. NIST published "lessons learned" from the workshop in August, identifying two potential taxonomies of AI tools: one based on "what they enable the model to do," and the other focusing on what constraints limit the tool’s capabilities.
In May, the Department of Commerce rescinded the Biden-era AI Diffusion Rule, which limited exports of AI model weights and advanced chips based on a tiered country classification system. It required licenses for exporting to most countries, with potential exceptions for allied countries and presumptive license denial for countries like China and Russia. Instead, DOC stated that it would issue a replacement rule in the future with fewer sweeping regulations.
In the U.S., law and policy developments related to AI are in an acceleration phase. Here’s a limited preview of what to expect in the near future.
- New AI Risk Management Framework: The AI Action Plan instructs NIST to revise the AI Risk Management Framework and develop a 2025 National AI Research and Development Strategic Plan. The period for comment on this new plan has closed.
- Congress watchlist: The 119th Congress has proposed several bills that impact AI, including the following:
- The CREATE AI Act would increase access to AI research and development tools.
- The No Adversarial AI Act would bar federal use of AI from adversary countries.
- The TEST AI Act would set up NIST AI testbeds.
- The NO FAKES Act would create a federal right against unauthorized AI replicas of one’s voice or likeness.
- OMB timelines: The OMB memos require CFO Act agencies to publish an AI strategy and file public compliance plans within 180 days of 3 April 2025; agencies must then continue to update these plans every two years until 2036. The agencies must also update internal data privacy policies and issue AI use policies within 270 days, and they must maintain public AI use case inventories, updated annually.
The U.S. federal government’s market-driven approach is intended to encourage rapid innovation and competitiveness in the global AI market. While other jurisdictions forge ahead with comprehensive rules and requirements, like the EU AI Act, the U.S. has elected to leave the issues of systemic risk management to voluntary self-regulation. The practical impact of these different approaches will become clearer as industry practices evolve and as policymakers assess whether existing frameworks adequately address emerging challenges.
Articles in this series, co-sponsored by HCLTech, dive into the laws, policies, broader contextual history and developments relevant to AI governance across different jurisdictions. The selected jurisdictions are a small but important snapshot of distinct approaches to AI governance regulation in key global markets.
Each article provides a breakdown of the key sources and instruments that govern the strategic, technological and compliance landscape for AI governance in the jurisdiction through voluntary frameworks, sectoral initiatives or comprehensive legislative approaches.
Global AI Governance Law and Policy
Jurisdiction Overviews 2025
The overview page for this series can be accessed here.
- Australia
- Canada
- China
- European Union
- India
- Japan
- Singapore
- South Korea
- United Arab Emirates
- United Kingdom
- United States