Global AI Governance Law and Policy: US

This article, part of a series co-sponsored by HCLTech, analyzes the laws, policies, and broader contextual history and developments relevant to AI governance in the United States. The full series can be accessed here.


Published: September 2025



The U.S. lacks an omnibus federal law that specifically targets artificial intelligence governance. When addressing emerging risks around privacy, civil rights and antitrust, a market-driven approach of self-regulation has traditionally been preferred over government intervention, reflecting an effort to foster competitive innovation.

As such, federal involvement in AI policy has mainly come from agency guidance interpreting existing statutes in the context of AI use. Additionally, executive orders issued by the last several presidential administrations have directed federal government policy and practice on AI governance, catalyzing a series of agency regulations focused on government use of AI.

The U.S. established the Center for AI Standards and Innovation, housed within the National Institute of Standards and Technology and supported by a consortium of more than 280 AI stakeholders.

Numerous states have proposed and, in some cases, enacted AI laws. Colorado was the first to enact comprehensive state-level AI regulation, focusing on algorithmic discrimination. California has enacted a series of laws addressing several of the key concerns that have arisen since the advent of AI. Federal agencies, including the Federal Trade Commission, have made clear that their existing legal authorities extend to the use of new technologies, including AI.


History and context

The formal inception of AI as a field of academic research can be traced to Dartmouth College in Hanover, New Hampshire. In 1955, a group of scientists and mathematicians proposed a summer workshop, held there the following year, to test the idea that "every aspect of learning or any other feature of intelligence can be so precisely described that a machine can be made to simulate it."

Several broad strategic drivers guide the U.S.'s approach to federally regulating AI. At a national policy level, Congress and, to some extent, the current administration’s agencies have deliberately taken a light-touch and business-friendly approach.

This is founded on three key motivations. The first is a desire to see U.S. companies retain and expand their global AI leadership, particularly in competition with China. The second is the belief that governmental involvement stifles innovation, development and deployment. The third is a philosophical conviction that market-driven solutions are better suited than government intervention to identifying and addressing market concerns.

The AI Action Plan released in July 2025 seeks to advance these inclinations. It calls for accelerating AI innovation in the U.S. by dismantling regulatory obstacles; building American AI infrastructure through leaner permitting and funding incentives for construction and skills training; and leading in international AI diplomacy and security by promoting AI exports to allies as a default and prioritizing military and cybersecurity AI innovation for rapid government adoption.

Tortoise Media's September 2024 Global AI Index ranked the U.S. first in the world for its AI talent, infrastructure, research and development, and commercial investment. The U.S. took second place in two metrics: it lags slightly behind Italy in the operating environment category, which measures AI-related public opinion, labor mobility and treatment in legislative proceedings, and only Saudi Arabia has publicly announced more government spending on AI. Since the report's publication, however, attitudes in the public and private sectors have changed significantly. Lawmakers are working to develop strategies around emerging AI technologies in ways that keep the U.S. at the forefront of AI development and deployment.


Approach to regulation

The U.S. federal approach to regulating AI has primarily come from actions taken by the executive and legislative branches, supplemented by increasingly active state-level initiatives. The executive branch has focused on two primary strategies: the promulgation of guidelines and standards through federal agencies, and industry self-regulation, including mechanisms such as regulatory sandboxes, to foster flexible and innovative development.

For the most part, Congress has relied on existing legislation to adapt to the new challenges AI poses. This includes integrating AI concepts and applications into existing bodies of law, such as civil rights, consumer protection and antitrust, and bridging gaps as they arise, rather than enacting an entirely new regulatory framework. However, states enacting their own AI legislation have created a statutory patchwork of varying cross-jurisdictional rules and regulations for the private sector to navigate.



Congress

While the U.S. lacks a comprehensive law designed to regulate AI, Congress has been active on the AI front. It has introduced targeted AI-related bills, including the NO FAKES Act of 2024, and passed a raft of legislation including the AI Training Act, the National AI Initiative Act of 2020, the AI in Government Act of 2020, and the TAKE IT DOWN Act of 2025. While several of these measures were enacted as lesser components of larger appropriations bills, their presence remains noteworthy. Their scope mirrored executive branch actions designed to facilitate AI adoption within the federal government and achieve coordination among federal agencies in its application.

The NO FAKES Act, first introduced in 2024 and recently re-introduced on 9 April 2025 as S.1367, seeks to protect the voice and visual likeness of individuals from unauthorized digitally generated recreations, such as through the use of generative AI. The law, which would preempt state legislation in the same area, would require internet gatekeepers to remove unauthorized recreations or replicas of audiovisual works, images or sound recordings.

The AI Training Act requires the director of the Office of Management and Budget to create an AI training program for employees of executive agencies. The National AI Initiative Act of 2020, included within a larger budget law, creates the National AI Initiative Office, which oversees and implements the U.S. national AI strategy. The AI in Government Act of 2020, also part of a budget law, creates the AI Center of Excellence, which facilitates AI adoption in the federal government. The TAKE IT DOWN Act of 2025 prohibits the online publication of nonconsensual intimate visual depictions, including computer-generated images, and requires online platforms to remove them within 48 hours of notification.

The federal approach contrasts sharply with state-level initiatives, as demonstrated by Congress's consideration of a preemption provision in the One Big Beautiful Bill Act. Originally, the act included a 10-year moratorium on enforcement of all state-level AI legislation, further indicating the federal preference for self-regulation, but the Senate removed the provision by a vote of 99-1. The moratorium would have targeted laws that impose AI-specific duties on developers and deployers, including model registration, risk assessments, watermarking and disclosure rules, audits, and private rights of action.

Through the Senate's AI Insight Forum and bipartisan framework on AI legislation and the House of Representatives' bipartisan Task Force on AI, members of Congress have continued to explore how the legislature should address the promises and challenges of AI. The proposals have ranged from establishing a licensing regime administered by an independent oversight body to holding AI companies liable for privacy and civil rights harms via enforcement and private rights of action. They additionally call for mandatory disclosures by AI developers regarding the training data, limitations, accuracy and safety of their models.


State-level regulation

States have taken action to propose and implement comprehensive legislation to fill the gaps where the federal government has declined to act. The consequence has been a mosaic of differing and overlapping rules and regulations that vary in scope, stringency and limitations across jurisdictions.

Colorado's Artificial Intelligence Act, enacted in May 2024, represents the most comprehensive state-level AI regulation to date. Initially slated to take effect 1 Feb. 2026, its effective date has been pushed back to 30 June 2026, pending governor approval. The law requires developers and deployers of high-risk AI systems to implement risk management practices and conduct impact assessments to prevent algorithmic discrimination in consequential decisions affecting housing, employment, education, health care and other critical areas.

Other states, like California and New York, have taken a sectoral rather than comprehensive approach to AI regulation, targeting specific industries instead of adopting an umbrella regulatory scheme. In 2024, California Governor Gavin Newsom signed several legislative packages around AI, defining “artificial intelligence” (California Assembly Bill 2885) and addressing many of the risks arising from its use. For example, California lawmakers sought to ensure transparency through measures such as watermarking (SB 942) and an obligation for developers to publish documentation on the training data of AI systems made publicly available on the internet.

The distribution of certain AI creations was criminalized, such as nonconsensual intimate deepfake images (SB-926 and SB-981) and child sexual abuse materials (AB-1831 and SB-1381). California also took steps to protect the acting profession and political transparency, obligating the entertainment industry to obtain consent from actors or their estates to replicate their likenesses (AB-2602 and AB-1836). Bills also passed requiring the disclosure of AI-generated content in political advertisements during election periods (AB-2355 and AB-2839). Also enacted was a series of consumer protection laws requiring the disclosure of AI-generated voices used for robocalls (AB-2905) and of AI use in health care communications (AB-3030).

Continuing the sectoral approach, in January 2025, New York state enacted legislation amending its existing General Business Law to impose safety regulations on AI companions, systems that simulate ongoing human-like interactions. In January 2024, New York legislators passed legislation requiring state agencies to assess and oversee their own use of AI systems that operate without human oversight. At press time, New York was considering more expansive legislation, such as the RAISE Act, which would regulate “frontier AI models,” establishing safeguards, reporting and disclosure obligations, and other requirements for large developers of such models.

Illinois has also targeted worker protection, enacting House Bill 3773 in August 2024, which amends the Illinois Human Rights Act to regulate AI use in employment decisions. Effective 1 Jan. 2026, the law requires employers to provide notice when using AI for hiring, promotion or termination decisions, and prohibits AI systems that discriminate based on protected characteristics.

The scope of state action is becoming extensive. In 2024, 700 AI legislative proposals were introduced: 45 states, Puerto Rico, Washington, D.C., and the U.S. Virgin Islands introduced AI bills, and 31 states, Puerto Rico and the U.S. Virgin Islands enacted legislation or adopted resolutions. Such proactive legislation is not limited to the state level; local municipalities have weighed in as well. For instance, New York City's Local Law 144, which took effect in 2023, requires bias audits for AI tools used in employment decisions.

Self-regulation

In line with the U.S.'s long history of favoring a self-regulatory approach to industry, informal commitments have been a key policy tool in its regulatory approach to AI. In July 2023, Amazon, Google, Meta, Microsoft and several other AI companies convened at the White House and pledged their voluntary commitment to principles around the safety, security and trust of AI. These principles include ensuring products are safe before introducing them onto the market and prioritizing investments in cybersecurity and security-risk safeguards.



Agentic AI

The autonomous nature of agentic AI, used as automated tools for project and operations management, creates unique regulatory challenges, particularly around accountability and liability. Traditional regulatory frameworks struggle to address potentially harmful agentic AI decisions and actions because these models can coordinate and manage multiple tasks across varying functions at once, often more efficiently than humans can.

This raises questions about human oversight requirements and responsibility chains. At both the federal and state levels, the U.S. does not currently have legislation specifically targeting agentic AI as a technology. Sector-specific legislation will likely apply to AI agents, especially as they are used in highly regulated industries, such as finance, insurance, medicine or employment; in practice, state laws that apply to AI in these areas have operated the same way. U.S. agencies working on AI standards and regulation will likely take agentic AI into account, such as when NIST revises the AI Risk Management Framework.


Wider regulatory environment

This section covers regulatory actions and discussions from before the implementation of the AI Action Plan, which promises to pivot towards a more limited, market-driven approach to AI oversight. The material here remains relevant as context and record, but it reflects a different regulatory climate than the one shaping policy today.



International strategy

In February 2025, the G7 countries launched a voluntary AI reporting framework "to encourage transparency and accountability among organizations developing advanced AI systems." The framework grew out of the Hiroshima AI Process, a G7 collaboration intended to provide low-friction tools that can scale without binding regulation. The reporting framework invites developers of advanced systems to publish standardized reports tied to the HAIP code of conduct.

In his remarks at the Paris AI Action Summit on 11 February 2025, Vice President JD Vance urged countries to avoid "excessive regulation" and emphasized U.S. ambitions for AI growth; the U.S. and U.K. subsequently declined to sign the summit declaration focused on "inclusive and sustainable artificial intelligence."

In parallel, NIST’s Center for AI Standards and Innovation is coordinating technical work through a 280-plus member consortium on testing and standards and has cooperation agreements with leading model developers to support safety research. In January 2025, NIST and its Center for AI Standards and Innovation hosted a workshop for AI experts to "provide a comprehensive taxonomy" of agentic AI tools. NIST published "lessons learned" from the workshop in August, identifying two potential taxonomies of AI tools: one based on "what they enable the model to do," and the other focusing on what constraints limit the tool’s capabilities.
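
To make those two axes concrete, below is a minimal, hypothetical sketch in Python of how a single agentic tool might be described under both taxonomies. The class name, fields and example values are illustrative assumptions for this article, not a NIST schema.

from dataclasses import dataclass, field

@dataclass
class AgentTool:
    # Hypothetical record describing an agentic AI tool along the two axes
    # identified in the workshop writeup; illustrative only, not a NIST schema.
    name: str
    # Axis 1: what the tool enables the model to do.
    capabilities: list[str] = field(default_factory=list)
    # Axis 2: what constraints limit the tool's capabilities.
    constraints: list[str] = field(default_factory=list)

# Example entry with made-up values for a web-browsing tool.
browser = AgentTool(
    name="web_browser",
    capabilities=["retrieve live web content", "submit search queries"],
    constraints=["allowlisted domains only", "read-only access"],
)
print(browser)

Describing the same tool along both axes reflects the workshop's observation that what a tool permits and what bounds it are complementary ways of classifying agentic capability.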

In May 2025, the Department of Commerce rescinded the Biden-era AI Diffusion Rule, which limited exports of AI model weights and advanced chips based on a tiered country classification system. The rule required licenses for exporting to most countries, with potential exceptions for allied countries and a presumption of license denial for countries like China and Russia. The DOC stated that it would issue a less sweeping replacement rule in the future.


Latest developments

In the U.S., a few law and policy developments related to AI are in the acceleration phase. Here's a limited preview of what to expect in the near future.

  • New AI Risk Management Framework: The AI Action Plan instructs NIST to revise the AI Risk Management Framework and develop a 2025 National AI Research and Development Strategic Plan. The period for comment on this new plan has closed.
  • Congress watchlist: The 119th Congress has proposed several bills that impact AI, including the following:
    • The CREATE AI Act would increase access to AI research and development tools.
    • The No Adversarial AI Act would bar federal use of AI from adversary countries.
    • The TEST AI Act would set up NIST AI testbeds.
    • The NO FAKES Act would create a federal right against unauthorized AI replicas of one’s voice or likeness.
  • OMB timelines: The OMB memos require CFO Act agencies to publish an AI strategy and file public compliance plans within 180 days of 3 April 2025, then update those plans every two years until 2036. The agencies must also update internal data privacy policies and issue AI use policies within 270 days, and they must update their public AI use case inventories annually. A sketch of how these day-count deadlines fall on the calendar appears after this list.
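
As a rough illustration of where the day-count deadlines above land on the calendar, the following Python sketch computes them, assuming the 3 April 2025 baseline stated in the memos and simple calendar-day counting. The helper name is ours, and the output is not an official compliance calendar.

from datetime import date, timedelta

# Baseline date for the OMB memo deadlines, per the article.
BASELINE = date(2025, 4, 3)

def deadline(days_out: int) -> date:
    # Hypothetical helper: add a calendar-day count to the baseline.
    return BASELINE + timedelta(days=days_out)

print(deadline(180))  # 2025-09-30: AI strategies and public compliance plans
print(deadline(270))  # 2025-12-29: updated privacy policies and AI use policies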

U.S. future outlook

The U.S. federal government's market-driven approach is intended to encourage rapid innovation and competitiveness in the global AI market. While other jurisdictions forge ahead with comprehensive rules and requirements, like the EU AI Act, the U.S. has elected to leave systemic risk management to voluntary self-regulation. The practical impact of these different approaches will become clearer as industry practices evolve and as policymakers assess whether existing frameworks adequately address emerging challenges.


Full series overview

Articles in this series, co-sponsored by HCLTech, dive into the laws, policies, broader contextual history and developments relevant to AI governance across different jurisdictions. The selected jurisdictions are a small but important snapshot of distinct approaches to AI governance regulation in key global markets.

Each article provides a breakdown of the key sources and instruments that govern the strategic, technological and compliance landscape for AI governance in the jurisdiction through voluntary frameworks, sectoral initiatives or comprehensive legislative approaches.

Global AI Governance Law and Policy

Jurisdiction Overviews 2025

The overview page for this series can be accessed here.




