
Global AI Governance Law and Policy: US

This article is part of a five-part series co-sponsored by OneTrust. The full series can be accessed here.


Published: April 2024



The U.S. lacks an omnibus federal law specifically targeted at the governance of AI. However, several executive orders directing federal government policy and practice on AI governance have been issued, catalyzing a series of agency regulations primarily related to government use of AI. Like the U.K., the U.S. established an AI Safety Institute, housed within the National Institute of Standards and Technology and aided by a consortium of over 200 AI stakeholders who support its mission. Numerous states have also proposed and, in some cases, enacted AI laws. Moreover, federal agencies, including the Federal Trade Commission, have made clear that their existing legal authorities apply to the use of new technologies, including AI.


History and context

The formal inception of AI as a field of academic research can be traced to Dartmouth College in Hanover, New Hampshire. In 1956, a group of scientists and mathematicians gathered for a summer workshop to test the idea that "every aspect of learning or any other feature of intelligence can be so precisely described that a machine can be made to simulate it."

Several broad strategic drivers guide the U.S.'s approach to regulating AI at a national level. These include ensuring openness and competitiveness in the AI-driven economy, improving safety while mitigating risks and the proliferation of harm, and maintaining a competitive technological edge over China.

Tortoise Media's June 2023 Global AI Index ranked the U.S. first in the world for its implementation, innovation and investment in AI. Yet its technology-related laws and policies lag: Tortoise ranked the U.S. eighth for its government strategy on AI. U.S. lawmakers are now working to craft legislative and regulatory regimes around emerging AI technologies in ways that maximize economic benefits while managing and mitigating the risks of harm.


Approach to regulation

The U.S.'s approach to regulating AI has consisted of two primary thrusts: the promulgation of guidelines and standards through federal agencies and industry self-regulation.

National AI Research and Development Strategic Plan

A key federal policy document is the National AI Research and Development Strategic Plan, which directs federal investments in AI-related research and development. First developed in 2016 and most recently updated in May 2023, this report by the National Science and Technology Council outlines a set of strategies to direct federal funding over the short and long term. Its goals and priorities include promoting responsible, safe and secure AI systems; fostering a better understanding of AI workforce needs; expanding public-private partnerships; and promoting international collaboration in AI.

Blueprint for an AI Bill of Rights

Issued in October 2022, the Blueprint for an AI Bill of Rights marked the Biden-Harris administration's first foray into setting the direction of national AI policy. Rather than a law or regulation imposing specific legal obligations, the blueprint is a "national values statement and toolkit." Namely, it articulates five principles to guide the design and deployment of automated systems: safe and effective systems; algorithmic discrimination protections; data privacy; notice and explanation; and human alternatives, consideration and fallback. While it did not establish any formal rules for AI systems, the blueprint served as a basis for further discussion of U.S. AI policy at the federal level.

Executive Order 14110

In October 2023, the Biden-Harris administration released the Executive Order on the Safe, Secure, and Trustworthy Development and Use of AI, Executive Order 14110, which required over 150 actions to be taken by dozens of federal agencies. Building upon the five principles of the blueprint, Executive Order 14110 added promoting innovation and competition, supporting workers, advancing federal government use of AI, and strengthening American leadership abroad. In terms of its operational impact, the order directly applied to most federal agencies and to those within the AI value chain that do business with the federal government.

Other federal agency guidelines

As mentioned above, Executive Order 14110 set numerous implementation milestones for federal agencies to achieve. A sample of how agencies have implemented those and related initiatives is listed below:

  • The NIST AI Safety Institute: Created as a companion resource to the AI Risk Management Framework in the wake of Executive Order 14110, the AI Safety Institute focuses on generative AI, authenticating and watermarking AI-generated content, and creating guidance and benchmarks for evaluating AI capabilities.
  • The Department of State's Enterprise AI Strategy established department-wide guidance for "the responsible and ethical design, development, acquisition and appropriate application of AI." The strategy lays out a series of measurable goals around how the department will leverage and integrate AI into its mission.
  • Based on the Department of Homeland Security's AI Roadmap, its AI Safety and Security Board was established to issue recommendations and best practices for critical infrastructure owners and operators to improve the security, resilience and incident response of AI systems.

Congress

While the U.S. lacks a comprehensive law designed to regulate AI, Congress has not been inactive on the AI front. Several bills, including the AI Training Act, the National AI Initiative Act of 2020 and the AI in Government Act of 2020, have been enacted. While these pieces of federal AI legislation have often been lesser components of larger appropriations bills, their scopes have mirrored executive branch actions and aimed to facilitate AI adoption within the federal government and achieve coordination among federal agencies with respect to their use of AI.

Through the Senate's AI Insight Forum and bipartisan framework on AI legislation, and the House of Representatives' bipartisan Task Force on AI, members of Congress have continued to explore how the legislature should address the promises and challenges of AI. Proposals have ranged from establishing a licensing regime administered by an independent oversight body, to holding AI companies liable for privacy and civil rights harms through enforcement and private rights of action, to requiring AI developers to disclose information about the training data, limitations, accuracy and safety of their models.


Self-regulation

In line with the U.S.'s long history of favoring a self-regulatory approach to industry, informal commitments have been a key policy tool in its regulatory approach to AI. In July 2023, for example, Amazon, Google, Meta, Microsoft and several other AI companies convened at the White House and pledged their voluntary commitment to principles around the safety, security and trust of AI. These principles include ensuring products are safe before introducing them onto the market and prioritizing investments in cybersecurity and security-risk safeguards.

NIST's AI RMF

Perhaps the strongest example of the U.S.'s approach to AI regulation within the paradigm of industry self-regulation is the AI Risk Management Framework, released in January 2023 by NIST, an agency within the Department of Commerce. The AI RMF aims to serve as "a resource to the organizations designing, developing, deploying or using AI systems to help manage the many risks of AI." To facilitate implementation of the AI RMF, NIST subsequently launched the Trustworthy and Responsible AI Resource Center, which provides operational resources, including a knowledge base, use cases, events and training.

NTIA's AI Accountability Policy

The National Telecommunications and Information Administration's Artificial Intelligence Accountability Policy also falls into the self-regulation category. The report provides guidance and recommendations for AI developers and deployers to establish, enhance and use accountability inputs to provide assurance to external stakeholders.


Wider regulatory environment

Given that AI use cases span the gamut of activity across federal agencies, oversight and collaboration have been coordinated through the National AI Initiative Office, established by the National AI Initiative Act of 2020. The National AI Advisory Committee is tasked with advising the National AI Initiative Office and the president on AI-related topics.


International cooperation on AI

The U.S. has been involved in numerous bilateral and multilateral efforts to advance international cooperation around AI policy, including with the EU and China. The EU-U.S. Trade and Technology Council's Joint Roadmap for Trustworthy AI and Risk Management aims to bridge the gap between EU and U.S. risk-based approaches to AI systems. With regard to cooperation with China, a November 2023 meeting between President Joe Biden and General Secretary Xi Jinping led the two governments to announce the creation of a new bilateral channel for talks on AI.


Latest developments

In the U.S., law and policy developments related to AI are in the acceleration phase. Here's a limited preview of the most recent developments and what to expect over the next six to 18 months.

In early 2024

  • The White House's Office of Management and Budget released its policy on Advancing Governance, Innovation, and Risk Management for Agency Use of AI in March 2024. The policy directs federal agencies "to advance AI governance and innovation while managing risks from the use of AI in the Federal Government, particularly those affecting the rights and safety of the public."
  • Also in March 2024, the U.S. Department of the Treasury released a report on Managing AI-Specific Cybersecurity Risks in the Financial Services Sector. Written under the auspices of Executive Order 14110, the report identified "significant opportunities and challenges that AI presents to the security and resiliency of the financial services sector." It also provided next steps for addressing AI-related operational risks, such as closing the capability gap, enhancing regulatory coordination and expanding the NIST AI RMF to include risk management related to the financial services sector.

In late 2024

  • The U.S. Copyright Office plans to issue a report based on over 10,000 comments it received in response to its August 2023 notice of inquiry.

In 2025

  • President Biden's fiscal year 2025 budget request allocates increased funding to support further implementation activities in response to Executive Order 14110. These include increased staffing for, or the establishment of, AI offices within the Departments of Labor, Transportation and Homeland Security, as well as additional investments in the NIST AI Safety Institute and the National AI Research Resource within the National Science Foundation.

Additional resources


Global AI Governance Law and Policy: Jurisdiction Overviews

The overview page for this series can be accessed here. The full series is additionally available here in PDF format.


