
US federal AI governance: Laws, policies and strategies

This article provides a breakdown of artificial intelligence governance at the federal level, including the White House, Congress and federal agencies.


Last updated: November 2023



Halfway into 2023, generative artificial intelligence tools such as OpenAI's ChatGPT have achieved growing and sustained popularity. In May 2023 alone, chat.openai.com received roughly 1.8 billion visits, with an average visit duration of eight and a half minutes.

Yet, as AI is adopted around the world, it raises as many questions as it provides answers. Chief among these questions is: How should AI be governed?


AI governance around the world

With AI making inroads into every sphere of life, lawmakers and regulators are working to regulate the technology in ways that appreciate its full range of potential effects — both the benefits and the harms. Unsurprisingly, countries have taken differing approaches to AI, each reflective of their respective legal systems, cultures and traditions.

On 11 May, European Parliament committees voted in favor of adopting the Artificial Intelligence Act, which, in its current form, bans or limits specific high-risk applications of AI. The text is now set for plenary adoption in June, which will trigger trilogue negotiations among the Parliament, the Council of the European Union and the European Commission.

In the U.K., Secretary of State for Science, Innovation and Technology Michelle Donelan recently released a white paper aiming to establish the U.K. as an "AI superpower." The strategy provides a framework for identifying and addressing risks presented by AI while taking a "proportionate" and "pro-innovation" approach.

In Canada, the proposed Artificial Intelligence and Data Act is part of a broader update to the country's information privacy laws, and is one of three pieces of legislation that make up Bill C-27, which passed its second reading in the House of Commons in April.

Singapore's National AI Strategy, meanwhile, includes the Model AI Governance Framework, launched in 2019, along with its companion Implementation and Self-Assessment Guide for Organizations and a Compendium of Use Cases, which highlights practical examples of organizational-level AI governance.

And, on 11 April, the Cyberspace Administration of China released its draft Administrative Measures for Generative Artificial Intelligence Services, which aim to ensure content created by generative AI is consistent with "social order and societal morals," avoids discrimination, is accurate and respects intellectual property.


AI governance policy at the White House

Within the context of these global developments in AI law and policymaking, a federal AI governance policy has also taken shape in the U.S. The White House, Congress, and a range of federal agencies, including the Federal Trade Commission, the Consumer Financial Protection Bureau and the National Institute of Standards and Technology, have put forth a series of AI-related initiatives, laws and policies. While numerous city and state AI laws have also come into effect in recent years, federal laws and policies are especially important to understanding the country's national AI strategy. Indeed, the foundation of the federal government's AI strategy has already been established and provides insight into how the legal and policy questions raised by this new technology will be approached in the months and years ahead.


AI governance policy in Congress

Congress, the deliberative branch of government, has approached AI law and policymaking in its characteristically incremental fashion. Until 2019, most lawmakers' attention to AI focused on autonomous or self-driving vehicles and on AI applications within the national security arena.

For example, during the 115th Congress (2017-2019), Section 238 of the John S. McCain National Defense Authorization Act for Fiscal Year 2019 directed the Department of Defense to undertake various AI-related activities, including the appointment of a coordinator to oversee them. The Act also codified, at 10 U.S.C. § 2358, a five-part definition of AI:

  • "Any artificial system that performs tasks under varying and unpredictable circumstances without significant human oversight, or that can learn from experience and improve performance when exposed to data sets.
  • An artificial system developed in computer software, physical hardware, or another context that solves tasks requiring human-like perception, cognition, planning, learning, communication, or physical action.
  • An artificial system designed to think or act like a human, including cognitive architectures and neural networks.
  • A set of techniques, including machine learning, that is designed to approximate a cognitive task.
  • An artificial system designed to act rationally, including an intelligent software agent or embodied robot that achieves goals using perception, planning, reasoning, learning, communicating, decision-making, and acting."

Another key AI-related legislative development occurred when the National AI Initiative Act of 2020 became law on 1 Jan. 2021. Included as part of the William M. (Mac) Thornberry National Defense Authorization Act for Fiscal Year 2021, this legislation focused on expanding AI research and development and further coordinating AI R&D activities between the defense and intelligence communities and civilian federal agencies. The Act also mandated the creation of the National Artificial Intelligence Initiative Office, which sits within the White House Office of Science and Technology Policy and is tasked with "overseeing and implementing the U.S. national AI strategy."

Congress has also amended existing laws and policies to account for the increasing use of AI in various arenas. For example, in passing the FAA Reauthorization Act of 2018, Congress added language advising the Federal Aviation Administration to periodically review the state of AI in aviation and to take necessary steps to address new developments. The Advancing American AI Act and the AI Training Act were among other AI-related pieces of legislation introduced or passed by the 117th Congress.



AI governance policy within federal agencies

Virtually every federal agency has played an active role in advancing the AI governance strategy within the federal government and, to a lesser extent, around commercial activities. One of the first to do so was NIST, which published "U.S. Leadership in AI: A Plan for Federal Engagement in Developing Technical Standards and Related Tools" in August 2019 in response to Executive Order 13859. The report identified areas of focus for AI standards and laid out a series of recommendations for advancing national AI standards development in the U.S. NIST's AI Risk Management Framework, released in January 2023, also serves as an important pillar of federal AI governance and is an oft-cited model for private sector activities.
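
To make the framework's structure concrete, the sketch below shows one way an organization might record risks against the AI RMF's four core functions: govern, map, measure and manage. The schema, field names and example entry are hypothetical illustrations of this kind of bookkeeping, not NIST artifacts.

```python
# A minimal, hypothetical sketch of an AI risk register keyed to the
# AI RMF's four core functions (Govern, Map, Measure, Manage). The
# fields and example entry are illustrations, not NIST artifacts.
from dataclasses import dataclass, field
from enum import Enum

class RMFFunction(Enum):
    GOVERN = "Govern"
    MAP = "Map"
    MEASURE = "Measure"
    MANAGE = "Manage"

@dataclass
class RiskEntry:
    system: str            # the AI system under review
    description: str       # the risk being tracked
    function: RMFFunction  # RMF function the activity falls under
    owner: str             # accountable team or role
    mitigations: list[str] = field(default_factory=list)

register = [
    RiskEntry(
        system="resume-screening-model",
        description="Potential disparate impact on protected groups",
        function=RMFFunction.MEASURE,
        owner="AI governance committee",
        mitigations=["Quarterly adverse-impact testing",
                     "Human review of automated rejections"],
    ),
]

for entry in register:
    print(f"[{entry.function.value}] {entry.system}: {entry.description}")
```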

By mid-2020, the FTC entered the picture to provide the contours of its approach to AI governance, regulation and enforcement. Its guidance has emphasized the agency's focus on companies' use of generative AI tools. Questions about whether firms are using generative AI in a way that, "deliberately or not, steers people unfairly or deceptively into harmful decisions in areas such as finances, health, education, housing, and employment," fall within the FTC's jurisdiction.

In late April, the FTC, along with the CFPB, the Justice Department's Civil Rights Division and the Equal Employment Opportunity Commission, issued a joint statement clarifying that their enforcement authorities apply to automated systems, which they define as "software and algorithmic processes, including AI, that are used to automate workflows and help people complete tasks or make decisions." In line with these commitments, the EEOC also released a bulletin on its interpretation of existing antidiscrimination rules in employment, specifically Title VII of the Civil Rights Act of 1964, as they apply to the use of AI-powered systems.
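
EEOC guidance in this area references the long-standing "four-fifths" rule of thumb from the Uniform Guidelines on Employee Selection Procedures: a selection rate for one group that falls below four-fifths of the highest group's rate can signal adverse impact worth investigating. The sketch below illustrates the arithmetic with hypothetical applicant counts; it is a screening heuristic, not a definitive legal test.

```python
# A minimal sketch of the "four-fifths rule" of thumb for adverse
# impact referenced in EEOC guidance. The applicant and selection
# counts are hypothetical.
def selection_rate(selected: int, applicants: int) -> float:
    return selected / applicants

# Hypothetical outcomes from an AI-powered screening tool.
rates = {
    "group_a": selection_rate(48, 80),  # 60% selected
    "group_b": selection_rate(12, 40),  # 30% selected
}

highest = max(rates.values())
for group, rate in rates.items():
    ratio = rate / highest
    flag = "below 4/5 of top rate: review" if ratio < 0.8 else "ok"
    print(f"{group}: rate={rate:.0%}, impact ratio={ratio:.2f} ({flag})")
```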

Meanwhile, the National Telecommunications and Information Administration has issued an "AI Accountability Policy Request for Comment," seeking public feedback on policies to "support the development of AI audits, assessments, certifications and other mechanisms to create earned trust in AI systems," with written responses due 12 June. The NTIA will likely use the information it receives to advise the White House on AI governance policy issues.

Numerous other U.S. agencies have led their own AI initiatives and created AI-focused offices within their departments. For example, the Department of Energy's Artificial Intelligence and Technology Office developed an AI Risk Management Playbook in consultation with NIST and established an AI Advancement Council in April 2022. Within the Department of Commerce, the U.S. Patent and Trademark Office created an AI/emerging technologies partnership to examine and better understand the use of these technologies in patent and trademark examination and their effect on intellectual property.

More recently, the U.S. Department of Education Office of Educational Technology released a report on the risks and opportunities AI presents within educational settings.


AI governance policy and existing laws

A key point emphasized by U.S. regulators across multiple federal agencies is that current laws do apply to AI technology. Indeed, at least in the short term, AI regulation in the U.S. will consist more of figuring out how existing laws apply to AI technologies, rather than passing and applying new, AI-specific laws. In their joint statement, the FTC, EEOC, CFPB and Department of Justice noted how "existing legal authorities apply to the use of automated systems and innovative new technologies just as they apply to other practices." Expressing concern about "potentially harmful uses of automated systems," the agencies emphasized that they would work "to ensure that these rapidly evolving automated systems are developed and used in a manner consistent with federal laws."

On numerous occasions, the FTC has stated that the prohibition of unfair or deceptive practices in Section 5 of the FTC Act applies to the use of AI and machine learning systems. In its business guidance on using AI and algorithms, the FTC explained that the Fair Credit Reporting Act of 1970 and the Equal Credit Opportunity Act of 1974 "both address automated decision-making, and financial services companies have been applying these laws to machine-based credit underwriting models for decades."

Separately, the CFPB issued a circular clarifying that the adverse action notice requirement of the Equal Credit Opportunity Act and its implementing Regulation B, which requires creditors to explain the specific reasons an adverse credit decision was taken against an individual, still applies even when the credit decision is based on a so-called uninterpretable or "black-box" model. Such complex models may make it difficult, or even impossible, to accurately identify the specific reason for a denial of credit. Yet, as the CFPB further noted, creditors cannot simply rely on post hoc explanation methods; they must be able to "validate the accuracy" of any approximate explanations they provide. Thus, the guidance interprets the ECOA and Regulation B as not permitting creditors to use complex algorithms to make credit decisions "when doing so means they cannot provide the specific and accurate reasons for adverse actions." In his keynote address at the IAPP Global Privacy Summit 2023, FTC Commissioner Alvaro Bedoya echoed this point, explaining the FTC "has historically not responded well to the idea that a company is not responsible for their product because that product is a black box that was unintelligible or difficult to test."
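
Where a creditor uses an interpretable model, the "specific reasons" can often be read directly off the model rather than approximated post hoc. The sketch below illustrates the idea with scikit-learn; the feature names, reason codes and data are hypothetical, and nothing here is compliance guidance.

```python
# A minimal sketch, not compliance guidance: with an interpretable
# model such as logistic regression, the features pushing a decision
# toward denial can be read directly off the coefficients, supporting
# the "specific and accurate reasons" an adverse action notice needs.
# Feature names, reason codes and data are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

FEATURES = ["debt_to_income", "delinquencies_24mo", "credit_history_years"]
REASONS = {
    "debt_to_income": "Debt-to-income ratio too high",
    "delinquencies_24mo": "Recent delinquency history",
    "credit_history_years": "Insufficient length of credit history",
}

# Toy training data; class 1 represents a denial.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = (X @ np.array([1.5, 2.0, -1.0]) + rng.normal(size=200) > 0).astype(int)
model = LogisticRegression().fit(X, y)

def principal_reasons(x: np.ndarray, top_n: int = 2) -> list[str]:
    # Each feature's contribution to the denial-side logit.
    contributions = model.coef_[0] * x
    top = np.argsort(contributions)[::-1][:top_n]  # most adverse first
    return [REASONS[FEATURES[i]] for i in top]

applicant = np.array([2.1, 1.4, -0.3])
if model.predict(applicant.reshape(1, -1))[0] == 1:  # adverse decision
    print(principal_reasons(applicant))
```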


Conclusion

Around the world, and particularly in the U.S., the most pressing questions around AI governance concern the applicability of existing laws to the new technology. Answering these questions will be a difficult task involving significant legal and technological complexities. Indeed, as the Business Law Section of the American Bar Association explained in its inaugural Chapter on Artificial Intelligence, "Companies, counsel, and the courts will, at times, struggle to grasp technical concepts and apply existing law in a uniform way to resolve business disputes."

Pro-social applications of AI abound, from achieving greater accuracy than human radiologists in breast cancer detection to mitigating climate change. Yet, anti-social applications of AI are no less numerous, from aiding child predators in avoiding detection to facilitating financial scams.

AI can be neither responsible nor irresponsible in and of itself. Rather, it can be used or deployed — by people and organizations — in responsible and irresponsible ways. It is up to lawmakers to determine what those uses are, how to support the responsible ones and how to prohibit the irresponsible ones, while professionals who create and use AI work to implement these governance principles into their daily practices.




