US federal AI governance: Laws, policies and strategies
This article provides a breakdown of artificial intelligence governance at the federal level, including the White House, Congress and federal agencies.
Last updated: November 2023
Halfway into 2023, generative artificial intelligence tools such as OpenAI's ChatGPT have achieved growing and sustained popularity. In May alone, chat.openai.com received about 1.8 billion visits, with an average visit duration of eight and a half minutes.
Yet, as AI is adopted around the world, it raises as many questions as it provides answers. Chief among these questions is: How should AI be governed?
AI governance around the world
With AI making inroads into every sphere of life, lawmakers and regulators are working to regulate the technology in ways that appreciate its full range of potential effects — both the benefits and the harms. Unsurprisingly, countries have taken differing approaches to AI, each reflective of their respective legal systems, cultures and traditions.
On 11 May, European Parliament committees voted in favor of adopting the Artificial Intelligence Act, which, in its current form, bans or limits specific high-risk applications of AI. The draft law is now set for a plenary vote in June, which would trigger trilogue negotiations among Parliament, the European Commission and the Council of the European Union.
In the U.K., Secretary of State for Science, Innovation and Technology Michelle Donelan recently released a white paper aiming to establish the U.K. as an "AI superpower." The strategy provides a framework for identifying and addressing risks presented by AI while taking a "proportionate" and "pro-innovation" approach.
In Canada, the proposed Artificial Intelligence and Data Act is part of a broader update to the country's information privacy laws, and is one of three pieces of legislation that comprise Bill C-27, which passed its second reading in the House of Commons in April.
Singapore's National AI Strategy, meanwhile, is anchored by the Model AI Governance Framework, launched in 2019, along with its companion Implementation and Self-Assessment Guide for Organizations and a Compendium of Use Cases, which highlights practical examples of organizational-level AI governance.
And, on 11 April, the Cyberspace Administration of China released its draft Administrative Measures for Generative Artificial Intelligence Services, which aim to ensure content created by generative AI is consistent with "social order and societal morals," avoids discrimination, is accurate and respects intellectual property.
AI governance policy at the White House
Within the context of these global developments in AI law and policymaking, a federal AI governance policy has also taken shape in the U.S. The White House, Congress, and a range of federal agencies, including the Federal Trade Commission, the Consumer Financial Protection Bureau and the National Institute of Standards and Technology, have put forth a series of AI-related initiatives, laws and policies. While numerous city and state AI laws have also come into effect in recent years, federal laws and policies around AI are of heightened importance in understanding the country's unique national AI strategy. Indeed, the foundation of the federal government's AI strategy has already been established and provides insight into how the legal and policy questions brought about by this new technology will be approached in the months and years ahead.
Obama administration
The earliest outlines of a federal AI strategy were sketched during former President Barack Obama's administration, most directly in "Preparing for the Future of Artificial Intelligence," a public report issued by the National Science and Technology Council in October 2016. It summarizes the state of AI within the federal government and economy at the time, while touching on issues of fairness, safety, governance and global security. Its nonbinding recommendations centered on applying AI to address "broad social problems," releasing government data sets in pursuit of open training data and open data standards, drawing on "appropriate technical expertise … when setting regulatory policy for AI-enabled products" and fostering a federal workforce with diverse perspectives on AI technology. The report built on three previous White House reports, from 2014, 2015 and 2016, on big data and algorithmic systems.
Released in conjunction with the report, the National Artificial Intelligence Research and Development Strategic Plan sought to identify priority areas for federally funded AI research, "with particular attention on areas that industry is unlikely to address." It urged the federal government to "emphasize AI investment in areas of strong societal importance that are not aimed at consumer markets — areas such as AI for public health, urban systems and smart communities, social welfare, criminal justice, environmental sustainability, and national security."
Updates to the National AI R&D Strategic Plan in 2019 and 2023 reaffirmed the seven core strategies laid out in 2016 and added two new ones, focused on expanding public-private partnerships and international collaboration.
Trump administration
Another significant development in federal AI governance policy occurred when former President Donald Trump signed Executive Order 13859, "Maintaining American Leadership in Artificial Intelligence," in February 2019. Executive Order 13859 set in motion the American AI Initiative, which led to the issuance of further guidance and technical standards that would determine the scope of AI law and policymaking in the U.S. over the following years.
Among other things, the order required former Director of the Office of Management and Budget Russell Vought to issue a guidance memorandum, following public consultation, in November 2020. The purpose of the OMB guidance was to help inform federal agencies' development of approaches to AI that "consider ways to reduce barriers to the use of AI technologies in order to promote their innovative application while protecting civil liberties, privacy, American values, and United States economic and national security." The tone of the OMB guidance was described in Lexology as "fairly permissive" for warning agencies to "avoid a precautionary approach that holds AI systems to an impossibly high standard."
In September 2019, the White House also hosted the Summit on Artificial Intelligence in Government, which aimed to generate ideas for the adoption of AI by the federal government. The summit's key takeaways revolved around sharing best practices between government, industry and academia; fostering collaboration through an AI center of excellence model; and training and reskilling the federal workforce in the use of AI.
Biden administration
AI governance policy in the U.S. evolved further during President Joe Biden's administration. Indeed, another milestone in federal AI governance policy came in October 2022 with the release of the Blueprint for an AI Bill of Rights: Making Automated Systems Work for the American People. Published by the White House Office of Science and Technology Policy, the document lays out five principles to "guide the design, use, and deployment of automated systems to protect the American public in the age of" AI. These principles revolve around safety and effectiveness, protection against algorithmic discrimination, data privacy, notice and explanation, and human involvement in decision-making. The white paper provides supplemental sections explaining why each principle is important, what should be expected of automated systems with regard to each and how the principles can be embedded into laws, policies and practices.
Also, in February 2023, President Biden signed the Executive Order on Further Advancing Racial Equity and Support for Underserved Communities Through The Federal Government, which "directs federal agencies to root out bias in their design and use of new technologies, including AI, and to protect the public from algorithmic discrimination."
More recently, in late May 2023, the Biden administration took several additional steps to further delineate its approach to AI governance. The White House OSTP issued a revised National AI R&D Strategic Plan to "coordinate and focus federal R&D investments" in AI. OSTP also issued a Request for Information seeking input on "mitigating AI risks, protecting individuals' rights and safety, and harnessing AI to improve lives," with comments due by 7 July.
On 30 Oct. 2023, President Biden signed Executive Order 14110, also known as the Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. With the objective of "harnessing AI for good and realizing its myriad benefits" while "mitigating its substantial risks," the order revolves around eight policy areas: safety and security; innovation and competition; worker support; AI bias and civil rights; consumer protection; privacy; federal use of AI; and international leadership. It charged over 50 federal agencies with more than 100 specific tasks to execute, usually in less than a year, and created a White House Artificial Intelligence Council composed of the heads of 28 federal departments and agencies to coordinate its implementation. As detailed in a report from the IAPP AI Governance Center, the order "serves as a watershed moment in its articulation of the types of rules under consideration and its setting of guideposts for the development of best practices" regarding the use of AI.
Following the issuance of EO 14110, the Office of Management and Budget released for public comment a draft memorandum on Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence. While taking a risk-based approach to managing AI harms, the draft guidance would direct federal departments and agencies to, among other things, designate a chief AI officer, develop an agency AI strategy and follow certain minimum practices when using rights- and safety-impacting AI.
AI governance policy in Congress
The deliberative branch of government, Congress, has approached AI law and policymaking in its characteristically incremental fashion. Until 2019, most of lawmakers' attention to AI focused on autonomous or self-driving vehicles and on AI applications within the national security arena.
For example, during the 115th Congress (2017-2019), Section 238 of the John S. McCain National Defense Authorization Act for Fiscal Year 2019 directed the Department of Defense to undertake various AI-related activities, including appointing a coordinator to oversee them. The act also codified, at 10 U.S.C. § 2358, the following definition of AI:
- "Any artificial system that performs tasks under varying and unpredictable circumstances without significant human oversight, or that can learn from experience and improve performance when exposed to data sets.
- An artificial system developed in computer software, physical hardware, or another context that solves tasks requiring human-like perception, cognition, planning, learning, communication, or physical action.
- An artificial system designed to think or act like a human, including cognitive architectures and neural networks.
- A set of techniques, including machine learning, that is designed to approximate a cognitive task.
- An artificial system designed to act rationally, including an intelligent software agent or embodied robot that achieves goals using perception, planning, reasoning, learning, communicating, decision-making, and acting."
Another key AI-related legislative development occurred when the National AI Initiative Act of 2020 became law on 1 Jan. 2021. Included as part of the William M. (Mac) Thornberry National Defense Authorization Act for Fiscal Year 2021, the legislation focused on expanding AI research and development and further coordinating AI R&D activities between the defense and intelligence communities and civilian federal agencies. It also created the National Artificial Intelligence Initiative Office, which sits within the White House OSTP and is tasked with "overseeing and implementing the U.S. national AI strategy."
Congress has also amended existing laws and policies to account for the increasing use of AI in various arenas. For example, in passing the FAA Reauthorization Act of 2018, Congress added language advising the Federal Aviation Administration to periodically review the state of AI in aviation and to take necessary steps to address new developments. The Advancing American AI Act and the AI Training Act were among other AI-related pieces of legislation introduced or passed by the 117th Congress.
Recently proposed legislation related to AI
Within the current 118th Congress, other bills have been proposed to amend existing laws and better equip them for the AI era. Proposed in May 2023, HR 3044 would amend the Federal Election Campaign Act of 1971 to provide transparency and accountability around the use of generative AI in political advertisements. Also, in January 2023, House Resolution 66 was introduced, expressing support for Congress to focus more on AI. The stated goal of the resolution was to "ensure that the development and deployment of AI is done in a way that is safe, ethical, and respects the rights and privacy of all Americans, and that the benefits of AI are widely distributed, and the risks are minimized." Other bills target specific uses of AI: the Stop Spying Bosses Act, for example, would prohibit employers from engaging in workplace surveillance using automated decision systems, including ML and AI techniques, to predict the behavior of their workers.
Many recently proposed federal privacy bills are also cognizant of AI. The definition of a "covered algorithm" within the American Data Privacy and Protection Act, for example, includes computational processes that use ML, natural language processing or AI techniques. Among other proposed rules, the most recent version of the ADPPA would require impact assessments of such systems if certain entities use them "in a manner that poses a consequential risk of harm to an individual or group of individuals." Separately, it would require documentation of an "algorithm design evaluation" process to mitigate risks whenever a covered entity develops a covered algorithm "solely or in part, to collect, process, or transfer covered data in furtherance of a consequential decision."
Similarly, the Filter Bubble Transparency Act would apply to platforms that use "algorithmic ranking systems," which include computational processes "derived from" AI. The SAFE DATA Act, in addition, includes both of the above-mentioned definitions. Lastly, the Consumer Online Privacy Rights Act would regulate "algorithmic decision-making," defined similarly to include computational processes derived from AI. Moving forward, comprehensive federal privacy bills may become more explicit in their treatment of AI, and bills drafted in previous sessions may be reintroduced and amended to account for the risks and opportunities presented by AI.
Several congressional hearings on AI have also been held recently. The House Armed Services Subcommittee on Cyber, Information Technologies, and Innovation and the Senate Armed Services Subcommittee on Cybersecurity met in March and April, respectively, to discuss AI and ML applications to improve DOD operations. On 16 May, the Senate Judiciary Subcommittee on Privacy, Technology and the Law held a hearing titled "Oversight of A.I.: Rules for Artificial Intelligence," while the Senate Committee on Homeland Security and Governmental Affairs held a full committee hearing, "Artificial Intelligence in Government," the same day.
AI governance policy within federal agencies
Virtually every federal agency has played an active role in advancing the AI governance strategy within the federal government and, to a lesser extent, around commercial activities. One of the first to do so was NIST, which published "U.S. Leadership in AI: A Plan for Federal Engagement in Developing Technical Standards and Related Tools" in August 2019 in response to EO 13859. The report identified areas of focus for AI standards and laid out a series of recommendations for advancing national AI standards development in the U.S. NIST's AI Risk Management Framework, released in January 2023, also serves as an important pillar of federal AI governance and is an oft-cited model for private sector activities.
By mid-2020, the FTC had entered the picture to provide the contours of its approach to AI governance, regulation and enforcement. Its more recent guidance has emphasized the agency's focus on companies' use of generative AI tools. Questions about whether firms are using generative AI in a way that, "deliberately or not, steers people unfairly or deceptively into harmful decisions in areas such as finances, health, education, housing, and employment," fall within the FTC's jurisdiction.
In late April, the FTC, along with the CFPB, the Justice Department's Civil Rights Division and the Equal Employment Opportunity Commission, issued a joint statement clarifying that their enforcement authorities apply to automated systems, which they define as "software and algorithmic processes, including AI, that are used to automate workflows and help people complete tasks or make decisions." Consistent with that statement, the EEOC also released a bulletin on its interpretation of existing antidiscrimination rules in employment, specifically Title VII of the Civil Rights Act of 1964, as they apply to the use of AI-powered systems.
Meanwhile, the National Telecommunications and Information Administration has issued an "AI Accountability Policy Request for Comment," seeking public feedback on policies to "support the development of AI audits, assessments, certifications and other mechanisms to create earned trust in AI systems," with written responses due 12 June. The NTIA will likely use the information it receives to advise the White House on AI governance policy issues.
Numerous other U.S. agencies have led their own AI initiatives and created AI-focused offices within their departments. For example, the Department of Energy's Artificial Intelligence and Technology Office developed an AI Risk Management Playbook in consultation with NIST and established an AI Advancement Council in April 2022. Within the Department of Commerce, the U.S. Patent and Trademark Office created an AI/emerging technologies partnership to examine and better understand the use of these technologies in patent and trademark examination and their effect on intellectual property.
More recently, the U.S. Department of Education Office of Educational Technology released a report on the risks and opportunities AI presents in educational settings.
AI governance policy and existing laws
A key point emphasized by U.S. regulators across multiple federal agencies is that current laws do apply to AI technology. Indeed, at least in the short term, AI regulation in the U.S. will consist more of figuring out how existing laws apply to AI technologies, rather than passing and applying new, AI-specific laws. In their joint statement, the FTC, EEOC, CFPB and Department of Justice noted how "existing legal authorities apply to the use of automated systems and innovative new technologies just as they apply to other practices." Expressing concern about "potentially harmful uses of automated systems," the agencies emphasized that they would work "to ensure that these rapidly evolving automated systems are developed and used in a manner consistent with federal laws."
On numerous occasions, the FTC has stated that the prohibition of unfair or deceptive practices in Section 5 of the FTC Act applies to the use of AI and ML systems. In its business guidance on using AI and algorithms, the FTC explained that the Fair Credit Reporting Act of 1970 and the Equal Credit Opportunity Act of 1974 "both address automated decision-making, and financial services companies have been applying these laws to machine-based credit underwriting models for decades."
Separately, the CFPB issued a circular clarifying that the adverse action notice requirement of the Equal Credit Opportunity Act and its implementing Regulation B, which requires creditors to explain the specific reasons an adverse credit decision was taken against an individual, still applies even if the credit decision is based on a so-called uninterpretable or "black-box" model. Such complex models may make it difficult, or even impossible, to accurately identify the specific reason for a denial of credit. Yet, as the CFPB further noted, creditors cannot rely on post-hoc explanation methods alone and must be able to "validate the accuracy" of any approximate explanations they provide. Thus, the guidance interprets the ECOA and Regulation B as not permitting creditors to use complex algorithms to make credit decisions "when doing so means they cannot provide the specific and accurate reasons for adverse actions." In his keynote address at the IAPP Global Privacy Summit 2023, FTC Commissioner Alvaro Bedoya echoed this point, explaining the FTC "has historically not responded well to the idea that a company is not responsible for their product because that product is a black box that was unintelligible or difficult to test."
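To make the black-box problem concrete, the following is a minimal, hypothetical Python sketch, not drawn from the CFPB circular or any creditor's actual system, of how an interpretable linear underwriting model can yield the applicant-specific reasons the ECOA contemplates. All feature names and data here are invented for illustration.

```python
# Hypothetical sketch: with a linear model, each feature's contribution to
# the log-odds is simply coefficient * value, so the most score-reducing
# features can be read off directly as specific reasons for denial.
# Synthetic data only; not the CFPB's or any creditor's methodology.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
features = ["credit_utilization", "late_payments", "account_age_years"]

# Synthetic applicants: higher utilization and more late payments reduce
# approval odds; longer account age increases them.
X = rng.normal(size=(1000, 3))
y = (X @ np.array([-1.5, -2.0, 1.0]) + rng.normal(size=1000) > 0).astype(int)

model = LogisticRegression().fit(X, y)

def adverse_action_reasons(applicant, top_n=2):
    """Rank features by how strongly they pushed this score toward denial."""
    contributions = model.coef_[0] * applicant  # per-feature log-odds effect
    order = np.argsort(contributions)           # most score-reducing first
    return [features[i] for i in order[:top_n]]

# A hypothetical denied applicant: high utilization, several late payments.
applicant = np.array([2.1, 1.8, -0.5])
print(adverse_action_reasons(applicant))
# e.g., ['late_payments', 'credit_utilization']
```

An opaque model offers no analogous direct readout; a creditor using one would instead need post-hoc approximations, which, per the CFPB, must be validated for accuracy before they can support an adverse action notice.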
Conclusion
Around the world, and particularly in the U.S., the most pressing questions around AI governance concern the applicability of existing laws to the new technology. Answering these questions will be a difficult task involving significant legal and technological complexities. Indeed, as the Business Law Section of the American Bar Association explained in its inaugural Chapter on Artificial Intelligence, "Companies, counsel, and the courts will, at times, struggle to grasp technical concepts and apply existing law in a uniform way to resolve business disputes."
Pro-social applications of AI abound, from achieving greater accuracy than human radiologists in breast cancer detection to mitigating climate change. Yet, anti-social applications of AI are no less numerous, from aiding child predators in avoiding detection to facilitating financial scams.
AI can be neither responsible nor irresponsible in and of itself. Rather, it can be used or deployed — by people and organizations — in responsible and irresponsible ways. It is up to lawmakers to determine what those uses are, how to support the responsible ones and how to prohibit the irresponsible ones, while professionals who create and use AI work to implement these governance principles into their daily practices.
Additional resources
- Artificial intelligence resources
- US federal privacy resources