As with seemingly every aspect of artificial intelligence, legislative activity related to potential AI risks and harms has moved with unprecedented speed. Often it can take decades for policymakers to begin responding to new technologies with targeted laws. But after generative AI captured the world's attention, it took only a matter of months for U.S. state legislatures to consider responsive legislation. States are not waiting for federal action, instead adopting a remarkably active stance on regulations responding to concerns around many different types of AI systems and contexts.
Unpacking the themes of these legislative efforts is an ongoing undertaking. To help keep track of this rapidly changing landscape, the IAPP AI Governance Center released the first version of the IAPP US State AI Governance Legislation Tracker, which collates legislative activity and reflects emerging themes across AI governance policymaking at the U.S. state level. To accompany this new resource, this article provides a deeper explanation of our findings.
We are not the first to attempt to track state legislative efforts to govern AI systems. Our efforts build on more comprehensive trackers from other organizations, especially the Future of Privacy Forum, but also MultiState, Husch Blackwell, and CITRIS and the Banatao Institute.
The U.S. model of private sector AI governance
Across states, the policy response to AI concerns is varied and complex. Unlike more established policy topics, such as consumer privacy, AI presents a wide range of challenges across many domains, with implications for seemingly every corner of existing legal practice.
In fact, this month Colorado became the first state to pass cross-sectoral private sector rules for certain high-risk AI systems.
This wave of cross-sectoral state legislative efforts began a few years ago and continues today. States first examined their own governmental approach to AI and began to conclude that more work was needed to understand its risks and benefits. Many states, including Indiana, Florida, Oregon, Tennessee and Washington, passed laws requiring ongoing studies or task forces to generate more actionable information about what should be done. Other states, including Maryland, New Hampshire and Virginia, passed laws requiring specific safeguards for their own state and local governmental use of AI systems or even banning certain high-risk governmental uses of AI altogether. Most legislatures across the country have considered similar proposals by now.
It was not until the recent generative AI revolution that legislators began turning with more vigor to questions about commercial and private sector AI guardrails. In the U.S. legal context, most of these efforts fall under the authority of legislative committees focused on consumer protection, and most of the proposed laws would modify the consumer protection section of the state's legal code. These are the laws tracked in our state legislative tracker.
What's in and what's out?
To provide clarity and focus, the tracker has been curated to include only legislative efforts that directly apply to private sector organizations. Government-only bills and laws have been deliberately omitted from our tracker. Sectoral rules are likewise excluded, even though some of them, such as New York City's Local Law 144, were among the first in the U.S. to establish obligations around the testing and deployment of AI systems, and other sectoral efforts are ongoing, such as the many proposals in the health care space. Their implications, while not captured in our tracker, are nonetheless vital to the broader discourse on AI governance. However, due to the limited scope of these rules, we focus our attention on laws that are broadly applicable to certain types of AI systems.
Scoping 31 flavors of AI
It is a Herculean challenge to craft comprehensive AI governance legislation that covers cross-sectoral uses of AI systems while ensuring required safeguards are tailored to a wide variety of risks across contexts. State definitions of covered systems run the gamut, reflecting much of the diversity seen in AI governance frameworks more broadly. Despite this definitional variety, a set of common obligations recurs across proposals; the categories below summarize them, each illustrated with an example provision.
Assessments
Assessments at various stages in the AI life cycle are pivotal in identifying and mitigating risks. State provisions may require organizations to conduct various forms of assessments, such as risk assessments, impact assessments or rights assessments. The depth and scope of these assessments vary.
Example provision: "An impact assessment completed pursuant to this subsection (3) must include, at a minimum, and to the extent reasonably known by or available to the deployer: …"
Training
Although not yet a common provision, state laws may stipulate training requirements for personnel involved in AI governance or end-users of automated systems.
Example provision: "Automated systems intended for use within sensitive domains, including but not limited to criminal justice, employment, education, and health, shall … include training for New York residents interacting with the system."
Responsible individual
As part of a holistic AI governance program, some states have considered requiring covered organizations to designate qualified individuals who are empowered and accountable for overseeing their AI systems.
Example provision: "A deployer shall designate at least one employee to be responsible for overseeing and maintaining the governance program and compliance with this Act. An employee designated under this subsection shall have the authority to assert to the employee's employer a good faith belief that the design, production, or use of an automated decision tool fails to comply with the requirements of this Act. An employer of an employee designated under this subsection shall conduct a prompt and complete assessment of any compliance issue raised by that employee."
General notice
Transparency is a fundamental principle in AI governance. Many state proposals include general notice provisions that would compel organizations to publicly disclose AI governance policies or provide general information about their use of covered AI systems. This would serve the dual purpose of enabling stakeholders to understand how AI is being used and governed within the organization, while also empowering consumer protection regulators.
Example provision: "A deployer or developer shall make publicly available, in a readily accessible manner, a clear policy that provides a summary of both of the following: (a) The types of automated decision tools currently in use or made available to others by the deployer or developer. (b) How the deployer or developer manages the reasonably foreseeable risks of algorithmic discrimination that may arise from the use of the automated decision tools it currently uses or makes available to others."
Labeling/notification
Enhanced notice obligations may also apply. Labeling requirements are the most common type of provision in the generative AI context, requiring up-front disclosure of the use of such systems or their work products. Notification measures that alert individuals to the use of AI systems also fall into this broad category. The contents of a required notification may range from simple acknowledgments to detailed explanations of the AI system's capabilities and risks.
Example provision: "(1) A deployer shall, at or before the time an automated decision tool is used to make a consequential decision, notify any natural person that is the subject of the consequential decision that an automated decision tool is being used. (2) A deployer shall provide to a natural person notified pursuant to this subdivision all of the following:
(A) A statement of the purpose of the automated decision tool.
(B) Contact information for the deployer.
(C) A plain language description of the automated decision tool. …
(D) Information sufficient to enable the natural person to request to be subject to an alternative selection process or accommodation, as applicable. …"
Explanation/incident reporting
Post-facto transparency obligations may also be imposed, such as explanations of an automated decision or incident reports to state regulators or affected individuals. Though these two types of reports are quite distinct, each is triggered by certain covered AI outcomes and requires action after the fact.
Example provision: "If a deployer deploys a high-risk artificial intelligence system on or after February 1, 2026, and subsequently discovers that the high-risk artificial intelligence system has caused algorithmic discrimination, the deployer, without unreasonable delay, but no later than ninety days after the date of the discovery, shall send to the Attorney General, in a form and manner prescribed by the attorney general, a notice disclosing the discovery."
Provider documentation
The relationship and shared responsibility between AI developers and deployers are critical. Provider documentation requirements may necessitate specific disclosures from developers to deployers, ensuring deployers are fully informed about the AI systems they use and can govern them responsibly.
Example provision: "A developer shall provide a deployer with a statement regarding the intended uses of the automated decision tool and documentation regarding all of the following:
(1) The known limitations of the automated decision tool. …
(2) A description of the type of data used to program or train the automated decision tool.
(3) A description of how the automated decision tool was evaluated for validity and explainability before sale or licensing.
(4) A description of the deployer's responsibilities under this chapter."
Registration
Some state laws may introduce preemptive assurance requirements that would necessitate public statements or registration before developing or deploying covered systems. This category includes licensing, proactive predisclosure or registration with a government entity. This is distinct from internal documentation requirements, which may later be subject to review by enforcement agencies.
Example provision: "Within 60 days after completing an impact assessment required by this Act, a deployer shall provide the impact assessment to the Department of Human Rights."
Third-party review
External oversight can provide an additional layer of accountability and ensure consistency. Though not common in recently proposed U.S. rules, third-party review requirements may take the form of external assessments or audits of AI systems — such as testing for rates of bias — or third-party assessment of overall governance programs.
Example provision: "A generative AI provider may continue to make available a generative AI system … if … the provider is able to retroactively create and make publicly available a decoder that accurately determines whether a given piece of content was produced by the provider's system with at least 99 percent accuracy as measured by an independent auditor."
Opt-out/appeal
Respecting individual autonomy, state provisions may establish opt-out or appeal mechanisms. These give individuals the option to avoid AI-facilitated decisions or to challenge them if they believe the decisions are incorrect or unfair. The form and timeframes for these proposed mechanisms differ.
Example provision: "An opportunity to appeal an adverse consequential decision concerning the consumer arising from the deployment of a high-risk artificial intelligence system, which appeal must, if technically feasible, allow for human review unless providing the opportunity for appeal is not in the best interest of the consumer, including in instances in which any delay might pose a risk to the life or safety of such consumer."
Nondiscrimination
Finally, nondiscrimination requirements include those provisions that prohibit algorithmic bias as well as those that provide individual rights to be free from algorithmic discrimination. State laws may impose duties on organizations to avoid or mitigate discriminatory impacts of AI systems, or they may empower individuals to seek redress for such harms. Some provisions provide specific obligations to mitigate bias, while others are more general.
Example provision: "A developer of a high-risk artificial intelligence system shall use reasonable care to protect consumers from any known or reasonably foreseeable risks of algorithmic discrimination arising from the intended and contracted uses of the high-risk artificial intelligence system."
As the state legislative cycle comes to an end, with some states coming quite close to passing impactful AI governance requirements, this tracker represents a look back on the first major wave of legislative activity.
When the next cycle picks up, the IAPP anticipates updating this tracker routinely to provide up-to-date information about important regulatory developments in the state laboratories of democracy.
Finally, it is important to note that, while both proposed and passed legislative efforts indicate an increasing desire for additional rules, questions remain about what constitutes an acceptable minimum standard for each of the identified categories of obligations. The IAPP AI Governance Center will continue to provide analysis on evolving best practices and standards as they emerge.