As with seemingly every aspect of artificial intelligence, legislative activity related to potential AI risks and harms has moved with unprecedented speed. Often it takes decades for policymakers to begin responding to new technologies with targeted laws. But after generative AI captured the world's attention, it took only a matter of months for U.S. state legislatures to consider responsive legislation. Rather than waiting for federal action, states are taking a remarkably active stance, advancing regulations that respond to concerns about many different types of AI systems and contexts.

Unpacking the themes of these legislative efforts is an ongoing undertaking. To help keep track of this rapidly changing landscape, the IAPP AI Governance Center released the first version of the IAPP US State AI Governance Legislation Tracker, which collates legislative activity and reflects emerging themes across AI governance policymaking at the U.S. state level. To accompany this new resource, this article provides a deeper explanation of our findings.

We are not the first to attempt to track state legislative efforts to govern AI systems. Our efforts build on more comprehensive trackers from other organizations, especially the Future of Privacy Forum, but also MultiState, Husch Blackwell, and CITRIS and the Banatao Institute.

The U.S. model of private sector AI governance

Across states, the policy response to AI concerns is varied and complex. Unlike more established policy topics, such as consumer privacy, AI presents a wide range of challenges across many domains, with implications for seemingly every corner of existing legal practice.

In fact, this month Colorado became the first state to pass cross-sectoral private sector rules for certain high-risk AI systems.

This wave of cross-sectoral state legislative efforts began a few years ago and continues today. States first examined their own governmental approach to AI and began to conclude that more effort was needed to understand its risks and benefits. Many states, including Indiana, Florida, Oregon, Tennessee and Washington, passed laws requiring ongoing studies or task forces to generate more actionable information about what should be done. Other states, including Maryland, New Hampshire and Virginia, passed laws requiring specific safeguards for their own state and local governmental use of AI systems, or even banning certain high-risk governmental uses of AI altogether. Most legislatures across the country have considered similar proposals by now.

It was not until the recent generative AI revolution that legislators began turning with more vigor to questions about commercial and private sector AI guardrails. In the U.S. legal context, most of these efforts fall under the authority of legislative committees focused on consumer protection, and most of the proposed laws would modify the consumer protection section of the state's legal code. These are the laws tracked in our state legislative tracker.

What's in and what's out?

To provide clarity and focus, the tracker has been curated to include only legislative efforts that directly apply to private sector organizations. Government-only bills and laws have been deliberately omitted from our tracker. While it is undeniable that governmental regulations and other efforts like President Joe Biden's Executive Order 14110 will inevitably influence AI governance practices, our aim is to spotlight those legislative frameworks that create obligations on organizations in the private sector.

Furthermore, we have chosen to exclude bills that propose the establishment of a state AI task force or advisory council. Although these entities play a crucial role in shaping the future of AI policy, their creation does not directly impose new obligations or frameworks that private sector organizations must navigate. As such, they fall outside the scope of our current tracking efforts. That said, some commissions and task forces do have authority to make future recommendations on commercial AI uses.

Finally, despite the benchmark-setting importance of sectoral AI activity, we do not include sectoral bills and laws in the tracker. Laws and regulations in the employment sector, such as New York City's Local Law 144, were some of the first in the U.S. to establish obligations around the testing and deployment of AI systems. Other sectoral efforts are ongoing, such as the many proposals in the health care space. Their implications, while not captured in our tracker, are nonetheless vital to the broader discourse on AI governance. However, because these rules are limited in scope, we focus our attention on laws that apply broadly to certain types of AI systems.

Scoping 31 flavors of AI

It is a Herculean challenge to craft comprehensive AI governance legislation that covers cross-sectoral uses of AI systems, while ensuring required safeguards are tailored to meet a wide variety of risks across contexts. State definitions of covered systems run the gamut, reflecting much of the diversity seen in international definitions of AI.

Once generally defined, state AI proposals usually narrow the scope of application in some way, rather than requiring the same safeguards for all covered systems. For example, the most common flavor of AI legislation in 2024 has been bills focused on generative AI or synthetic content.

To reflect the diversity of covered systems without getting stuck in the weeds of each definition, the IAPP tracker includes a scope column to indicate the types of systems covered by provisions of each bill. Instead of or in addition to generative AI, some proposals are targeted to foundation models or frontier models. A few include provisions only applicable to AI systems trained on personal data.

Finally, many bills focus instead on automated decision-making systems, reflecting the most common approach to identifying high-risk AI under current U.S. laws, including in the privacy context.

Guardrails sorted, mischief managed

The IAPP tracker is meant to reflect the trends seen across U.S. state laws, while also being mindful of the broader context of emerging AI governance best practices. With this in mind, we have sorted obligations into broad themes across the varied state proposals and created a column indicating whether each legislative proposal includes an obligation within that theme. It is important to note the legislative text varies widely among proposals, so a check mark in the chart does not necessarily indicate an equivalent obligation in each box.

As obligations are generally split between developers and deployers, the chart reflects which of these organizations each relevant provision applies to. Of course, definitions of these entity types and their respective roles also differ widely between laws. The chart is best used as a general comparison tool; to understand a law's full implications, you should always read the text of the law itself.

The examples excerpted below are selected as a single instance of each category. They are not necessarily representative of the entire category but are meant to illustrate one form it can take.

AI governance programs

The cornerstone of responsible AI management lies in the establishment of a comprehensive AI governance program. State laws may require organizations to develop and maintain robust policies and procedures that address the life cycle of AI systems. This can involve documenting the design, development and deployment processes, as well as maintaining records of risk assessments and mitigation actions.

Example provision: "A developer of a covered model shall periodically reevaluate the procedures, policies, protections, capabilities, and safeguards implemented pursuant to this section in light of the growing capabilities of covered models and as is reasonably necessary to ensure that the covered model or its users cannot remove or bypass those procedures, policies, protections, capabilities, and safeguards."

Assessments

Assessments at various stages in the AI life cycle are pivotal in identifying and mitigating risks. State provisions may require organizations to conduct various forms of assessments, such as risk assessments, impact assessments or rights assessments. The depth and scope of these assessments vary.

Example provision: "An impact assessment completed pursuant to this subsection (3) must include, at a minimum, and to the extent reasonably known by or available to the deployer:

  • a statement by the deployer disclosing the purpose, intended use cases, and deployment context; …
  • an analysis of whether the deployment of the high-risk artificial intelligence system poses any known or reasonably foreseeable risks; …
  • a description of the categories of data; …
  • if the deployer used data to customize the high-risk artificial intelligence system, an overview of the categories of data; …
  • any metrics used to evaluate the performance and known limitations; …
  • a description of any transparency measures; …
  • and a description of the post-deployment monitoring and user safeguards provided."

Training

Although not yet a common provision, state laws may stipulate training requirements for personnel involved in AI governance or end-users of automated systems.

Example provision: "Automated systems intended for use within sensitive domains, including but not limited to criminal justice, employment, education, and health, shall … include training for New York residents interacting with the system."

Responsible individual

As part of a holistic AI governance program, some states have considered requiring covered organizations to designate qualified, responsible individuals who are empowered and accountable for overseeing their AI systems.

Example provision: "A deployer shall designate at least one employee to be responsible for overseeing and maintaining the governance program and compliance with this Act. An employee designated under this subsection shall have the authority to assert to the employee's employer a good faith belief that the design, production, or use of an automated decision tool fails to comply with the requirements of this Act. An employer of an employee designated under this subsection shall conduct a prompt and complete assessment of any compliance issue raised by that employee."

General notice

Transparency is a fundamental principle in AI governance. Many state proposals include general notice provisions that would compel organizations to publicly disclose AI governance policies or provide general information about their use of covered AI systems. This would serve the dual purpose of enabling stakeholders to understand how AI is being used and governed within the organization, while also empowering consumer protection regulators.

Example provision: "A deployer or developer shall make publicly available, in a readily accessible manner, a clear policy that provides a summary of both of the following: (a) The types of automated decision tools currently in use or made available to others by the deployer or developer. (b) How the deployer or developer manages the reasonably foreseeable risks of algorithmic discrimination that may arise from the use of the automated decision tools it currently uses or makes available to others."

Labeling/notification

Enhanced notice requirements may also apply. Labeling requirements are the most common type of provision in the generative AI context, requiring up-front disclosure of the use of such systems or their work products. Notification measures that alert individuals to the use of AI systems also fall into this broad category. The contents of a required notification may range from simple acknowledgments to detailed explanations of the AI's capabilities and risks.

Example provision: "(1) A deployer shall, at or before the time an automated decision tool is used to make a consequential decision, notify any natural person that is the subject of the consequential decision that an automated decision tool is being used. (2) A deployer shall provide to a natural person notified pursuant to this subdivision all of the following:

(A) A statement of the purpose of the automated decision tool.
(B) Contact information for the deployer.
(C) A plain language description of the automated decision tool. …
(D) Information sufficient to enable the natural person to request to be subject to an alternative selection process or accommodation, as applicable. …"

Explanation/incident reporting

Post-facto transparency obligations may also be imposed, such as explanations of an automated decision or incident reporting to state regulators or affected individuals. Though these two types of reports are quite distinct, both are triggered by certain covered AI outcomes and require action after the fact.

Example provision: "If a deployer deploys a high-risk artificial intelligence system on or after February 1, 2026, and subsequently discovers that the high-risk artificial intelligence system has caused algorithmic discrimination, the deployer, without unreasonable delay, but no later than ninety days after the date of the discovery, shall send to the Attorney General, in a form and manner prescribed by the attorney general, a notice disclosing the discovery."

Provider documentation

The relationship and shared responsibility between AI developers and deployers is critical, and provider documentation requirements may necessitate specific disclosures from developers to deployers. This ensures deployers are fully informed about the AI systems they utilize and can govern them responsibly.

Example provision: "A developer shall provide a deployer with a statement regarding the intended uses of the automated decision tool and documentation regarding all of the following:

(1) The known limitations of the automated decision tool. …
(2) A description of the type of data used to program or train the automated decision tool.
(3) A description of how the automated decision tool was evaluated for validity and explainability before sale or licensing.
(4) A description of the deployer's responsibilities under this chapter."

Registration

Some state laws may introduce preemptive assurance requirements that would necessitate public statements or registration before developing or deploying covered systems. This category includes licensing, proactive predisclosure or registration with a government entity. This is distinct from internal documentation requirements, which may later be subject to review by enforcement agencies.

Example provision: "Within 60 days after completing an impact assessment required by this Act, a deployer shall provide the impact assessment to the Department of Human Rights."

Third-party review

External oversight can provide an additional layer of accountability and ensure consistency. Though not common in recently proposed U.S. rules, third-party review requirements may take the form of external assessments or audits of AI systems — such as testing for rates of bias — or third-party assessment of overall governance programs.

Example provision: "A generative AI provider may continue to make available a generative AI system … if … the provider is able to retroactively create and make publicly available a decoder that accurately determines whether a given piece of content was produced by the provider's system with at least 99 percent accuracy as measured by an independent auditor."

Opt-out/appeal

Respecting individual autonomy, state provisions may establish opt-out or appeal mechanisms. These give individuals the option to avoid AI-facilitated decisions or to challenge them if they believe the decisions are incorrect or unfair. The form and timeframes for these proposed mechanisms differ.

Example provision: "An opportunity to appeal an adverse consequential decision concerning the consumer arising from the deployment of a high-risk artificial intelligence system, which appeal must, if technically feasible, allow for human review unless providing the opportunity for appeal is not in the best interest of the consumer, including in instances in which any delay might pose a risk to the life or safety of such consumer."

Nondiscrimination

Finally, nondiscrimination requirements include those provisions that prohibit algorithmic bias as well as those that provide individual rights to be free from algorithmic discrimination. State laws may impose duties on organizations to avoid or mitigate discriminatory impacts of AI systems, or they may empower individuals to seek redress for such harms. Some provisions provide specific obligations to mitigate bias, while others are more general.

Example provision: "A developer of a high-risk artificial intelligence system shall use reasonable care to protect consumers from any known or reasonably foreseeable risks of algorithmic discrimination arising from the intended and contracted uses of the high-risk artificial intelligence system."

Looking ahead to 2025

As the state legislative cycle comes to an end, with some states coming quite close to passing impactful AI governance requirements, this tracker represents a look back on the first major wave of legislative activity.

When the next cycle picks up, the IAPP anticipates updating this tracker routinely to provide up-to-date information about important regulatory developments in the state laboratories of democracy.

Finally, it is important to note that, while both proposed and passed legislative efforts indicate an increasing desire for additional rules, questions remain about what qualifies as an acceptable minimum standard for each of the identified categories of obligations. The IAPP AI Governance Center will continue to provide analysis on evolving best practices and standards as they emerge.