Five AI trends in the 2026 US state legislative session

AI-targeted regulation at the state level shows no sign of slowing. Some common themes have emerged as lawmakers seek to set rules addressing issues ranging from concrete risks to children's mental health, to impactful automated decisions, to concerns about worst-case scenario catastrophes.

Contributors:
David Botero
Westin Fellow
IAPP
Cobun Zweifel-Keegan
CIPP/US, CIPM
Managing Director, D.C.
IAPP
The newest legislative sessions in the U.S. have been the busiest yet when it comes to artificial intelligence governance proposals. Absent concerted action toward regulation at the federal level, state legislatures are taking center stage. These efforts continue even as the largely deregulatory AI executive order issued by President Donald Trump at the end of 2025 looms, including its creation of an AI litigation task force within the U.S. Department of Justice with a mandate to target state AI laws.
At the macro level, state legislators are introducing more AI bills than ever before. Zooming in, we see a few new trends this year. In place of an omnibus approach, policymakers have gravitated toward model-specific regulation. So far, this has been most apparent in bills focused on chatbots, health-related systems and algorithmic pricing. Policymakers have also doubled down on scrutiny of frontier AI models after the first such bills were recently signed into law.
The IAPP’s updated U.S. State AI Governance Legislation Tracker presents a list of cross-sectoral AI governance bills that apply to the private sector. While the tracker does not capture all the above trends since many are sector-specific, the IAPP continues to track the emergence of best practices around transparency, governance and assurance for deployers and developers, as well as the emergence of AI-specific rights for individual consumers.
There are five broad themes worth understanding in the current legislative cycle.
1. Legislatures diversify their approach and narrow their focus
AI regulation has become a central point of the tech policy agenda in nearly every state, with more bills being introduced each year. In 2023, the National Conference of State Legislatures estimated that 86 private-sector-focused bills were introduced in at least 25 states, Puerto Rico and the District of Columbia. By 2025, this number had increased substantially, with the NCSL tracking 589 private-sector AI bills introduced in all 50 states; Washington, D.C.; Puerto Rico; and the U.S. Virgin Islands. So far this year, some estimates say at least 240 bills had already been introduced by 1 Feb., adding to the many bills held over from 2025 and putting 2026 on track to outpace prior years.
The explosion of regulation is not limited to the raw number of bills introduced, as topical expansion has also been on state policymakers’ agendas. Though already somewhat rare in the prior term, cross-sectoral, omnibus proposals along the lines of the EU AI Act — or even more basic multipronged transparency and accountability laws like the Texas Responsible AI Governance Act — are even harder to find in 2026. Instead, states are delving into targeted solutions for a widening range of discrete harms caused by AI systems.
This means a wider variety of sectoral legislation, such as separate bills covering the use of AI systems in health care, mental health and employment contexts.
It also means continued growth in the number of bills relating only to specific types of AI systems, with targeted guardrails for generative AI, chatbots, frontier AI models, automated decision-making technologies and pricing algorithms each under consideration in at least a dozen states. The combinations of proposals bundled into a single bill have also become less predictable this term, with legislators seeming to pick and choose from this smorgasbord of AI risks when crafting legislation. Vermont's H.341, for example, would create guardrails that apply to the types of AI risks often covered separately in foundation model and automated decision-making bills.
As for more general, cross-sectoral AI governance legislation, the newest wave of proposals is characterized by a hands-off approach, often simply adjusting existing liability regimes to clarify when the actions of AI systems create legal liability for developers or deployers, or, conversely, establishing regulatory sandboxes, safe harbors or affirmative defenses.
At the same time, other legislators are doubling down on accountability with bills that propose private rights of action or explore the potential for products-liability-style rules for AI systems.
For example, bills in Maryland and Michigan would impose liability on developers, and in some cases deployers, for harm caused by design defects, failure to warn or provide adequate instruction, and breach of express warranties. Tennessee's HB1951 goes even further, establishing criminal liability for deployers of AI systems that encourage an individual to commit suicide.
2. A Sacramento effect with local flavor on frontier AI model regulation
During 2025, California and New York became the first states to enact transparency-focused bills that relate to the unique catastrophic risks posed by frontier AI models — the largest and most powerful general-purpose models pushing the boundaries of AI performance. Initially, the two states implemented different regulatory approaches. However, in a rapid example of the Sacramento effect, New York’s governor signed the RAISE Act under the condition that an amendment would later be introduced to make New York’s law nearly identical to California’s.
California's version includes documentation requirements for internal governance that must also be shared externally with consumers and enforcement authorities. The California law also includes reporting obligations triggered by foreseeable catastrophic incidents. These obligations focus on specific outcomes, such as the creation or release of weapons of mass destruction, or incidents causing more than $1 billion in damages or the deaths of more than 50 individuals in a single incident.
In the current session, other states, including Illinois, Massachusetts, Michigan, Nebraska, Tennessee and Utah, are following this lead by introducing legislation that would focus on catastrophic foundation model risks.
While bills such as Massachusetts' S. 2630 reproduce the text adopted in California and soon to be adopted in New York, most of the other bills under consideration include unique additions that would expand the substance of frontier model regulation. For example, Illinois' SB3312 and HB4799 would amend the state's Freedom of Information Act to exempt information about critical safety incidents obtained by the attorney general from FOIA requests.
On the other hand, Illinois' SB3261, Nebraska's LB1083, Tennessee's HB 1898 and Utah's H.B. 286 go further, adding transparency and reporting requirements focused on protecting children, such as an obligation to publish a child-specific protection plan. Illinois' SB3261 would also require the developer to retain a third-party auditor to, among other things, produce a report on the model's compliance with established safety plans. Meanwhile, Michigan's HB 4668 is closer to the original version of New York's RAISE Act but likewise requires a third-party auditor to assess the developer's compliance with its own security protocol.
Although third-party assessments are often framed as a recommendation rather than a requirement in other AI governance legislation, the stricter approach in foundation model bills reflects the severity of the harms they aim to prevent.
3. Chatbot transparency and child safety absorb the oxygen in state capitols
As the newest wave of large language model-powered chatbots has gained popularity, legislators at both the state and federal level have increasingly focused on chatbots and their relationship with consumers, with a special interest in children's interactions. At the federal level, Congress is considering the Safeguarding Adolescents From Exploitative (SAFE) BOTs Act, a bill that would create transparency obligations requiring a chatbot to clearly and conspicuously disclose that it is neither human nor a licensed professional. This bipartisan House bill is currently part of a package that may be voted out of the House Committee on Energy and Commerce this week.
At the state level, legislators across 30 states are considering more than 60 bills that centrally focus on chatbot oversight. If enacted, they will join the handful of chatbot laws passed in 2025 in California, Maine, New Hampshire, New York and Utah, as shown in the Future of Privacy Forum's resource summarizing AI laws from the 2025 session.
The active bills have three main themes. Transparency is front and center, and most of the proposals would require chatbots to disclose that they are not human, but an AI system, either proactively via a notice or chat integration or in response to a user prompt. To address health care-specific concerns, other bills would require a chatbot to disclose that it is not a licensed medical professional when relevant.
Other bills propose special protections for children and child-related services, content or products, such as age assurance obligations to prevent minors from using the system. They also propose altering how chatbots interact with young people when mental health is at risk or implementing other safeguards to prevent parasocial relationships with the AI model. Finally, many proposals would generally prohibit developers or deployers from making misleading claims, such as claiming the model can provide clinical or psychological advice.
4. The right to appeal automated decisions is not just for privacy anymore
Many of the existing comprehensive state consumer privacy laws include some version of a right to opt out of or question automated decisions of legal significance. This right has become a staple of most new privacy bills, including those moving forward in Maine, Massachusetts and Oklahoma.
Meanwhile, bills addressing the potential harms of automated decision-making systems continue to proliferate, even as the future of the Colorado AI Act in its current form remains uncertain. This year, at least eight states, across 12 bills, have introduced a right to appeal automated decisions, though they generally limit this right to decisions made using automated decision-making technology, or ADMT. Of course, this also remains a focus of some sectoral legislation, including in employment, housing and public sector contexts.
The right to appeal, as proposed in most active bills, would require a human reviewer with the power to overturn an adverse consequential decision made by an AI system. Under an internal procedure, the consumer could submit documentation supporting the appeal within a defined timeframe, and a human would be required to evaluate the decision and explain the reasoning behind it.
Though no bill with this version of appeal language has yet passed, states considering ADMT rules are likely to propose a right to appeal in some form as they seek safeguards against bias, errors and discrimination by automated systems in defined contexts.
5. Lawmakers question the price we pay
Probably the most rapidly expanding trend for AI governance at the state level has been lawmaker scrutiny of automated systems used to set individualized prices based on personal data, including behavior or inferences about willingness to pay. Different terms are used to describe this practice — from surveillance-based pricing to data-driven pricing to bespoke pricing — and stakeholders strongly disagree on where to draw the line between beneficial and harmful price setting.
Many of these proposed bills would categorize certain types of personalized algorithmic pricing as a deceptive trade practice with significant statutory fines. In some cases, they differentiate between dynamic pricing based on market demand or other factors and surveillance-based price discrimination, which relies on a consumer’s unique data profile. In addition to enforcement by state attorneys general, many of these bills include private rights of action.
Clear and conspicuous notice about data-driven pricing is a mainstay of these bills. In 2025, New York passed a bill with such a requirement that is currently being litigated in the U.S. Court of Appeals for the Second Circuit after the National Retail Federation challenged it on First Amendment grounds. Other pending bills go further, setting limits on the practice or outright banning the use of certain types of data, such as protected class characteristics, to set prices.
Conclusion
AI-targeted regulation shows no signs of stopping. Regardless of differences between states, common themes seem to be arising as legislators seek to establish rules for developers and deployers to address the pressing problems caused by the explosion of AI — from concrete risks to children's and teens' mental health, to impactful automated decisions, to concerns about worst-case scenario catastrophes.
Transparency continues to be a main focus, as it empowers regulators and consumers to make informed choices about managing the evolving risks and benefits of these new technologies. Legislators generally continue to deploy a consumer protection model, though in multiple flavors aimed at the same goal. But there is no single standard, and continued divergence is likely as policymakers identify new top-level risks and explore new theories of liability and mechanisms of oversight.
