U.S. state-level enforcers and lawmakers say existing digital policy debates can offer creative ways to put guardrails on artificial intelligence while comprehensive solutions remain elusive.

Attorneys general from Massachusetts and Texas told IAPP AI Governance Global North America 2025 attendees that, three years after generative AI began to take off, it has become clear there is broad concern around child online protection, consumer rights and AI-generated media, based on the bills state legislatures across the country have enacted in recent years. They predicted those targeted issues will likely continue to gain traction over more cross-sectoral or comprehensive proposals, like the comprehensive privacy laws passed in 19 states, while enforcers examine how existing laws apply to AI technology.

"I think at least for the short term, we're going to see a lot of specific laws targeting specific use cases, and then there'll potentially be some broader AI frameworks that will pass across the country," said Tyler Bridegan, CIPP/E, CIPP/US, CIPM, director of privacy and tech enforcement at the Texas Attorney General's Office.

Even if a patchwork of laws develops across states, Massachusetts Assistant Attorney General Jared Rinehimer, CIPP/US, CIPT, said those laws give businesses some rules to go by. Enforcers are often not working in silos, either, he added.

"The other note I think I'll just make is that states work together, right? We're in touch with all the New England states, but not just them — you know, California, New York, Texas, Illinois, Washington, Oregon, right? Everybody who's doing all of this work talks to each other," he said.

The comments come as the U.S. legal landscape around AI is in flux. 

Congress passed a law to regulate deepfake images created through AI, while U.S. Sen. Ted Cruz, R-Texas, recently proposed an AI regulatory sandbox and legal exemptions for AI developers to build out products and services. Meanwhile, the U.S. Federal Trade Commission is investigating AI companies' data practices and chatbots acting as companions.

Class-action lawsuits around copyright and chatbots affecting children's mental health have become more commonplace, as have those related to existing wiretapping and eavesdropping laws, according to Orrick, Herrington & Sutcliffe Partner Nicholas Farnsworth, CIPP/US, CIPT, PLS.

"There's opportunities right now for class actions to be fairly creative, because there's not a lot of court cases that have settled and analyzed what AI means under these traditional laws to bring these types of class actions, particularly under statutes where there are statutory damages," he said.

Farnsworth pointed to efforts like those in Utah, which wrapped AI into the state's existing consumer protection law, and to bills focused on regulated industries like health care, as places where those concerned about AI's harms are finding legislative gains.

Creating the next set of rules

State lawmakers find themselves navigating how to address AI harms at a time of broad resistance from the White House and AI companies.

Although a moratorium to penalize states for raising AI guardrails was ultimately abandoned in Congress' reconciliation negotiations earlier this year, the idea is still a piece of the White House's AI Action Plan. During a Sept. 17 Axios event, Sen. Cruz floated the possibility that the measure could come up in another bill, but said he and the White House are still working on the details.

U.S. consumers, meanwhile, are increasingly concerned about AI, the Pew Research Center found in a recent report. Those feelings undergird a sense of urgency for lawmakers, said state delegate Michelle Lopes Maldonado, D-Va.

"It is inevitable that we will get to a place where more people than not will demand to know more, to be able to make more decisions about what happens with their data," she said.

Not having that kind of control or guardrails around AI "fuels this distrust, and it doesn't help that at the federal level, the actions haven't been taken, and at the state levels, we've tried to make steps forward, and it's very hard to reconcile the two in this current climate," she said.

Maldonado sponsored an AI protections bill that was ultimately vetoed by Gov. Glenn Youngkin, R-Va. State Sen. James Maroney, D-Conn., who also spoke at AIGG, has likewise struggled in recent years to enact an AI bill despite success in his state's legislature.

In both cases, the governors expressed concern that the measures would negatively affect the AI technology industry.

But Maroney said he still hopes to put in place general transparency requirements for AI, similar to what Connecticut did when adding provisions around significant profiling to its comprehensive privacy law earlier this year.

While he is concerned about AI algorithms' impact on housing and employment, Maroney said there must be recognition of the political realities around AI right now. That does not mean stopping the fight to put protections in place, however.

"I'm not going to change what I fight for. Right? It's innovation and opportunity," he said. "I may have to change the way I talk about what I fight for, and that's one of the things that we've been working on."

Caitlin Andrews is a staff writer for the IAPP.