Texas became the fourth U.S. state to pass a cross-sectoral law regulating the use of artificial intelligence when Gov. Greg Abbott, R-Texas, signed House Bill 149, the Texas Responsible Artificial Intelligence Governance Act, into law 22 June.

Against the backdrop of a proposed federal moratorium on state governments' ability to draft legislation regulating various applications of AI, Texas is forging ahead with some guardrails in place on the use of the technology, following on the heels of AI governance-related laws in California, Colorado and Utah. The TRAIGA enters into force 1 Jan. 2026, one month before the Colorado AI Act.

State Rep. Giovanni Capriglione, R-Texas, sponsored the TRAIGA in the House and identified notable differences between Texas' approach and that of Colorado. He told the IAPP that, while Colorado's law seeks to regulate high-risk uses of AI, Texas' law aims to prevent and respond to harms caused by misuse of AI systems.

Capriglione opined the broader push among states to explore passing AI regulations was partly born of the recognition that they had been slow to pass consumer data privacy laws in the past. He said he wanted Texas to be proactive in getting ahead of the rapidly changing AI ecosystem so some baseline guardrails are in place going forward.

"We didn't want to be overburdensome, and we wanted to try to do this in a way that is reasonable and protects people from some of the biggest harms," Capriglione said. "We thought that we had to hit the very high-level issues, and that means looking at some of the outputs that are problematic, making sure there are some disclosure requirements, we were updating our privacy laws and that we were listening to industry."

Public, private sector impacts

Key provisions of the TRAIGA include disclosure requirements for state agencies when citizens interact with AI tools a specific agency may be using, bans on capturing biometric identifiers without consent, and prohibitions on AI developers creating systems designed to manipulate human behavior, make discriminatory decisions or produce deepfakes that exploit children.

The TRAIGA also establishes a regulatory sandbox contained within the newly created Artificial Intelligence Council under the state Department of Information Resources for companies to test AI models without fear of violating the law.

Capriglione said the TRAIGA will likely have different impacts on public entities and the private sector.

Under the law, public-sector entities' use of AI will be more heavily scrutinized to ensure the systems being used uphold citizens' rights. In the private sector, Capriglione indicated much of the TRAIGA was written to prevent businesses from knowingly deploying AI systems that cause harm to consumers.

"Making sure the government is restricted in how it uses AI is actually easier to get done because it's a public process," he said. "Agencies are going to have to come up with acceptable use policies and ethics on how each individual agency may or may not want its employees to use AI based on the risk levels."

Latham & Watkins Counsel Robert Brown, CIPP/US, CIPM, PLS, said a key dynamic of the TRAIGA is its "intent element," meaning an entity developing or deploying an AI model must be found to have knowingly disregarded key requirements of the law while creating or using the model to be found in violation.

"Governmental agencies will feel the greatest impact, as many of the requirements under the final version of the bill apply exclusively to them," Brown told the IAPP. "The impact on private companies will be more limited — the law prohibits them from developing or deploying AI systems for various illicit purposes, but critically, each of these prohibitions includes an 'intent' element."

Key enforcement provisions: Per-violation penalties?

The TRAIGA empowers state agencies to issue fines up to USD100,000 to any licensed individuals or organizations for violations caused by misuse of their AI systems.

Capriglione said the law, as written, allows for per-violation penalties, similar to how the Illinois Biometric Information Privacy Act functioned prior to reforms passed in 2024 that removed per-scan violations, due in part to the significant fines issued to businesses for noncompliance. If covered entities do not rectify a violation within a 60-day cure period, the attorney general may assess an administrative fine "of not less than USD80,000 and not more than USD200,000 per violation," according to the statute.

The per-violation nature of enforcement could prove costly. Capriglione outlined a hypothetical scenario in which an insurance company deploys an AI tool that evaluates homeowners' eligibility for a policy and wrongfully denies 5,000 of them insurance; that company could be liable for each erroneous denial its AI tool rendered.

However, he also said the TRAIGA is not intended to grossly penalize businesses using AI, and it includes provisions to ensure good faith compliance efforts on the part of AI deployers that result in unexpected violations do not draw major fines. Unlike the BIPA, the TRAIGA does not allow for a private right of action.

"We allow for an opportunity to cure (the unlawful activity)," Capriglione said. "We provide sufficient time for someone to go in and fix their violation. And that is a benefit to the business, which is as long as they fix the problem, they'll avoid penalties."

Brown said much of how the TRAIGA's penalties are applied may ultimately be decided on a case-by-case basis by the state attorney general's office.

"While we don't yet know how the Texas Attorney General will interpret TRAIGA, the prohibitions under the law apply to the development, deployment, and/or distribution of AI systems for certain purposes," Brown said. "The law is to be 'broadly construed,' and the attorney general has not been shy about bringing enforcement actions centered on the use of AI technologies in recent years. It's also worth noting that unlike under BIPA, TRAIGA is exclusively enforced by the Attorney General and does not provide a private right of action." 

Potential state AI moratorium looms large

Still, the prospect of Congress approving a 10-year ban on enforcing AI legislation raises uncertainty over TRAIGA's enforcement and other states' AI legislative work.

Capriglione said the majority of the work on the TRAIGA and its filing was completed before Congress floated the moratorium in its reconciliation bill. He does not support federal lawmakers including the AI provision because the proposals so far are simultaneously too vague and too proscriptive. For example, he said the moratorium's lack of clarity could prevent municipalities from approving the construction of data centers.

Regardless of what the final language of the moratorium may say, Capriglione said it will likely lead to "thousands of court cases" if it passes.

"I appreciate all the federal government does, however they have not really been able to work on super complicated, technical things like this for a long time and actually get them passed," he said. "I would make the case (Congress) is still quite a ways way from having something in place that will sufficiently protect my constituents here in the state of Texas."

Brown believes the final iteration of the TRAIGA was written with an eye toward the moratorium, although he doesn't think the law in any form would remain in place if Congress ultimately passes its reconciliation bill with the moratorium included.

"It's possible one of the goals of scaling back the law was to placate federal lawmakers enough to avoid a complete moratorium on state AI laws," Brown said. "Given how broadly the proposed moratorium is drafted, there's really no version of this bill that could survive it."

Capriglione also said that while the TRAIGA is the first AI governance law passed in Texas, it is unlikely to be the last governing how the technology is used within the state.

"I've spent the last six years working on these policy issues, and it's important that we continue our work because the technology is just changing so quickly,” Capriglione said. "I'm happy to work with anybody, anywhere, any time on crafting really good AI policy."

Alex LaCasse is a staff writer for the IAPP.