Colorado state lawmakers passed a landmark artificial intelligence bill 8 May, creating a potential model for other U.S. states as debate swirls over how to manage the technology.

Senate Bill 205, the proposed Colorado Artificial Intelligence Act, has some similarities to the EU AI Act: it takes a risk-based approach to AI, establishes rules around high-risk systems and creates requirements for when the use of AI must be disclosed.

The bill acts as a consumer protection vehicle. It requires both developers and deployers of AI to take “reasonable care” to prevent algorithmic discrimination in high-risk systems, defined as any AI that makes or helps make a consequential decision, such as those related to education, employment, finances, housing, health care or legal services. The proposal also places obligations on deployers of high-risk AI systems, including risk management and governance requirements.

Enforcement rests exclusively with the Colorado attorney general’s office, which also has discretionary rulemaking authority. If enacted by Gov. Jared Polis, D-Colo., SB 205 would take effect 1 Feb. 2026.

States have seen large numbers of AI-related bills introduced this session, but only some have made it into law, in part because lawmakers have been wary of their state being the first to make a regulatory move. That reasoning stymied Connecticut’s own AI bill after Gov. Ned Lamont, D-Conn., said states should work together on the issue.

It remains unclear how Polis, who has a background in tech entrepreneurship, will approach the bill. He has not taken a position on it but has been watching its progress, according to his spokesperson, Shelby Wieman, who said after the vote that Polis will review SB 205’s final text carefully once it reaches his desk.

“This is a complex and emerging technology and we need to be thoughtful in how we pursue any regulations at the state level,” she said.

Wieman added that Polis appreciated the creation, via a separate bill also approved by the legislature, of a task force that will review the law and propose changes before it takes effect. Those measures were put in place after members of the technology industry voiced concerns the bill might be rushed.

Substance of the bill

Developers of high-risk systems must be transparent about a system’s known risks and provide documentation on the type of training data used and how the system was evaluated prior to use. They must also supply information to help deployers conduct an impact assessment.

Deployers must create a risk management policy and governance program. They are required to conduct an impact assessment and to notify consumers when a consequential decision is made using a high-risk system. Consumers are allowed to appeal those decisions, and any instances of discrimination must be reported to the attorney general.

The bill also contains a rebuttable presumption that developers and deployers used reasonable care if they followed those provisions. And it creates an affirmative defense meant to avoid litigation for parties that comply with a national or international AI risk management framework designated in the law or by the attorney general, or that take specific steps to discover missteps.

There are several exemptions clarifying that developers and deployers are not restricted from complying with other local, state or federal regulations, using high-risk systems or conducting specified research activities within the state. The Colorado attorney general’s office, which holds sole enforcement authority, can also choose to create rules around how the law will work.

Afiniti Associate General Counsel for AI, Privacy and Security Kris Johnston, CIPP/E, CIPP/US, CIPM, CIPT, FIP, said AI governance leaders should not wait for the bill to take effect to put governance programs in place and understand the risks of the systems they are using, even if Colorado does become the first state to pass an AI law. She noted many states are not waiting for federal regulation to begin studying and passing AI laws, something companies should be prepared to address.

“That said, many stakeholders believe that a uniform federal approach, compared to state by state, would be preferable,” she said.

The lack of a private right of action puts the law on the less-strict side of the enforcement spectrum, according to Mayer Brown Partner Dominique Shelton Leipzig, CIPP/US. She also noted the bill previously included restrictions on generative AI, which were cut from the final product.

But Shelton Leipzig said the law still aligns with other global standards she has seen, particularly in its treatment of how high-risk systems should be managed.

“This idea that the responsibility for ensuring trustworthy AI lives with the providers of the technology as well as the deployer, that’s very important to understand,” she said. “Nothing in this law is really substantially different from other omnibus AI, regulatory or legislative frameworks we are seeing around the world.”

The legislative process

SB 205 did not arise in a vacuum. State Sen. Robert Rodriguez, D-Colo., made a point of telling colleagues during an April hearing that he had been working across state lines with state Sen. James Maroney, D-Conn., to craft the legislation and create a path for other states to pass similar laws.

The comments were meant to reassure colleagues that Colorado was not wading into the AI regulation arena alone. But Rodriguez also stressed the state, one of the first to pass a comprehensive privacy law, should not wait for others to catch up.

“At the end of the day, everyone’s concerned because we’re in a groundbreaking place when it comes to technology,” he said. “But every year we wait it gets harder to unravel.”