Editor's note: The IAPP is policy neutral. We publish contributed opinion and analysis pieces to enable our members to hear a broad spectrum of views in our domains.
Less than a year after the ink dried on the Colorado Artificial Intelligence Act, legislators have unveiled a proposal to significantly overhaul the framework. The new draft bill, known as SB 318, was introduced 28 April by the framework's original champion, Sen. Robert Rodriguez, D-Colo., along with Rep. Brianna Titone, D-Colo.
The draft appears to take to heart much of the feedback received since the original bill's passage, starting with the public reluctance Gov. Jared Polis, D-Colo., expressed when signing it into law. At the time, Polis encouraged legislators to tighten the law's definitions, narrow its scope and streamline governance requirements. The revisions would seek to check all these boxes and more.
Yet time has almost run out for the Colorado legislature to act on this late-breaking proposal before the end of the state’s legislative session on 7 May. If the amendments fail to pass, the existing Colorado AI Act will go into effect on 1 February 2026.
Whistling a new AI governance tune
The changes brewing in Colorado are part of a larger trend in AI governance policy.
Like the call of a meadowlark in springtime answered by his rival in the next territory, state policymakers continue to try to outdo each other on the issue of AI governance. Yet the tone and timbre of their efforts have shifted markedly over the past year as the entire AI policy conversation has pivoted from prioritizing risk management to privileging innovation. Policymakers and industry groups alike seem to be narrowing their ambitions for mandating any embrace of AI governance best practices.
In the same week the Colorado Legislature began to consider this long-rumored reset, the California Privacy Protection Agency announced plans to drastically scale back its long-delayed automated decision-making rules, as IAPP’s Alex LaCasse reports. In fact, the two revised proposals often echo each other, including in surprising ways such as new language in both frameworks regarding expectations for consumers’ ability to question or contest automated decisions.
Smaller guardrails, but still high tensile strength
Overall, the Colorado edits would represent a significant scaling back of Colorado's AI governance requirements for automated decision-making technologies — with reduced scope, more time to comply, a tiered implementation schedule for smaller companies, and pared-back obligations on developers and, to some extent, deployers. Deployers' relief is limited because the draft also rebalances responsibility between entities, expanding the prescriptions that apply to those who deploy AI systems, especially in certain contexts.
For those paying attention, the hints of these pending changes have not been subtle. In February, Colorado's AI Task Force released a report with recommendations for revising the state's framework, but fell short of proposing legislative text. Most of the task force's priority issues are reflected in the draft amendments, though since the task force largely could not agree on its expectations for revisions, it is hard to say whether its members would be satisfied with the new draft.
Define, exempt and delay
Definitions, often a starting point for critiques of the framework, including by the task force, have been significantly revised; among them are "algorithmic discrimination" and "consequential decisions." The existing law created its own legal test for algorithmic discrimination. The revisions would eschew this test and instead provide that a discriminatory algorithm is one used in a way that violates existing local, state or federal anti-discrimination laws.
In contrast, the types of "consequential" decisions triggering scrutiny under the law would remain largely the same, with the major exception of the financial context, which has been significantly narrowed to focus on consumer transactions.
The employment context, too, has been narrowed under the bill with a new exception that would embrace a phasing-in approach for automated employment decisions. This is somewhat unusual as this context has been historically the most-scrutinized area of AI adoption.
On the developer side, the bill would narrow the scope of covered entities in a few significant ways. An entirely new exemption in the bill would provide developers with a get-out-of-jail-free card if they meet basic requirements for disclosure while releasing their AI systems with open model weights. The bill would also remove from the definition of developers those who "intentionally and substantially" modify AI systems, meaning only those who initially create covered systems must meet developer requirements. It would further exempt entities covered under the Fair Credit Reporting Act, as well as small-to-medium businesses under a new multitiered scoping requirement.
Two major requirements for both deployers and developers would be removed under the proposal. The mandatory incident reporting requirement, which under the existing law will require notification to the Colorado attorney general, would be eliminated. Also gone would be the duty to protect consumers from known or reasonably foreseeable risks of algorithmic discrimination arising from covered AI systems.
As hinted above, deployers would end up carrying more weight than under the existing law. Still, their obligations would be narrowed in one significant way related to the scope of covered decisions. The proposal would add language making deployer obligations applicable only when covered AI systems are used as the "principal basis" to make a consequential decision, adopting a definition for this term reminiscent of Colorado's privacy rules, under which it applies when there is a lack of "meaningful human involvement."
The most aggressive enhancements to the framework apply primarily to deployers, including a substantially revised and restructured right to appeal consequential decisions, with new terms to unpack such as "competitive" and "time-limited" decisions as well as newly enhanced disclosure requirements for the use of covered AI systems.
One overarching change would push back the effective date of the law by almost a year, to 1 January 2027, though the consumer disclosure requirements for deployers would actually move to an earlier effective date of 1 May 2026.
Is this as good as it gets?
Stakeholder reviews are already mixed.
For example, an article in the Colorado Sun quoted Matthew Scherer of the Center for Democracy and Technology explaining why the new bill receives a mixed review from the civil society perspective.
"Industry got nearly all of the changes it wanted, while public interest groups got only a fraction of what we wanted," he said. "That said, while the bill strips the law down to its foundation, that foundation is still there and it's still strong. Labor, consumer and civil rights groups are still processing, but I think there's an understanding that the tech industry has spent a year trying to make an example out of Colorado and is feeling buoyed by their power in D.C., and this might be the best we can get right now."
Please send feedback, updates and legislative time turners to cobun@iapp.org
This article originally appeared in The Daily Dashboard and U.S. Privacy Digest, free weekly IAPP newsletters. Subscriptions to this and other IAPP newsletters can be found here.