April showers brought May consumer protections in the Centennial State this year. In fact, Colorado collected a bouquet of new privacy and artificial intelligence laws as its legislative session wrapped up, including tweaks to the state's consumer privacy law.
The most groundbreaking legislative vehicle among them is the Colorado AI Act, a cross-sectoral AI governance law covering the private sector — the first of its kind in the U.S.
Gov. Jared Polis signed the new law Friday 17 May, though he simultaneously released a signing statement contextualizing his approval of the bill. Most of the act's major provisions enter into effect 1 Feb. 2026, though the Colorado legislature reportedly intends to study and possibly revise the bill before that time.
The final framework reflects trends from numerous state bills over the past two years, which were brought together in Connecticut's similar Senate Bill 2. But where Connecticut's bill withered on the vine following the governor's veto threat, the Colorado AI Act is now a signed law.
A substantial factor in a consequential decision
Like many other U.S. AI governance proposals, the Colorado AI Act focuses on automated decision-making systems. The law defines a covered high-risk AI system as one that "when deployed, makes, or is a substantial factor in making a consequential decision."
This "substantial factor" standard notably deviates from requirements in similar automated decision-making legislation. To qualify as a substantial factor, the AI-generated component of the decision must "assist in making a consequential decision" and be "capable of altering the outcome of a consequential decision." These impact and materiality requirements help refine and clarify the scope of covered systems.
A decision is consequential if it has a "material legal or similarly significant effect on the provision or denial to any consumer of, or the cost or terms of: education enrollment or opportunity, employment or an employment opportunity, a financial or lending service, an essential government service, healthcare services, housing, insurance, or legal service." The only defined term in the list is "health care services," which points to a U.S. Code definition: "any services provided by a health care professional, or by any individual working under the supervision of a health care professional, that relate to—(A) the diagnosis, prevention, or treatment of any human disease or impairment; or (B) the assessment or care of the health of human beings."
Note the four aspects through which a decision touching these eight enumerated contexts can become consequential. A consequential decision could be one that has a material effect on providing a covered opportunity or service or on denying it. It could also be one that has a material effect on either the cost or the terms of that opportunity or service.
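For readers who find it easier to see the structure in code, the following minimal sketch, written in Python with names and simplifications of our own rather than the statute's, illustrates how the "consequential decision" and "substantial factor" tests combine into the high-risk determination.

```python
from dataclasses import dataclass

# The eight enumerated contexts in which a decision can be consequential.
CONSEQUENTIAL_CONTEXTS = {
    "education", "employment", "financial or lending service",
    "essential government service", "health care services",
    "housing", "insurance", "legal service",
}

# The four aspects a decision can materially affect.
MATERIAL_EFFECTS = {"provision", "denial", "cost", "terms"}


@dataclass
class DecisionUse:
    context: str        # e.g., "employment"
    effect: str         # e.g., "denial"
    ai_assists: bool    # the AI output assists in making the decision
    ai_can_alter: bool  # the AI output is capable of altering the outcome


def is_consequential(use: DecisionUse) -> bool:
    """Material effect on the provision, denial, cost or terms
    of one of the eight enumerated areas."""
    return use.context in CONSEQUENTIAL_CONTEXTS and use.effect in MATERIAL_EFFECTS


def is_substantial_factor(use: DecisionUse) -> bool:
    """Conjunctive test: the AI-generated component must both assist in
    the decision and be capable of altering its outcome."""
    return use.ai_assists and use.ai_can_alter


def is_high_risk(use: DecisionUse) -> bool:
    """High-risk: the system makes, or is a substantial factor in making,
    a consequential decision. The act's exclusions are omitted here."""
    return is_consequential(use) and is_substantial_factor(use)


# An AI tool that scores job applicants and can change who is hired.
print(is_high_risk(DecisionUse("employment", "denial", True, True)))   # True
# A drafting aid that cannot alter the outcome of the hiring decision.
print(is_high_risk(DecisionUse("employment", "terms", True, False)))   # False
```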
There are important functional, technology-specific exclusions from the definition of high-risk AI system. An AI system is exempt if it is intended to "perform a narrow procedural task" or to "detect decision-making patterns or deviations from prior decision-making patterns and is not intended to replace or influence a previously completed human assessment without sufficient human review," as might be the case with a system that flags deviations from a teacher's prior grading patterns.
The list of 19 expressly excluded technologies ranges from calculators and spreadsheets, to antifraud technologies (unless they use facial recognition, which remains covered), to internet architecture such as domain registries, web hosting and internet caching, to certain types of chatbots governed by an acceptable use policy. Systems that meet these exemptions are not high-risk and thus fall outside the act's scope.
Discrimination is doubly illegal if it is by algorithm
The Colorado AI Act focuses on one potential harm from AI systems: bias and discrimination caused by AI in the context of a consequential decision.
As described below, the act creates duties for developers and deployers alike to use reasonable care to avoid algorithmic discrimination via covered systems.
Algorithmic discrimination means "any condition in which the use of an AI system results in an unlawful differential treatment or impact that disfavors an individual or group of individuals on the basis of their actual or perceived age, color, disability, ethnicity, genetic information, limited proficiency in the English language, national origin, race, religion, reproductive health, sex, veteran status, or other classification protected under Colorado or federal law," subject to some exceptions.
Who is covered?
The law contains general duties for both deployers and developers of high-risk AI systems. A deployer is a person doing business in Colorado that deploys or uses a high-risk AI system, and a developer is a person doing business in Colorado that develops or intentionally and substantially modifies a covered AI system. Colorado law defines a person as "an individual, corporation, business trust, estate, trust, partnership, unincorporated association, or two or more thereof having a joint or common interest, or any other legal or commercial entity."
Unlike U.S. state privacy laws, the Colorado AI Act contains no threshold for its application. That is, there is no minimum number of Colorado consumers who must be impacted — or minimum operating revenue — for its obligations to apply. As a law focused on differential treatment and discriminatory impacts, this volume-agnostic approach stands to reason.
Some small businesses, however, are conditionally exempt from certain deployer responsibilities. If a deployer employs fewer than 50 full-time employees, does not train a high-risk AI system with its own data, limits uses of the high-risk AI system to those previously disclosed, and provides consumers with an impact assessment from the developer, then that deployer can skip required risk-management programs, impact assessments and general notices.
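Because the exemption is conjunctive, a deployer that misses any one of the four conditions owes the full set of deployer obligations. The short sketch below, a simplified illustration in Python using our own field names rather than statutory language, makes that structure explicit.

```python
def qualifies_for_small_deployer_exemption(
    full_time_employees: int,
    trains_system_with_own_data: bool,
    uses_limited_to_disclosed_uses: bool,
    shares_developer_impact_assessment: bool,
) -> bool:
    """All four conditions must hold; failing any one means the deployer
    must run the risk-management program, impact assessments and notices."""
    return (
        full_time_employees < 50
        and not trains_system_with_own_data
        and uses_limited_to_disclosed_uses
        and shares_developer_impact_assessment
    )


# A hypothetical 40-person deployer relying on the developer's documentation.
print(qualifies_for_small_deployer_exemption(40, False, True, True))  # True
```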
General exemptions from scope
The law contains many exemptions typical of other consumer protection legislation, such as those protecting freedom of speech, permitting product recall or repair, or enabling compliance with law enforcement. It does not restrict a developer's, deployer's or other person's ability to protect against security incidents, theft, fraud or other illegal activity, unless such use involves facial recognition technology.
The act's obligations generally do not interfere with federal AI efforts, exempting high-risk AI systems approved under, or in compliance with, federal agency standards that are substantially equivalent to, or more stringent than, the act's own requirements.
The law exempts entities covered under the Health Insurance Portability and Accountability Act if they provide AI-generated recommendations that require a health care provider to take action to implement that recommendation. Lawmakers opted not to link the exemption of AI systems for financial use to the jurisdiction of the Gramm-Leach-Bliley Act. Instead, they more narrowly exempted banks and credit unions subject to regulatory oversight under guidance or regulations that are substantially equivalent to or more stringent than the Colorado AI Act and that require audit and algorithmic discrimination mitigation processes.
Developer duty of care
The antidiscrimination provisions take the form of a duty for developers to "use reasonable care to protect consumers from any known or reasonably foreseeable risks of algorithmic discrimination arising from the intended and contracted uses of the high-risk AI system."
This places at least partial responsibility on developers if their system is used to contribute to decisions that result in a discriminatory outcome. There is only a rebuttable presumption that a developer used reasonable care if it has complied with all the act's substantive obligations. In the event of a discriminatory outcome from a reasonably foreseeable risk, this places the burden on the attorney general to show a lack of reasonable care — unless there is also a simple failure to follow one of the act's other developer obligations. In general, such developer obligations include documentation and disclosures to deployers, consumers and, in some cases, even the Colorado attorney general.
Developer disclosures to deployers
To avoid liability for reasonably foreseeable discriminatory impacts of AI systems, developers must "make available" to deployers a bundle of disclosures and documentation along with the system.
Besides a general statement of the "reasonably foreseeable and known harmful or inappropriate uses" of the system, the packet must include documentation disclosing the following aspects of the covered AI system:
The type of data used for training.
Known limitations, including risks of algorithmic discrimination from intended uses.
Purpose.
Intended benefits and uses.
Other information necessary for the deployer to meet its obligations under the act.
Additional documentation must describe the following aspects of the covered AI system:
Methods of evaluation for performance and mitigation of algorithmic discrimination.
Data governance measures used to cover training data sets.
Data governance measures used to "examine the suitability of data sources, possible biases and appropriate mitigations."
Intended outputs.
Measures to mitigate risks of algorithmic discrimination.
How it should be "used, not be used, and be monitored by an individual" when being used for decision-making.
Any other documentation needed for the deployer to "understand the outputs and monitor the performance."
Finally, the developer must make documentation necessary to complete required impact assessments available to the deployer — and, "to the extent feasible," to other developers. The law specifies such documentation may include "artifacts such as model cards, dataset cards, or other impact assessments."
Enter the age of the algorithmic discrimination statement
The Colorado AI Act requires developers and deployers to post a statement on a website or "in a public use case inventory" that summarizes how the entity manages risks of algorithmic discrimination that may arise from the development, "intentional and substantial modification," or deployment of covered AI systems. Public statements must remain accurate, with frequent updates as necessary.
For developers, the statement must be divided into each "type" of covered AI system the developer develops or modifies and makes available to a deployer or another developer. Developers are also subject to mandatory updates within 90 days every time they make an "intentional and substantial modification" to the system. This phrase is defined to mean "a deliberate change made to an artificial intelligence system that results in any new reasonably foreseeable risk of algorithmic discrimination."
Incident reporting obligations
If a covered system carries known risks or, in fact, causes algorithmic discrimination, the act requires developers to disclose the incident to the Colorado attorney general, as well as all known deployers and other developers of the system. The disclosure must be delivered within 90 days of an incident and must include information about continued risks from the intended uses of the system.
The obligation kicks in whether the incident is found by the developer, via ongoing testing and analysis, or by a deployer. Deployers also have an obligation to report to the developer when a discriminatory decision is made by the system.
Deployers have their own explanation duties. When the outcome of a consequential decision is adverse to the consumer, a deployer must provide the consumer with a statement disclosing the principal reason for the decision, including how and to what degree the high-risk AI system contributed to the decision and the type and source of data used. The consumer must then be afforded an opportunity to correct any incorrect personal data that was relied upon and appeal the decision.
Deployer duty of care
Deployers are not off the hook. Responsibility is explicitly shared between developers and deployers, with similar substantive duties falling on those who put covered AI systems into service.
Deployers must use reasonable care to avoid known or reasonably foreseeable risks of algorithmic discrimination. Just as with developers, there is a rebuttable presumption that complying with the requirements of the act counts as reasonable care.
Mandatory AI governance programs
Deployers are expected to implement a "risk management policy and program" to govern their use of covered systems. This is explicitly required to be an "iterative process planned, implemented, and regularly and systematically reviewed and updated over the life cycle of a high-risk artificial intelligence system, requiring regular, systematic review and updates."
To meet this standard, the policy and program must also "specify and incorporate the principles, processes, and personnel that the deployer uses to identify, document, and mitigate known or reasonably foreseeable risks of algorithmic discrimination."
Governance programs should align with existing standards — explicit reference is made to the National Institute of Standards and Technology's AI Risk Management Framework and the International Organization for Standardization's ISO 42001 — though other nationally or internationally recognized frameworks may also satisfy the requirement. Endorsing a risk-based approach, the act allows the reasonable care reflected in a governance program to account for the size of the deployer, the nature and scope of the AI system, and the sensitivity and volume of data involved.
Detailed impact assessment requirements
Deployers must carry out impact assessments on covered systems annually, as well as within 90 days of each intentional and substantial modification of the system.
The Colorado AI Act goes to great lengths to specify the substance and quality of these assessments. They must include the following information about covered systems:
A statement of purpose, intended use cases, deployment contexts and benefits.
An algorithmic discrimination risk analysis and the steps taken to mitigate the risk.
The categories of data processed as inputs and outputs.
Metrics used to evaluate performance and known limitations.
Transparency measures.
Post-deployment monitoring and user safeguards, "including the oversight, use and learning process established."
If modifications were made to the system, a statement disclosing the extent to which the system "was used in a manner that was consistent with, or varied from, the developer's intended uses."
Deployers must retain impact assessments for three years.
Transparency through notification
Although not solely reliant upon the "notice and choice" framework, the law still requires direct notification to consumers from a deployer when it deploys a high-risk AI system to make, or be a substantial factor in making, a consequential decision regarding that consumer.
This notification must disclose the purpose of the high-risk AI system and the nature of the consequential decision, identify the deployer, describe the high-risk AI system, direct the consumer toward the deployer's more detailed website disclosures, and provide information on the consumer's right to opt out.
Enforcement and rulemaking
The bill becomes enforceable exclusively by the Colorado attorney general on 1 Feb. 2026. Until then, the attorney general's office could be busy; it is authorized to promulgate rules in six enumerated areas.
Among them, the attorney general's office has latitude to put forth regulations regarding the documentation, notices and disclosures developers must provide to the attorney general, deployers and other developers, as well as post publicly. It may further make rules outlining requirements for impact assessments and risk-management policies and programs.
The attorney general's office will be able to craft certain rules that directly influence enforcement proceedings, including determining the requirements for the law's rebuttable presumption of reasonable care for compliant deployers, as well as for any statutory affirmative defenses.
Violations of the Colorado AI Act are treated as violations of Colorado's general consumer protection statute, which provides for a maximum civil penalty of USD20,000 per violation. Importantly, violations are counted separately "with respect to each consumer or transaction involved." Thus, it would take a mere 50 affected Colorado consumers or transactions to reach USD1 million in maximum civil penalties.
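For a quick check of that figure, with each affected consumer or transaction counted as a separate violation, the arithmetic is simply:

$$50 \times \mathrm{USD}\,20{,}000 = \mathrm{USD}\,1{,}000{,}000.$$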