Artificial intelligence governance is maturing by the day, with AI laws, policies, standards and best practices either in development or coming online.

A threshold question for each of these benchmarks is their approach to roles and responsibilities. In an AI marketplace, where long supply chains are common, which business should institute what safeguards? And how should they be held accountable if consumers are harmed? Getting this right is essential for effective AI governance.

Privacy professionals are familiar with the importance of clear roles and responsibilities. An organizing principle for governing personal information is the distinction between controllers and processors. The EU General Data Protection Regulation, comprehensive privacy laws in U.S. states and international standards all make this distinction.

But what about the domain of AI governance, in which identifying and managing additional risk, such as unlawful discrimination, is also front and center?

The state of Colorado offers a compelling answer. In May, it enacted a first-in-the-nation law regulating AI developed and used for "consequential decisions," such as hiring, lending and housing.

The Colorado AI Act distinguishes between deployers and developers of these high-risk tools and outlines parallel obligations for addressing risks. Colorado's partnership model — which reflects an emerging consensus in AI governance — can be a template for policymakers and governance professionals alike.

The Colorado AI Act

The Colorado AI Act seeks to protect consumers from the risks of algorithmic discrimination. Although the AI marketplace is home to different business models, the law targets the organizations closest to these potential harms. These include the deployers that interact with consumers and use AI tools to make "consequential decisions" and the developers that create and provide AI tools directly to deployers for their use.

Unlike long-standing civil rights laws, which seek to remedy discriminatory conduct, the act requires businesses to take affirmative steps to mitigate the risk of these harms occurring in the first place.

Under the Colorado law, deployers and developers each have a duty of care to avoid algorithmic discrimination. They can meet this duty by fulfilling parallel obligations.

Deployers must institute governance programs that identify, document and mitigate risks and align with the U.S. National Institute of Standards and Technology's AI Risk Management Framework, ISO/IEC 42001 or an equivalent standard.

Before using AI tools to make consequential decisions about consumers, deployers must carry out an impact assessment. They are also responsible for informing consumers about their use of AI and responding to consumer rights requests.

Developers have complementary obligations. They must disclose to deployers any known or reasonably foreseeable risks to consumers posed by the AI tools they make available for high-risk uses. Although developers are not required to institute governance programs, they would find it difficult to meet these requirements without doing so.

Developers must also disclose additional information to deployers, such as the type of data the AI was trained on, and to the public, such as the types of AI they sell for high-risk uses.

As in data privacy, where a business can be both a controller and a processor, the Colorado AI Act recognizes that businesses can act in both roles and attempts to streamline duplicative obligations.

Deployer and developer: An emerging consensus

Colorado was the first state to codify the terms "deployer" and "developer" into U.S. law. But these terms are not unique to the Centennial State.

The Colorado AI Act is the culmination of a multistate and bipartisan effort by lawmakers in Connecticut, Virginia, Texas, Colorado and others to develop common AI legislation.

The law's consumer protections can also be traced back to Assembly Bill 331, a bill introduced in California in 2023 whose successor died in the state legislature. A recent report by the Future of Privacy Forum highlights trends in state AI regulation.

Outside state capitals, AI policy proposals commonly distinguish between deployers and developers. The Senate's bipartisan AI roadmap uses these terms, as do multiple Senate bills aimed at governing "high-impact" AI, advancing AI standards, and addressing AI in government procurement.

The EU AI Act, which recently entered into force, distinguishes between deployers and providers of AI systems, assigning specific obligations to each. Australia's recent policy proposal and the Association of Southeast Asian Nations' Guide on AI Governance and Ethics also use deployer and developer language.

Why have policymakers in Colorado and other jurisdictions converged on this approach? Practicality, for starters.

Organizations have different levels of control over and visibility into AI tools and how they are used in high-risk contexts. Deployers determine how AI is configured, implemented and used to make consequential decisions, but they typically don't design AI tools. Developers decide how AI is designed and trained, but they cannot control, reliably know, access or anticipate their customers' data or how their customers use AI to make consequential decisions.

A workable and effective AI governance framework recognizes these distinctions and requires a partnership between both parties to manage AI risk.

Supporting better risk management

The partnership model also supports better AI risk management and aligns with emerging federal benchmarks. The alternative — placing requirements solely on one party — misunderstands AI risks and is in tension with well-established law, particularly in employment.

As NIST has underscored in guidance, harmful bias can be introduced at different points in the AI life cycle, and a deployer's implementation of an AI tool can surface new risks. For instance, a developer's assessment of an AI tool before sale, using generalized, aggregated data, can yield different results than a comparable assessment done by any one deployer under real-world conditions.

The Organization for Economic Cooperation and Development has recognized this dynamic as AI evaluated "in the lab" versus AI evaluated "in the field." A developer's assessment can't substitute for a deployer's and vice versa. Both must do their part.

Federal benchmarks increasingly recognize this point. In its AI governance memorandum, the U.S. Office of Management and Budget directs federal agencies to test "rights-impacting AI" in a "real-world context."

In a landmark consent decree, the U.S. Federal Trade Commission required a deployer to test AI tools in a manner that "materially replicates" the conditions where they will be used.

Developers providing AI to hundreds or thousands of customers generally don't have this information and can't replicate these contexts.

In a blog post, the FTC has likewise warned developers about marketing AI as bias-free. A developer's claims based on testing done in the lab may not meet the FTC's well-established principles for claim substantiation.

Cracks in the partnership

Colorado does an admirable job advancing a partnership model of AI governance. But it is not without its cracks. Under the Colorado AI Act, deployers can exempt themselves from carrying out impact assessments under certain conditions. Although narrow, the exemption raises three issues.

First is false assurance. An impact assessment conducted by a developer cannot capture the risks specific to a deployer's use of a high-risk AI tool. A single impact assessment, done in the lab, cannot reliably anticipate the consequential decisions or context-specific risks of hundreds or thousands of enterprise customers. As a result, deployers and consumers may be using these tools under the assumption that risks have been appropriately mitigated, when they have not.

Second is tension with existing law. Consider the workplace. Employers cannot delegate to a vendor their duty not to discriminate against employees or job candidates, a fact recently reiterated by the U.S. Department of Labor. Under federal law, if an employer uses a tool that results in adverse impacts on a protected group, it is required to scientifically validate the use of the tool. While a vendor's — that is, a developer's — assessments may be helpful, this is ultimately the employer's responsibility.

Another area of tension is existing privacy law. The Colorado Privacy Act requires controllers to conduct impact assessments if their automated decision-making poses a risk of unlawful disparate impact. Businesses using AI to make decisions may be required to assess their tools for risks under the state's privacy law but not the Colorado AI Act, yielding uneven consumer protections.

Third is precedent. In California, lawmakers borrowed and expanded the Colorado AI Act's exemption. They came close to passing AB 2930, a bill that would have governed AI in employment contexts but that allowed some employers to exempt themselves from carrying out impact assessments. The bill banned algorithmic discrimination only if a deployer identified a risk of harm and left it unaddressed. Deployers, however, could have exempted themselves from this ban by not carrying out impact assessments in the first place. Although its goals were laudable, AB 2930's muddled approach to deployer and developer issues likely contributed to its failure to pass the California legislature.

The 'Denver effect'?

Lawmakers around the country increasingly see Colorado's law as a template for future AI regulation. At the same time, the state's leaders are reviewing how to refine the act to ensure its requirements are workable and meaningful.

Whether in Denver, Washington, D.C., or elsewhere, policymakers would do well to preserve and strengthen one of the law's essential elements: the partnership between AI deployers and developers.  

Evangelos Razis, CIPP/E, is senior manager, public policy, for Workday.