Editor's note: The IAPP is policy neutral. We publish contributed opinion and analysis pieces to enable our members to hear a broad spectrum of views in our domains.
As they prepare for the 2026 legislative session, U.S. state legislators are going back to basics, with the knowledge that the political will to pass expansive AI governance laws — even at the state level — has reached a palpable low point.
Before we turn to other states, we should examine Colorado, where the dust is still settling after the governor called a special legislative session explicitly, in part, to provide an opportunity to modify the Colorado AI Act. But after hundreds of hours of wrangling and negotiations, state Sen. Robert Rodriguez, D-Colo., made the call to release legislators from what the Colorado Sun reported was "starting to feel like a hostage situation." The session thus resulted only in a delay of the law's implementation date to 30 June 2026.
The compressed timeline of the special session certainly contributed to the failure to reach a lasting new compromise on a slimmed-down version of the Colorado AI Act, but the substance of the negotiations may also serve as a bellwether for legislative activity next year.
The final public version of the amendment, before it was modified to reflect a simple delay, is illustrative because it is the closest stakeholders came to a new compromise. It would have made major structural changes to the Colorado AI Act, introducing joint and several liability for developers and deployers, eschewing the existing law's more nuanced responsibilities in favor of a structure that would have reduced developer liability only under certain conditions.
The failed compromise on the Colorado AI Act also showcases how transparency obligations remain the unspoiled core of AI governance laws, even when substantive governance obligations are stripped away. The proposed amendment would have retained, and in some ways reaffirmed, a notice and choice approach to AI governance. Deployers would have borne the responsibility to provide clear notices to individuals affected by covered systems, along with mechanisms for those individuals to correct data used for "algorithmic decisions."
Trust, but clarify
A retreat to notice and choice is also on display in the efforts to craft new AI governance legislation in other states. At IAPP's AI Governance Global conference last week, I was honored to moderate a discussion with state Sen. James Maroney, D-Conn., and Delegate Michelle Lopes Maldonado, D-Va., two of the most active leaders in state efforts to pass consumer protections for certain AI systems.
Both lawmakers also share the distinction of having orchestrated the passage of cross-sectoral AI governance legislation, only to see their respective governors veto the measures. Nevertheless, neither has given up on the idea of baking meaningful guardrails for AI's most serious risks into state law. To that end, both plan to oversee the introduction of multiple bills in the 2026 session.
I was struck during our conversation by the importance of the principle of transparency in the legislators' response to the new political climate. Maroney framed the pivot explicitly. Reflecting on the feeling that he would not be able to pass a broader bill on consequential decisions next year, he said he would focus on "general transparency, just a right to know." Even with plans for multiple targeted bills, Maldonado, too, is focused on the right to know, though she also highlighted the importance of providing meaningful choices to consumers, such as the ability to opt out of having their data used for training.
Layers of transparency: notice, disclose, explain
Transparency may seem like a minimal protection — and one that arguably is already required in many AI contexts to avoid deceptive trade practices. Nevertheless, the principle of transparency is reflected in layers of best practices across the AI life cycle. These layers also show up across legislative efforts.
- Notice of automation. Inform consumers when they are interacting with an AI system, especially when they would expect to be interacting with a human or to be subject to human decisions. See, for example, rules covering business-to-consumer chatbots, such as in Utah and California.
- Disclosure of consequences. Provide an explanation to consumers as to why they are seeing a special notice, alerting them that their rights or opportunities could be on the line. Clear disclosures are a basic requirement when, for example, an automated system is being used for a consequential decision under the Colorado AI Act or other automated decision-making rules.
- Transparent data provenance. When personal data is used to train AI systems, or is ingested during a system's operation, consumers should have the opportunity to find out — and, in some cases, to correct it, delete it or request a recalculation. Some legislation targeting large language models would explicitly require these types of disclosures and data rights. One could also place "content provenance" requirements in this category, such as the bill in Virginia that would require interoperable labeling across platforms for generative AI outputs.
- Explainability. This goes one step further to provide insights to consumers about the factors influencing a decision or output, including how personal data was weighted in the model, again potentially triggering opportunities to correct or delete. We see this in rules to provide explanatory disclosures after an automated decision, including in the recent updates to Connecticut's comprehensive consumer privacy law.
Transparency is a prerequisite for other responsible AI principles, just as it is for the classic fair information practice principles. The manner in which these layers are interpreted and operationalized for AI systems will determine whether transparency also results in meaningful choice and redress. Whether this is determined by the rules legislators are still cooking up, or the policies and procedures companies put in place in the meantime, it is always worth approaching transparency with consumer autonomy in mind.
Thanks to George Washington University Law student Addison Dascher for contributing background research on the proposed modifications to the Colorado AI Act.
Please send feedback, updates and transparency taxonomies to cobun@iapp.org.
Cobun Zweifel-Keegan, CIPP/US, CIPM, is the managing director, Washington, D.C., for the IAPP.
This article originally appeared in The Daily Dashboard and U.S. Privacy Digest, free weekly IAPP newsletters. Subscriptions to this and other IAPP newsletters can be found here.