Virginia lawmakers are the first this year to pass a bill regulating how certain types of artificial intelligence are used.

The High-Risk Artificial Intelligence Developer and Deployer Act shares hallmarks with a bill passed in Colorado last year and with rules being hammered out in California. It aims to define which kinds of machine-based systems count as "high risk," such as products that help make consequential decisions around education, employment, financial services, health care and legal services, and sets requirements for how they can be used. It passed the Virginia Senate 20 February and cleared the House again after the lower chamber agreed to some amendments.

But a few key differences have drawn resistance from civil society groups, which often support bills looking to put guardrails around AI technology, as well as from groups arguing AI regulation hurts innovation. The bill will also be a litmus test, coming after President Donald Trump's administration made clear it is skeptical of regulating AI.

"We have spent over a year working on this bill with tens and tens of stakeholders," said Del. Michelle Lopes Maldonado, D-Va., the bill's primary sponsor, speaking at a 12. Feb. General Law and Technology Committee meeting, "and every single one of them that approached you got something out of this bill, and every one of them did not get something into this bill."

The bill's fate is uncertain. Gov. Glenn Youngkin has not publicly indicated his position on it; in creating an advisory task force last year, he said AI presents both risks and opportunities, and he has put standards in place for the technology's use in state government. He has until 24 March to sign, veto or return the bill with amendments.

The governor's office did not return a request for comment. His statement on the end of the session did not allude to any specific bills.

Should the bill become law, it would take effect 1 July 2026.

What is in the bill

Developers are required to disclose the risks, limitations and purpose of AI products categorized as high-risk systems, along with a summary of how each system was evaluated for performance and for mitigation of algorithmic discrimination, before making the products available.

Deployers are expected to use a "reasonable duty of care" to protect consumers from any foreseeable risks of algorithmic discrimination. Anyone using AI to make consequential decisions must have a risk management policy in place. As in Colorado, those whose practices conform to the National Institute of Standards and Technology's AI Risk Management Framework, the ISO/IEC 42001 standard or another recognized framework are considered to be in compliance with the bill.

Unlike the Colorado law, the bill does not include access to governmental services such as Medicaid and SNAP among its high-risk categories, and it does not require that consumers be informed if an incident occurs. Documentation does not need to include a summary of training data, but impact assessments must disclose what data was used if the system was fine-tuned by the deployer.

The Virginia attorney general would be given the authority to enforce the bill should it become law. Violations could carry a civil penalty of at least USD1,000 and no more than USD10,000.

Reactions

Supporters of the bill cast it as a critical step in bringing accountability to AI developers and deployers. The Transparency Coalition, a nonprofit focused on the risks of generative AI and related policies, said the measure would bring clarity and accountability to AI developers.

"The broad definition of 'algorithmic discrimination' provides protections that benefits all Virginians," wrote Adam Cappio, a technical policy analyst for the Coalition. "By specifying the NIST and ISO standards for risk management, the bill makes clear to the industry what is required to be in compliance with the law. While we believe the bill may shift too many obligations away from developers and on to deployers, no bill is perfect, and we believe that, in practice, the deployer obligations are reasonable."

Resistance came from the technology industry, with the Chamber of Progress arguing the bill would hurt AI innovation without offering significant civil rights protections. Brianna January, the association's director of state and local government relations, argued it is often difficult to determine whether discrimination occurs because of the data a system is trained on or because of the human signing off on its decision.

"Regardless of origins, there must be avenues to address circumstances of discrimination that are consistent whether the abuse is online or offline," she wrote.

The bill was also notably critiqued by technology-focused civil society groups such as the Electronic Privacy Information Center, which said that regulating only AI systems intended to make automated consequential decisions, and exempting the insurance and health care industries from oversight, makes the bill "weak" in comparison to other efforts.

"H.B. 2094's many loopholes will allow companies to decide that their use of AI does not fit within the uses covered by the bill and that they do not have to comply," the nonprofit wrote in a statement on the bill's passage.

The bill did garner support from some business sectors by the end of the legislative session, with representatives from Verizon and the Broadband Association of Virginia telling members of the General Laws and Technology Committee they supported the amended version of the bill.

Still, the bill passed by tight margins, with a 21-19 vote in the Senate and a 51-47 vote in the House.

Other states have been moving ahead with AI legislation of their own. Connecticut is reintroducing an AI guardrails bill similar to the one that died last year. Some are tackling the issue anew; New Mexico, for example, has introduced an anti-algorithmic discrimination bill.

The bill Virginia passed last week may not be in its final form; Maldonado told her colleagues in February she was open to additional changes if needed. Those would have to be addressed 2 April, when the legislature reconvenes to consider proposed changes or vetoed legislation.

Caitlin Andrews is a staff writer for the IAPP.