The first 2025 state bill aiming to influence the U.S. artificial intelligence governance landscape has been defeated after Gov. Glenn Youngkin, R-Va., vetoed a bill that would have defined certain types of AI as "high risk" and placed some transparency and reporting requirements on their usage.
House Bill 2094, the High-Risk Artificial Intelligence Developer and Deployer Act, is one of several 2025 state bills looking to put obligations on the private sector's AI use and development.
The bill would have required developers to disclose the risks and limits of AI used in consequential decision-making around education, employment, financial services, health care and legal services, similar to California and Colorado legislation, and to make efforts to prevent algorithmic discrimination. Those uses would have required deployers to have a risk management plan in place or follow a set of industry-supported AI frameworks.
Gov. Youngkin stressed in his veto message that Virginia has taken steps to ensure AI is used responsibly in the state government. Those steps include establishing safeguards and assembling a team of experts to work with his administration on specific governance challenges.
However, he said the regulatory framework of HB 2094 would stifle AI innovation in the state, a common criticism that a range of stakeholders have raised against Virginia's bill and similar proposals around the U.S. seeking to address AI governance and safety.
"There are many laws currently in place that protect consumers and place responsibilities on companies relating to discriminatory practices, privacy, data use, libel, and more," Youngkin said in his message. "HB 2094's rigid framework fails to account for the rapidly evolving and fast-moving nature of the AI industry and puts an especially onerous burden on smaller firms and startups that lack large legal compliance departments."
Youngkin had the option to send the bill back to the legislature with amendments instead of a straight veto. And despite prior state lawmaker approval, the bill would need two-thirds votes in both legislative chambers to override the veto. HB 2094 passed with a two-vote margin in the Senate and a four-vote margin in the House.
The veto could signal a rough road ahead for those looking to pass comprehensive AI legislation. Virginia's bill had fewer requirements than the Colorado AI Act, which led to criticism by some advocacy groups alongside industry supporters.
"The proposed AI bill in Virginia contained too many loopholes and exemptions to adequately protect Virginians from the harms of discriminatory, unproven AI systems, so EPIC was pleased to see it vetoed," Electronic Privacy Information Center Law Fellow Kara Williams told the IAPP. "States considering AI oversight must not repeat Virginia's mistakes and ensure that their legislation does not contain loopholes that end up exempting the very AI systems that the law is meant to regulate."
The Transparency Coalition argued the bill's provisions around transparency provided a good foundation for more regulation in the future.
"Transparency and accountability on the part of AI developers and deployers is a critical component for society to trust in and gain the full benefits of AI," Transparency Coalition co-founder Jai Jaisimha told the IAPP. "We hope that both the legislature and the people of Virginia will move forward with renewed energy to re-address this important issue in upcoming sessions."
Last year, only three states — California, Colorado and Utah — passed AI governance-related bills. More targeted measures, including those looking at deepfakes and AI's use in elections, have received more consideration.
Broader alignment
Youngkin's decision also aligns with the White House and Republican-controlled U.S. Congress, which are more focused on promoting AI innovation.
The White House is not keen to adopt the safety-focused approach taken by EU lawmakers while signaling intentions to avoid regulatory hurdles challenging AI use and development. Congress has yet to act on broader legislation governing AI despite years of discussion. It remains to be seen if the White House will heed recent calls from some AI builders to put forth limited-scope legislation which would preempt state legislation.
Appearing on a recent Washington Post Live event, U.S. House Committee on Energy and Commerce Chair Brett Guthrie, R-Ky., echoed the White House's goal to avoid regulation that hampers growth, while simultaneously being mindful of data protection needs.
"We have to be careful how our data is used, but we also don't want to overregulate and destroy the AI industry," Guthrie said. "We have to be careful to know we want our data protected, but also make sure we don't overprotect so that industry moves somewhere else."
Guthrie added the approach to "regulate like Europe or California regulates" puts the U.S. "in a position where we are not competitive."
Caitlin Andrews is a staff writer for the IAPP.