The signing into California law of a bill requiring certain disclosures from artificial intelligence companies could be a bellwether for future safety accountability efforts in the U.S.
The Transparency in Frontier Artificial Intelligence Act, or SB 53, represents a landmark effort to require frontier AI developers to publicize their efforts to make their products safe, including the standards they have incorporated into their frameworks, and creates a method for concerned individuals to report safety incidents to the state.
It could be a sign of more transparency-focused legislation to come. Efforts to create comprehensive state AI laws have struggled in the face of heavy resistance from the technology and business sectors as well as discouragement from the federal government. Plus, some state governors worry about possible effects AI regulation could have on the economy. As a result, state-level regulators could turn their efforts to more targeted approaches, something IAPP Managing Director, Washington, D.C., Cobun Zweifel-Keegan discussed in a recent column.
"I think we're moving past the initial wave of laws in relation to deep fakes and highly regulated industries into this new kind of world of, 'How are the states going to regulate?' Is it going to be the Colorado model of dividing the world into risk-based approach to AI, or is it going to be more of this kind of safety-focused legislation, like SB 53," said Shannon Yavorsky, partner at Orrick, Herrington & Sutcliffe.
SB 53 has its roots in last year's closely watched SB 1047, which Gov. Gavin Newsom, D-Calif., vetoed after resistance from the tech groups that call the state home and even some of California's congressional delegation. But Newsom has shown some willingness to rein in AI, approving bills requiring training data disclosures and watermarks on AI-generated material. He also convened a working group to study ethical AI guardrails, whose recommendations much of SB 53 is based on.
Newsom, in his approval message, said the bill could shape AI regulation beyond California. He urged lawmakers to keep an eye on AI regulation at the federal level and said more work may need to happen if something does pass to ensure alignment.
"Our state's status as a global leader in technology allows us a unique opportunity to provide a blueprint for well-balanced AI policies beyond our borders – especially in the absence of a comprehensive federal AI policy framework and national AI safety standards," he said.
SB 53's sponsor, state Sen. Scott Wiener, D-Calif., said the bill represents a chance to help establish better trust, fairness and accountability standards for AI as it continues to grow and change.
"With a technology as transformative as AI, we have a responsibility to support that innovation while putting in place commonsense guardrails to understand and reduce risk," he said in a press statement. "With this law, California is stepping up, once again, as a global leader on both technology innovation and safety."
The bill requires a "large frontier developer to write, implement, and clearly and conspicuously publish on its internet website a frontier AI framework that applies to the large frontier developer's frontier models." It applies to companies with annual revenues of at least USD500 million. Violators of the law could face a civil penalty of up to USD1 million.
The Office of Emergency Services will create a method for members of the public to report potential critical safety incidents and for covered developers to confidentially submit summaries of potential catastrophic risks associated with their models.
Covered critical safety incidents include loss of control of, or unauthorized access to, a model that could cause death or injury, as well as instances where models use deceptive techniques to subvert monitoring.
SB 53 will protect whistleblowers who raise concerns about frontier models as well; their complaints, as well as reports of critical safety incidents and catastrophic risk assessments, would be exempt from information access laws.
Additionally, the bill establishes a consortium within the Government Operations Agency to develop a framework for creating a public computing cluster. Its job will be to study ethical, equitable and sustainable AI.
Lily Li, a lawyer and founder of Metaverse Law, said the penalty structure is a significant difference between SB 53 and its predecessor. SB 1047 would have allowed the attorney general to impose penalties for harms resulting from AI. It also would have imposed audit requirements under which penalties could be assigned.
"I think there were concerns there too about, are these third-party audit requirements too stringent, and are the penalties going to be too onerous?" she said.
Li said requiring disclosures could lead to private rights of action from consumers, who could use companies' disclosures as evidence for unfair and deceptive trade practices under state or federal laws. She said the whistleblower provisions could in turn create change within companies, "because here you have your own employees that have the right explicitly to complain to regulators and then have job protections and legal protections as well after that."
The bill had broad support from AI safety advocates, who told Newsom in a letter that the measure could help prevent a repeat of the harms that stemmed from leaving social media companies unregulated. The matter is all the more urgent, the group argued, as companies increasingly integrate AI into their other products.
"By taking action now, we can avoid the grave harms that resulted from letting social media run unregulated over our children for over a decade," the letter read.
The bill did not encounter public opposition from companies such as OpenAI and Meta but met with resistance from technology lobbying groups, Politico reports. The Chamber of Progress characterized the bill as imposing "sweeping restrictions" which will "chill" entrepreneurship in the state going forward.
And Meta in particular looks poised to lead political resistance to future AI governance efforts. It plans to launch two super PACs to support AI-friendly candidates — including one targeted at California.
Notably, one large AI company, Anthropic, stood by the bill, saying it would put in place standards it already follows and provide assurance to developers.
Without SB 53, "labs with increasingly powerful models could face growing incentives to dial back their own safety and disclosure programs in order to compete," the company wrote in a blog post.
"But with SB 53, developers can compete while ensuring they remain transparent about AI capabilities that pose risks to public safety, creating a level playing field where disclosure is mandatory, not optional."
Caitlin Andrews is a staff writer for the IAPP.