Transparency and disclosures will be required of frontier AI model developers in New York after Gov. Kathy Hochul, D-N.Y., signed the Responsible AI Safety and Education Act into law 19 Dec.

The final text approved by Hochul dials back developer requirements compared to the text passed by the New York State Legislature this summer, but it tracks closely with elements of California's Transparency in Frontier Artificial Intelligence Act. The changes came about after pressure from the technology industry collided with bill sponsors' desires to put stronger guardrails on AI.

The law covers companies with more than USD500 million in revenue. It takes effect 1 Jan. 2027.


"This law builds on California's recently adopted framework, creating a unified benchmark among the country's leading tech states as the federal government lags behind, failing to implement common-sense regulations that protect the public," Hochul said in a statement.

In the same statement, state Assemblymember Alex Bores, D-N.Y., added the law "raised the floor for what AI safety legislation can look like" while setting the foundation "for greater disclosure, learning, and legislative action in years to come."

The bill is the first major piece of state AI legislation to become law in the face of the recent White House executive order seeking to limit the impact of state AI laws through U.S. Department of Justice lawsuits and decreased federal broadband funding.

The changes

The original RAISE Act required covered entities to adopt safety and security protocols before a model is released and provide those protocols to relevant authorities. It also compelled frontier model developers to conduct annual safety reviews and disclose safety incidents to the government within 72 hours.

The original bill also called on developers to anticipate the possibility of critical harm stemming from their products, and it allowed the New York attorney general to seek penalties of up to USD10 million for a first offense and up to USD30 million for subsequent infractions.

With the final text, the 72-hour reporting period remains, and developers are still required to publish safety plans, although they are no longer required to do so before releasing models.

The final text also requires those plans to detail how developers will handle various risks. A new AI office will be established in the Department of Finance to monitor AI development. Fines have been reduced to a maximum of USD1 million for initial infractions and USD3 million for subsequent violations.

The road to passage

The state legislature originally passed the RAISE Act in June, but Hochul used nearly all the time available to her before signing it.

Under New York statutes, the governor has 10 days to act on a bill once they request it be sent to their desk while the Legislature is in session; out of session, the window for action is 30 days. If Hochul had not acted on the bill by the end of the year, it would have been pocket vetoed.

The signing delay highlights the delicate balance between safety and innovation many state legislatures are grappling with as they consider AI proposals. In Hochul's press release, her office acknowledged how AI is ushering in "groundbreaking scientific advances leading to life-changing medicines, unlocking new creative potential, and automating mundane tasks" while noting "the potential for serious risks."

In a press release, state Sen. Andrew Gounardes, D-N.Y., the bill sponsor in the upper chamber, characterized the law as an "enormous win for the safety of our communities, the growth of our economy and the future of our society."

"The RAISE Act lays the groundwork for a world where AI innovation makes life better instead of putting it at risk," he added. "With this law, we make clear that tech innovation and safety don’t have to be at odds. In New York, we can lead in both.”

New York's alignment with California on AI safety may lift some perceived patchwork burdens off major AI developers. OpenAI and Anthropic expressed support for the RAISE Act, with both indicating to The New York Times that having similar legislation in two large state economies is good for the policy landscape overall.

"While we continue to believe a single national safety standard for frontier A.I. models established by federal legislation remains the best way to protect people and support innovation, the combination of the Empire State with the Golden State is a big step in the right direction," OpenAI Chief Global Affairs Officer Chris Lehane told the NYT.

For AI safety advocates, the measure may be seen as the floor for AI regulation.

"Transparency is a baseline for any form of oversight and accountability for the development and deployment of AI tools. We applaud California and New York on the passage of bills that take initial steps towards the mitigation of some of the harms AI systems can engender," Center for Democracy and Technology Director for State Engagement Travis Hall said in a statement to the IAPP. "But they should be seen as the starting point, not the finish line for legislation."

The White House executive order is not the only avenue being pursued to tamp down laws — at the state or federal level — that seek to put guardrails in place.

Lobbying groups representing tech companies resisted the RAISE Act and are promising to ramp up pressure on lawmakers putting guardrails on AI. CNBC reported in November that a bipartisan super PAC, Leading the Future, would target state Assemblymember Bores during his U.S. congressional campaign. The PAC called the prior version of the bill a "clear example of the patchwork, uninformed, and bureaucratic state laws that would slow American progress and open the door for China to win the global race for AI leadership."

Caitlin Andrews is a staff writer for the IAPP.