As the U.S. Congress weighs blocking states from enacting and enforcing artificial intelligence laws, New York is moving forward with potentially landmark legislation to regulate frontier AI models.
The Responsible AI Safety and Education (RAISE) Act passed the New York Senate 12 June on a near-unanimous vote, sending it to the governor's desk. The bill targets large companies developing AI models that exceed specified computational cost and operations thresholds and carry the capacity to cause harm.
Covered entities would be required to adopt safety and security protocols before a model is released and make the measures available to relevant authorities. The bill also compels developers to conduct annual safety reviews and disclose safety incidents.
The office of Gov. Kathy Hochul, D-N.Y., told the IAPP she will review the legislation. Under New York state law, the governor has 10 days, not including Sundays, to sign or veto a bill or allow it to become law without her signature.
State Assemblymember Alex Bores, D-N.Y., sponsored the bill in the Assembly. He told the IAPP he views the bill as being aligned with New York's other AI efforts, including the Empire AI Consortium research efforts enacted in this year's budget. However, he had not heard from Hochul on whether she would sign the RAISE Act into law.
According to Bores, the bill largely seeks to replicate many of the voluntary commitments several Big Tech companies made during former U.S. President Joe Biden's administration and at the AI Seoul Summit in 2024, such as drafting safety plans and red teaming models. Several AI companies publish their safety plans online.
"Even though they made these commitments in the past, we saw behavior that didn't add up to those commitments," Bores said. "We're holding them to even less than what they promised but establishing some baselines."
The bill puts the onus on developers to anticipate whether their models could cause critical harm, defined here as the death or serious injury of 100 or more people or at least USD1 billion in damages; the creation of chemical, biological or nuclear weapons; or a model acting without meaningful human intervention or helping a person commit a foreseeable crime.
Violating the law would allow the state's attorney general to seek a civil penalty of up to USD10 million for the first offense and up to USD30 million for subsequent infractions. The law is not intended to establish a private right of action, and it does not limit the application of other relevant laws.
The RAISE Act would prevent a repeat of how policymakers handled social media's harms, where no significant legislation has been passed, said Adam Billen, vice president of public policy at Encode AI and a proponent of the bill.
"Today, by passing the RAISE Act, the legislature has shown that it is committed to proactively safeguarding New Yorkers from AI harms, rather than waiting until they’re already here," he said in a statement to the IAPP.
But industry advocates, including the AI Alliance Association, said it is impossible for developers to foresee every harm their products could be used to cause, a requirement the group argues will curb open-source development. Rather than creating safer products, the group indicated, the law will likely result in advanced AI models not being offered in New York at all.
"To the contrary, the NY RAISE Act would create a reporting and compliance bureaucracy that would distract from existing risk identification and mitigation efforts. The 'safety and security protocol,' defined in the Act as a reporting mechanism for transparency, introduces layers of busy work — without clear standards for satisfactory procedures and mitigations," the association, which includes IBM, Meta and Oracle among its members, said in a letter to New York legislative leaders.
Caitlin Andrews is a staff member for the IAPP.