Editor's note: The IAPP is policy neutral. We publish contributed opinion and analysis pieces to enable our members to hear a broad spectrum of views in our domains.
When the EU's AI Act took effect in August 2024 as the first comprehensive framework for regulating artificial intelligence, it was anticipated to catalyze other AI governance frameworks in much the same way the EU General Data Protection Regulation inspired privacy laws around the world, such as the California Consumer Privacy Act.
California has once again followed suit with SB 53, known as the Transparency in Frontier Artificial Intelligence Act, which was signed into law 29 Sept. However, most organizations will not need to worry about SB 53 compliance as the law is written today, because it applies exclusively to the largest and most powerful AI models and their developers.
Scope
The scope is the starkest contrast between the EU and California laws. The EU AI Act casts a wide net by regulating the entire AI ecosystem, from developers of AI systems, known as "providers," to organizations that use an AI system in their operations, known as "deployers." It also singles out general-purpose AI models trained using more than 10^25 floating-point operations, which are presumed to pose systemic risk.
Senate Bill 53, on the other hand, applies to "frontier developers" and "large frontier developers," both of which train, or initiate the training of, a "frontier model," defined as a "foundation model that was trained using a quantity of computing power greater than 10^26 integer or floating-point operations," including the computing power used in "subsequent fine-tuning, reinforcement learning, or other material modifications."
For an AI model to qualify as a foundation model, it must be trained on a broad data set, designed for generality of output, and adaptable to a wide range of distinctive tasks. A frontier developer qualifies as a large frontier developer when it, together with its affiliates, collectively had annual gross revenues exceeding USD500 million in the preceding calendar year.
Significantly, SB 53 is very unlikely to apply to organizations that would be considered deployers under the EU AI Act. When an organization integrates a frontier model into an application via the model's application programming interface, for example, it is not training a new model but calling an existing one. Even if a deployer organization subsequently modifies the model, for example by fine-tuning it, the modification would have to be "material," and it is unrealistic that such a modification would push the organization over the 10^26 FLOPs threshold.
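To make the scoping logic concrete, the thresholds above can be read as a simple decision rule. The sketch below is purely illustrative: the function and its inputs are hypothetical, the statute's other criteria (such as the full foundation-model definition) are omitted, and nothing here is a compliance determination.

```python
# Illustrative sketch of SB 53's scoping thresholds; not a compliance tool.
# Names and inputs are hypothetical, and statutory nuances are omitted.

FRONTIER_FLOPS_THRESHOLD = 10**26          # training compute, incl. material modifications
LARGE_DEVELOPER_REVENUE_USD = 500_000_000  # annual gross revenue, with affiliates


def classify_developer(training_flops: float, annual_revenue_usd: float) -> str:
    """Return the SB 53 category an organization would likely fall into."""
    if training_flops <= FRONTIER_FLOPS_THRESHOLD:
        return "out of scope"  # e.g., API deployers and typical fine-tuners
    if annual_revenue_usd > LARGE_DEVELOPER_REVENUE_USD:
        return "large frontier developer"
    return "frontier developer"


# A deployer fine-tuning via an API uses nowhere near 10^26 FLOPs:
print(classify_developer(training_flops=1e22, annual_revenue_usd=2e9))  # out of scope
print(classify_developer(training_flops=2e26, annual_revenue_usd=7e8))  # large frontier developer
```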
Risk management
Both SB 53 and the EU AI Act require formalized risk management processes and documentation. SB 53 requires large frontier developers to publish on their websites a "frontier AI framework" documenting the organization's approach to "catastrophic risk" management, as well as transparency reports communicating each frontier model's intended uses.
A "catastrophic risk" is the "foreseeable and material risk that a frontier developer's development, storage, use, or deployment of a frontier model will materially contribute to the death of, or serious injury to, more than 50 people or cause more than one billion dollars in damage to, or loss of, property arising from a single incident." It explicitly covers risks including providing expert-level assistance in creating chemical, biological, radiological or nuclear weapons, engaging in cyberattacks or physical crimes committed by a human, or evading human control.
In contrast, the EU AI Act imposes more detailed, prescriptive risk management requirements on high-risk AI systems, covering training data quality, technical documentation, and continuous monitoring and reporting mechanisms. Under the act, a high-risk AI system must also undergo a conformity assessment, which demonstrates that the system meets the legal requirements, before it is placed on the market.
Incident reporting and enforcement
Incident reporting requirements vary greatly between the two AI governance frameworks. Under SB 53, organizations must report "critical safety incidents" to the California Governor's Office of Emergency Services within 15 days, or within 24 hours if the incident poses an imminent risk of death or serious physical injury. The EU AI Act requires providers of high-risk AI systems to report "serious incidents" as soon as the provider establishes that the system caused, or was reasonably likely to have caused, a serious incident.
The EU AI Act defines "serious incident" much more broadly than SB 53 defines "critical safety incident," accounting for factors like environmental damage and the infringement of fundamental rights.
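SB 53's two reporting windows above can be expressed as a single rule. This is a rough sketch with a hypothetical helper; determining when an incident was discovered and whether harm is imminent are legal judgments the code cannot make, and the EU AI Act's separate rules are not modeled.

```python
# Rough sketch of SB 53's reporting windows (hypothetical helper).
from datetime import datetime, timedelta


def sb53_report_deadline(discovered_at: datetime, imminent_harm: bool) -> datetime:
    """Latest time to notify the California Office of Emergency Services."""
    window = timedelta(hours=24) if imminent_harm else timedelta(days=15)
    return discovered_at + window


print(sb53_report_deadline(datetime(2026, 3, 1, 9, 0), imminent_harm=True))
# 2026-03-02 09:00:00
```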
Both frameworks have teeth when it comes to enforcement and penalties for noncompliance. SB 53 grants the California attorney general exclusive authority to bring civil actions against large frontier developers, with penalties of up to USD1 million per violation. Violations include failing to publish required disclosures, misrepresenting frontier model risks or compliance with the frontier AI framework, failing to comply with that framework, or improperly reporting incidents. Enforcement is scoped only to large frontier developers; the law is silent on enforcement against frontier developers that fall below that threshold.
Penalties under the EU AI Act, on the other hand, depend on which part of the act has been violated. Violating the act's prohibited AI practices can cost an organization up to 35 million euros or 7% of its total worldwide annual turnover for the preceding financial year, whichever is higher. Penalties for violations of most of the act's other obligations can reach 15 million euros or 3% of total worldwide annual turnover, whichever is higher. And if an organization responds to a regulatory request with false, incomplete or misleading information, the penalty can reach 7.5 million euros or 1% of total worldwide annual turnover, whichever is higher.
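Because each EU tier is "whichever is higher" of a fixed amount and a share of turnover, the ceiling reduces to a simple maximum. The sketch below uses hypothetical tier labels as shorthand and says nothing about how regulators set actual fines within a ceiling.

```python
# Illustrative EU AI Act penalty ceilings: the higher of a fixed amount
# and a percentage of total worldwide annual turnover. Tier labels are
# hypothetical shorthand, not terms from the act.

PENALTY_TIERS_EUR = {
    "prohibited_practices": (35_000_000, 0.07),
    "other_obligations": (15_000_000, 0.03),
    "misleading_information": (7_500_000, 0.01),
}


def penalty_ceiling_eur(tier: str, annual_turnover_eur: float) -> float:
    fixed, share = PENALTY_TIERS_EUR[tier]
    return max(fixed, share * annual_turnover_eur)


# For EUR 10 billion in turnover, 7% (EUR 700 million) exceeds the
# EUR 35 million floor, so the ceiling is 700 million euros:
print(penalty_ceiling_eur("prohibited_practices", 10_000_000_000))  # 700000000.0
```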
Whistleblower protections
Both frameworks provide significant whistleblower protections. SB 53 imposes different requirements on frontier developers and large frontier developers, with stricter standards for the latter. Both types of frontier developers can be subject to civil actions or administrative proceedings for violations, with remedies including attorney's fees and injunctive relief. Starting in 2027, the California attorney general will publish aggregated and anonymized annual reports on whistleblower activity.
The EU AI Act leverages the broader EU Whistleblower Directive, which requires internal and external reporting channels and strong anti-retaliation protections. The directive will explicitly cover reports of AI Act violations beginning 2 Aug. 2026, but whistleblowers may already benefit from its protections if they report relevant concerns under other categories already in scope, such as product safety, consumer protection or data protection.
Achieving compliance
SB 53 and the EU AI Act are both AI governance frameworks, but the similarities do not extend much further. Organizations within scope of both laws will need AI governance programs that integrate the two frameworks and should begin enhancing those programs now to account for both and ensure international compliance.
Haley Fine, CIPP/E, is associate general counsel, privacy at SiriusXM.