The 2023-24 legislative session in California saw a flurry of privacy and artificial intelligence bills signed into law, ranging in scope from amending a single definition to creating broad new requirements in areas like student privacy and dataset transparency. However, one thread connects these new laws: increasing privacy protection for all Californians. California has a long history of leading legislative efforts at the state level, especially in the privacy and technology realms; it was the first state to enact an omnibus consumer privacy law with the California Consumer Privacy Act and the first to create an entirely privacy-focused agency with the California Privacy Protection Agency.

Expanding definitions

One way California is expanding privacy protection is by broadening the scope of its existing laws. To that end, Gov. Gavin Newsom signed two bills that amended the definition of personal information in the CCPA, as amended by the California Privacy Rights Act.

Assembly Bill 1008 adds that personal information can exist in multiple formats, including abstract digital formats like AI systems "that are capable of outputting personal information." It also clarifies that biometric information collected without a consumer's knowledge is not publicly available, and thus falls within the definition of personal information. Senate Bill 1223, which follows in the footsteps of Colorado's House Bill 24-1058, states sensitive personal information includes a consumer's neural data.

Both new laws have received a mixed response. AB 1008 has raised serious questions among industry professionals about, for example, whether AI models themselves contain personal information. Some commentators praised SB 1223 for safeguarding consumers' neural data from emerging neurotechnologies, while others argued its ambiguous phrasing does not go far enough to protect consumer data.

Privacy advocates like the Neurorights Foundation welcome the increased protections, but some argue the laws may need further refinement to address the rapidly evolving landscape of privacy, AI and neurotechnology. The tech industry, particularly companies developing these technologies, is likely to face compliance challenges as it navigates these new regulations.

On a smaller and more specific scale, when a business acquires consumers' personal information through a merger or acquisition, AB 1824 requires the acquiring business to honor any opt-out requests those consumers previously made. This closes a gap in previous legislation and prevents the potential erosion of privacy rights due to business restructuring. While the law may create some administrative challenges for companies involved in mergers and acquisitions, it aligns with the broader intent of California's privacy laws to give consumers greater control over their personal information.

Children's privacy

This year's session also saw several laws aimed at protecting children's privacy. AB 801 extends the state's student privacy protections, requiring operators of online services to delete a student's information upon request once the student is no longer enrolled with a local educational agency. However, its impact will likely pale in comparison to SB 976, the Protecting Our Kids from Social Media Addiction Act, which prohibits providing addictive feeds to minors, or sending them notifications during school hours or at night, without parental consent.

California's SB 976 is similar to the Stop Addictive Feeds Exploitation for Kids Act, signed into law in New York in June 2024. Currently, SB 976 applies only to known minors, but by 1 Jan. 2027, online service providers must obtain parental consent or otherwise "reasonably determine" that a user is not a minor before allowing the user to access an addictive feed. The law also mandates that operators implement parental controls, including a default limit of one hour of feed access per day.

These laws will have far-reaching implications for kids and the online service providers subject to them. Despite broad bipartisan support for both bills, privacy advocates are concerned SB 976 is vulnerable to constitutional challenges, such as claims that it infringes on minors' free speech rights on social media. California's Age-Appropriate Design Code Act has already faced such a challenge, which led a court to enjoin part of the law. Many, including the American Civil Liberties Union, adult websites and experts in the field, have questioned the practicality and privacy implications of age verification systems. However, supporters maintain these laws are necessary steps to protect children from the potential harms of unchecked social media use and data collection.

Although these bills face scrutiny on all sides, they nevertheless address growing concerns about the privacy and well-being of minors, much like age-verification laws in Kansas, Idaho and Florida. Whether by creating dedicated services for teens, like Instagram's Teen Accounts, or by raising the minimum age required to use a service, operators may have to modify current frameworks to comply with the requirements of SB 976.

Legislating AI

California's batch of new AI-related laws shows its legislature will stay vigilant as the field of AI continues to grow and mature. As a foundation, AB 2885 defines artificial intelligence and automated decision systems to provide a common understanding of what these terms signify in California law.

The definition of AI is relatively broad, encompassing any "engineered or machine-based system that varies in its level of autonomy and that can, for explicit or implicit objectives, infer from the input it receives how to generate outputs that can influence physical or virtual environments."

This definition closely tracks the Organisation for Economic Co-operation and Development's definition of an AI system, which the EU AI Act and the U.S. National Institute of Standards and Technology also draw upon. AB 2885 separately defines an automated decision system as "a computational process derived from machine learning, statistical modeling, data analytics, or artificial intelligence that issues simplified output, including a score, classification, or recommendation, that is used to assist or replace human discretionary decisionmaking and materially impacts natural persons."

The AI definition also mirrors the intent of Recital 12 of the EU AI Act, which aims to exclude simpler legacy software systems from regulatory burdens. Industry experts have long struggled with the patchwork of privacy laws across the U.S. and globally, so standardizing this definition across California law provides some clarity for businesses that must determine which laws apply to their systems.

Further advancing its mission of consumer protection this year, California also passed some broad laws requiring transparency from AI developers and companies. SB 942 requires content produced by generative AI to be disclosed as created or altered by AI in a manner that is "clear, conspicuous, appropriate for the medium of the content, and understandable to a reasonable person."

The bill also requires covered generative AI providers to make a publicly accessible tool available to detect whether content has been generated or altered by generative AI. While only providers of generative AI tools face an immediate compliance obligation, some tools could temporarily or permanently exit the California market if their providers cannot, or do not wish to, comply with the bill.

AB 2013 requires the developer of any generative AI tool available to Californians to post documentation about the data used to train the AI system. Previously undisclosed information would have to be made publicly available, such as the sources or owners of the datasets; whether the datasets include any data protected by copyright, trademark or patent; whether the datasets were purchased or licensed; and whether the datasets include personal information.

Generative AI developers rarely disclose this information, and the inclusion of unlicensed, publicly available data in training sets is at the heart of several ongoing court cases. For organizations using, but not developing, covered AI systems, this documentation provides greater transparency into available systems and allows them to make more informed decisions about, for example, whether a given generative AI system might expose them to liability for inadvertently infringing trademarked or copyrighted materials.

California lawmakers have also taken proactive steps in the more targeted area of integrating AI into classrooms. In an effort to increase working knowledge and develop guidance for schools and educators, SB 1288 provides for the creation of a working group that will investigate AI-enabled teaching and learning practices.

See the California Privacy and AI Legislation Tracker for a full list of privacy and AI bills adopted and not adopted from 2019-2024.

Looking forward

California is clearly committed to keeping pace with technology, both by passing new laws and by amending current ones to proactively address privacy and AI governance concerns in the tech industry at large.

Notably, Newsom vetoed the comprehensive Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, citing reservations about "stringent standards (for) even the most basic of functions" and a lack of an "empirical trajectory analysis" upon which to base the legislation's protocols and requirements, but his statement explicitly left the door open for other bills that would address these concerns.

Indeed, he stated he recognizes the need for legislation in this area before any potential "major catastrophe" occurs. Additionally, the multiple smaller AI bills he signed show that governing AI, and specifically protecting California constituents from potential negative impacts, is clearly on the legislature's collective mind.

As these laws come into effect in 2025 and 2026, they will likely shape conversations on AI regulation, both because the state is home to the tech hub of Silicon Valley and because of the California effect, in which the strictest legislation in an area sets a precedent that later developments, including in other U.S. states, respond to and build upon.

C. Kibby is a Westin Research Fellow and Richard Sentinella is the AI Governance Research Fellow for the IAPP.