Editor's note: This article is one of a two-part series exploring the state data privacy and sectoral laws and AI bills in the 2024 U.S. state legislative session. The authors also discussed the year in state privacy with IAPP Editorial Director Jedidiah Bracy.

The 2024 state legislative session brought with it not only more comprehensive consumer data privacy laws, but also more bills and laws seeking to regulate specific types of data, such as consumer health data; specific industries, such as data brokers; and specific technologies, such as artificial intelligence systems.

Although some sectoral laws, such as the Illinois Biometric Information Privacy Act, have been on the books for many years, the rapid emergence of sectoral-based privacy and technology bills and laws began in earnest at the end of the 2022 legislative session with the enactment of the California Age-Appropriate Design Code Act. The following year saw numerous sectoral privacy bills passed into law, including Washington state's My Health My Data Act, the Oregon and Texas data broker laws, and consumer health and children's privacy amendments to the Connecticut Data Privacy Act.

These trends continued in 2024, with children's privacy and safety bills receiving the most attention as lawmakers across the country and party lines voiced strong concerns about how technology, particularly social media, impacts the mental health of teens and children. Further, if the avalanche of state data privacy and sectoral laws was not enough for privacy professionals to handle, 2024 saw an influx of hundreds of AI bills, which the IAPP's own research indicates companies are looking to privacy pros to manage.

Children's data: The search for a constitutional approach

The regulation of children's access to and use of the internet continues to be a hot-button issue for state lawmakers. The topic often invokes visceral reactions from lawmakers, parents and companies as they struggle to find a framework that meets concerns while also passing constitutional muster.

Drafting child-focused legislation involves balancing interests, rights and obligations among four actors: parents/guardians, children, the companies that provide online products and services to children, and the government. Under laws such as the federal Children's Online Privacy Protection Act, that balance weighs in favor of parents/guardians, who must provide verified parental consent before the personal data of their children under 13 years old is processed.

Some of the recent state bills/laws, particularly those focused on social media companies, extend that same parental consent structure to children under 18. However, as middle schoolers mature into high schoolers, the balance of interests between the parent and child becomes more nuanced. That is particularly true for teens who are searching for online communities, such as LGBTQ+ forums their parents may disapprove of.

Other laws have eschewed the parent/child balancing of interests in favor of requiring companies to determine the best interests of the child and design their services accordingly. However, these laws are typically enforced by state attorneys general, so it is ultimately left up to the government to determine what constitutes best interests. That determination can change dramatically depending on the state and attorney general at issue, leaving companies with a compliance headache.

A related issue is the scope of these laws and the distinction between regulating data processing activities, such as companies collecting only the personal data necessary to provide services, and regulating safety issues, e.g., requiring companies to prevent harmful content from reaching children. Lawmakers have also struggled with whether to require businesses to verify their users' ages in order to apply enhanced protections for children. While age verification can help provide young users with safer online experiences, it can impede all users' ability to access lawful information and introduce new privacy risks. Both of these issues have been the subject of First Amendment scrutiny by the courts.

Age-appropriate design code acts

Although there are child-focused data laws, such as California's Erasure Law and student online personal information-related laws, that predate 2022, the modern age of such laws begins with the passage of the California Age-Appropriate Design Code Act in 2022. While the AADCA passed unanimously out of the California legislature, it was immediately met with a lawsuit filed by internet trade association NetChoice, which claimed the law was unconstitutional.

In September 2023, the U.S. District Court for the Northern District of California granted NetChoice's motion for preliminary injunction, finding it was likely to succeed on its claim that the vast majority of the law is unconstitutional and that the constitutional provisions cannot be severed from the unconstitutional ones. The California attorney general's office subsequently appealed to the 9th U.S. Circuit Court of Appeals.

In August 2024, the 9th U.S. Circuit Court of Appeals affirmed the district court's injunction, in part, finding the AADCA's data protection impact assessment provision unconstitutional. The appellate court remanded the case to the district court to consider NetChoice's challenges to the act's other provisions and whether the law's constitutional provisions could be severed from its unconstitutional provisions. Importantly, the appellate court distinguished between assessment requirements in laws that only seek to regulate data processing, such as the California Consumer Privacy Act, and laws that seek to regulate content, such as the AADCA, concluding that the former are more likely to survive constitutional scrutiny. The court thereby provided a roadmap to lawmakers for drafting constitutional children's privacy laws.

Undeterred by the constitutional issues surrounding the AADCA framework, this year Maryland lawmakers passed their own AADCA-based bill: the Maryland Age-Appropriate Design Code Act, House Bill 603 and Senate Bill 571, also known as the Maryland Kids Code. It is similar to the AADCA, but Maryland lawmakers made modifications in an attempt to avoid a constitutional challenge.

Like its California counterpart, Maryland's law requires covered businesses to conduct DPIAs, but the topics businesses must consider were narrowed. However, a plaintiff may still argue at least one topic is a proxy for content-based regulation. The law also restricts how businesses can collect and process the personal data of children under 18 years of age, including not processing the personal data of a child "in a way that is inconsistent with the best interests of children likely to access the (business's) online product." Unlike California's law, however, Maryland's does not require businesses to determine the age of their users or give the attorney general authority to second-guess whether companies' content moderation decisions comply with posted terms of service. Maryland's law, effective 1 Oct. 2024, has not been challenged in court to date.

The New York approach

This year, New York lawmakers enacted two bills directed at kids' use of online technologies: the New York Child Data Protection Act and the Stop Addictive Feeds Exploitation for Kids Act. Among other provisions, the New York Child Data Protection Act severely restricts the ways in which covered operators can collect and process the personal data of children under 18 years of age.

Operators' processing activities must be strictly necessary for certain specified activities; otherwise, operators must obtain informed consent. Operators are also prohibited from purchasing, selling or allowing a processor or third-party operator to purchase or sell the personal data of children under 18. In a novel provision, the law requires operators to treat users as under 18 if a user's device communicates or signals that the user is or shall be treated as a minor through a browser plug-in or privacy setting, device setting or other mechanism that complies with attorney general regulations. As of this writing, we are aware of no viable age signal of this kind.
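To make the age-signal concept more concrete, the sketch below shows how a covered operator might honor such a signal server-side. It is purely illustrative: the "Sec-Age-Signal" header name and its values are our own assumptions, as the law leaves the mechanism to forthcoming attorney general regulations and, as noted above, no viable signal existed when this article was drafted.

```typescript
// Minimal sketch of how an operator might honor a hypothetical browser age signal.
// NOTE: "sec-age-signal" is an invented header name used for illustration only; the
// New York law does not specify a signal, and no attorney general regulation or
// widely adopted mechanism existed when this article was drafted.

interface AgeDetermination {
  treatAsMinor: boolean;
  source: "device-signal" | "default";
}

function determineMinorStatus(headers: Record<string, string | undefined>): AgeDetermination {
  // If the device or browser affirmatively signals the user is (or must be treated
  // as) a minor, the operator would apply the law's default protections: no
  // processing beyond what is strictly necessary absent informed consent, and no
  // purchase or sale of the user's personal data.
  const signal = headers["sec-age-signal"]; // hypothetical header
  if (signal === "minor" || signal === "treat-as-minor") {
    return { treatAsMinor: true, source: "device-signal" };
  }
  // Absent a signal, the operator falls back to its own compliance posture.
  return { treatAsMinor: false, source: "default" };
}

// Example usage with hypothetical request headers:
const decision = determineMinorStatus({ "sec-age-signal": "minor" });
console.log(decision); // { treatAsMinor: true, source: "device-signal" }
```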

The SAFE for Kids Act is a social media law that applies to "any person, business, or other legal entity, who operates or provides an addictive social media platform." An addictive social media platform is a "website, online service, online application, or mobile application, that offers or provides users an addictive feed as a significant part of" its services. Among other provisions, it is unlawful for a covered operator to provide an addictive feed to a covered user unless that operator has used commercially reasonable and technically feasible methods to determine the user is not under 18 or has obtained verified parental consent. The New York attorney general's office recently posted an Advanced Notice of Proposed Rulemaking for both New York child privacy and safety laws.

The New York laws are notable for containing a significant conceptual divergence in how they balance the equities of child online privacy and safety. While the New York Child Data Protection Act trusts teens to make their own choices about the use of their personal information, the SAFE for Kids Act instead requires teens to obtain parental consent to access social media content that is algorithmically curated.

The Connecticut/Colorado approach

Another significant development in children's privacy law was the passage of Colorado's SB 41, sponsored by Sen. Robert Rodriguez, D-Colo. The bill is based on a 2023 children's privacy bill passed in Connecticut. It amends the Colorado Privacy Act to create a duty of care for operators to avoid heightened risks of harm to children using their services. It also adds protections for children's data, such as requiring opt-in consent from children between the ages of 13 and 17 before their personal data may be used for targeted advertising or sold. It remains to be seen whether the Colorado and Connecticut amendments will be used as a template for other states to provide greater privacy protections for children.

Further, as noted previously, Virginia lawmakers passed a bill amending Virginia's consumer data privacy law to add provisions related to children's privacy. This year, California lawmakers also passed Assembly Bill 1949, which would have amended the CCPA to add protections for children, including communicating age through device signals, similar to the New York Child Data Protection Act. However, California Gov. Gavin Newsom vetoed it, expressing concern about the potential unintended consequences of requiring businesses to distinguish between minor and adult users at the point of collection.

Social media laws

Other states have sought to address children's online privacy and safety issues by passing laws specifically aimed at social media companies. In Utah, lawmakers took a second try at passing social media legislation. In 2023, it became one of the first states to regulate social media companies' treatment of children with passage of SB 152 and HB 311. Together, the bills enacted the Utah Social Media Regulation Act, Utah Code §§ 13-63-101 to 701. However, in the face of a lawsuit filed by NetChoice, lawmakers decided to repeal and replace the law with the Utah Minor Protection in Social Media Act, SB 194 and HB 464, which addresses harm to minors by algorithmically curated social media services. In early September, a federal district court enjoined the act, finding it unconstitutional on various First Amendment grounds.

In late August, a federal district court partially enjoined Texas' social media regulation law, HB 18. The court found the law's "monitoring-and-filtering" requirements could not survive First Amendment scrutiny, but the other challenged provisions of the law were not, at least at the preliminary injunction stage, unconstitutional. The Texas attorney general appealed the decision to the 5th U.S. Circuit Court of Appeals. NetChoice has similarly secured preliminary injunctions against social media age verification laws in Mississippi, Ohio and Arkansas.

Biometric privacy: A growing concern but with a twist

The evolution of biometric-specific privacy laws in the U.S. has been uneven at best. For years, only three states had biometric privacy laws: Illinois, Texas and Washington. In the 2023 session, the American Civil Liberties Union lobbied state lawmakers to pass free-standing biometric privacy bills. While a number of state lawmakers ran those bills, none passed. Rather, many states passed broader consumer data privacy laws that require controllers to obtain consent before processing biometric data and other categories of sensitive personal information.

This approach changed somewhat in 2023 with the passage of Washington state's MHMDA, which considers biometric data to be consumer health data and, therefore, regulated under the law and subject to its private right of action. Nevada passed a similar law, although it does not include a PRA. This year, Colorado lawmakers continued this trend by amending the Colorado Privacy Act to add new biometric privacy provisions. Although the law already requires consent to collect biometric data, the amendment extends protections to more types of companies and applies requirements to employee data.

Another important legislative development in 2024 for biometric privacy laws was Illinois passing an amendment to BIPA. The impetus to amend BIPA arose out of the Illinois Supreme Court's 2023 ruling in Cothron v. White Castle System, in which the court held "the plain language of (BIPA) section 15(b) and 15(d) shows that a claim accrues under (BIPA) with every scan or transmission of biometric identifiers or biometric information without prior informed consent." In its briefing, White Castle estimated such a finding would expose it to "annihilative liability" in excess of USD17 billion.

The BIPA amendment bill addressed this issue by amending section 15(b) to say a "private entity that, in more than one instance, collects, captures, purchases, receives through trade, or otherwise obtains the same biometric identifier or biometric information from the same person using the same method of collection in violation of [section 15(b)] has committed a single violation of [section 15(b)] for which the aggrieved person is entitled to, at most, one recovery under this Section." The bill makes a similar change to section 15(d). The amendment also updates BIPA's definition of a written release to include an "electronic signature."
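To illustrate why per-scan accrual raised the specter of "annihilative liability" and how the amendment changes the exposure calculus, the sketch below compares the two accrual rules. The workforce figures are assumed purely for illustration; the USD1,000 figure is BIPA's liquidated-damages amount for negligent violations.

```typescript
// Illustrative comparison of BIPA exposure before and after the 2024 amendment.
// All workforce figures below are assumptions for illustration only.

const EMPLOYEES = 1_000;             // assumed number of affected employees
const SCANS_PER_EMPLOYEE = 2_500;    // assumed: e.g., multiple scans per shift over several years
const DAMAGES_PER_VIOLATION = 1_000; // USD, BIPA liquidated damages for a negligent violation

// Pre-amendment (Cothron): a claim accrues with every scan taken without prior informed consent.
const perScanExposure = EMPLOYEES * SCANS_PER_EMPLOYEE * DAMAGES_PER_VIOLATION;

// Post-amendment: repeated collection of the same identifier from the same person by the
// same method is a single violation, entitling that person to, at most, one recovery.
const perPersonExposure = EMPLOYEES * DAMAGES_PER_VIOLATION;

console.log(`Per-scan accrual:   USD ${perScanExposure.toLocaleString()}`);   // USD 2,500,000,000
console.log(`Per-person accrual: USD ${perPersonExposure.toLocaleString()}`); // USD 1,000,000
```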

Finally, 2024 saw a major biometric enforcement action with the Texas attorney general securing a first-of-its-kind USD1.4 billion settlement with Meta for alleged violations of the Texas Capture or Use of Biometric Identifier Act. In general, CUBI prohibits a person from capturing a biometric identifier for a commercial purpose unless they obtain the individual's informed consent. The Texas attorney general's complaint alleged Meta's since-abandoned use of "tag suggestions" violated CUBI's informed consent provisions. Notably, as part of its settlement, the Texas attorney general entered into a novel "safe harbor" program with Meta, permitting the company to obtain binding attorney general guidance about the potential applicability of the state's biometric privacy law to the company's products and services.

Consumer health data privacy: Much ado about nothing?

Perhaps the biggest development in state consumer health data privacy was the PRA in Washington's MHMDA, which went into effect in March. Many commentators believed the PRA's effective date would bring with it an avalanche of litigation. However, as of this writing, we are not aware of any MHMDA lawsuits having been filed, either by private litigants or the state attorney general's office. The exact reason for the lack of private litigation is unknown; however, many believe the law's absence of statutory damages may have chilled the plaintiffs' bar's appetite for aggressive litigation, at least for the time being.

Nevada's PRA-less copycat law, SB 370, also went into effect this year. As with the MHMDA, there have been no known enforcement actions brought by the Nevada attorney general's office.

Although no state passed an MHMDA copycat law in 2024, many bills were introduced, including in Hawaii, Illinois, Mississippi and Washington, D.C. For the second year in a row, New York lawmakers moved a consumer health data privacy bill through the state Senate, but it failed to pass the Assembly. New York's ongoing health privacy effort is worth tracking, as the framework includes sweeping and unique proposals, including a mandatory 24-hour delay in obtaining user consent to offer a service that uses health information.

At the periphery of consumer health data privacy bills, Colorado amended the Colorado Privacy Act with a first-of-its-kind bill to add neural data as a category of sensitive data, thereby requiring controllers to obtain consent for its collection. However, the true impact, if any, of the bill remains to be seen, as it requires neural data to be "used or intended to be used, singly or in combination with other personal data, for identification purposes."

California lawmakers passed a similar bill amending the CCPA's definition of sensitive personal information to include neural data, which it defines as "information that is generated by measuring the activity of a consumer's central or peripheral nervous system, and that is not inferred from nonneural information." While the CCPA amendment does not link the treatment of neural data to identification, the CCPA only grants consumers the right to limit a business's use of sensitive personal information to certain types of processing activities, and that right applies only if the business uses the neural data to infer characteristics about the individual.

Data brokers: An off year from new laws

In 2023, Oregon and Texas enacted data broker registration bills, and California amended its existing law to create additional requirements. Based on this activity, we expected more states to enact data broker registration laws in 2024. However, while states like Alabama, Hawaii, Illinois, Tennessee and Washington ran bills, none saw movement. Nonetheless, data broker regulation continues to gain steam in California, where the California Privacy Protection Agency initiated formal rulemaking on new data broker regulations.

Of particular interest, the proposed regulations define the term direct relationship, which is not defined in the California data broker law. To be a data broker, a business must not have a direct relationship with the consumers whose data it collects. Therefore, the regulation's definition of direct relationship effectively determines how broadly the data broker law will apply.

The proposed definition potentially sweeps in many more companies than the legislature likely intended. In particular, the definition states a business is still a data broker "if it has a direct relationship with a consumer but also sells personal information about the consumer that the business did not collect directly from the consumer." Given that organizations determined to be data brokers will ultimately be subject to bulk deletion requests, which could impact information users affirmatively provide to companies, this is likely to be a substantially contested definition with significant impacts on the ultimate exercise of consumer rights in California.

AI: The next wave

As a starting point, state lawmakers have attempted to coordinate AI legislation across state lines to drive interoperability. Led by Sen. James Maroney, D-Conn., author of the Connecticut Data Privacy Act, a bipartisan group of state lawmakers from more than 20 states formed a work group facilitated by the Future of Privacy Forum. State lawmakers met routinely in the second half of 2023, hearing from AI experts in multiple fields and disciplines. Ultimately, Maroney drafted a bill focused primarily on mitigating algorithmic discrimination in consequential decision-making systems. The bill passed in the Connecticut Senate, but House leadership refused to bring it to a vote after Connecticut Gov. Ned Lamont signaled he would veto the bill.

At the same time in Colorado, Sen. Rodriguez — a member of the multistate workgroup — introduced a bill that tracked the Connecticut bill. With the backing of the state attorney general's office and strong House co-sponsors, Rodriguez was able to secure passage of the Colorado AI Act. Gov. Jared Polis eventually signed the bill with reservations, and Colorado became the first state to pass legislation to mitigate algorithmic discrimination for high-risk processing activities. As part of its passage, Colorado lawmakers also passed a work group bill aimed at considering amendments to the law prior to its 1 Feb. 2026 effective date. The work group held its first meeting on 29 Aug. The act also grants rulemaking authority to the Colorado attorney general's office, although it is expected the office will wait until after the 2025 legislative session before initiating rulemaking.

Conversely, Utah lawmakers passed a narrow law regulating the private sector's use of AI. Among other provisions, the law specifies Utah's consumer protection laws apply equally to an entity's use of generative AI as they do to its other activities and requires private sector entities to take steps to disclose and/or respond to inquiries about the use of generative AI.

Illinois lawmakers also passed a narrow bill, HB 3773, which amends the Illinois Human Rights Act to regulate the use of AI in certain employment settings. The bill adds to the list of prohibited activities by forbidding employers from using AI that "has the effect of subjecting employees to discrimination on the basis of protected classes under (the Human Rights Act) or to use zip codes as a proxy for protected classes under" the Human Rights Act specifically with respect to "recruitment, hiring, promotion, renewal of employment, selection for training or apprenticeship, discharge, discipline, tenure, or the terms, privileges, or conditions of employment."

California lawmakers considered many bills that would regulate the private sector's use of AI. Ultimately, lawmakers passed four significant bills, but one was vetoed by Gov. Newsom. The three bills signed into law are:

  • AB 2013, which provides that developers of generative AI systems or services must post documentation on their websites regarding the data used to train the system or service before making their systems or services publicly available to Californians.
  • AB 2885, which amends California law to define AI as "an engineered or machine-based system that varies in its level of autonomy and that can, for explicit or implicit objectives, infer from the input it receives how to generate outputs that can influence physical or virtual environments."
  • SB 942, the California AI Transparency Act, which creates transparency obligations for persons that create, code or otherwise produce generative AI systems that have one million monthly visitors or users and are publicly accessible within California's geographic boundaries.

Newsom vetoed the Safe and Secure Innovation for Frontier Artificial Intelligence Act, SB 1047. That bill would have created sweeping regulations for the largest AI models, such as developing a security and safety protocol and maintaining a kill switch for covered systems.

As noted in our prior article, lawmakers also passed, and Newsom signed, AB 1008, which provides that personal information can exist in various formats, including AI "systems that are capable of outputting personal information."

Three notable AI-related bills that did not pass the legislature were AB 2930 on regulating algorithmic discrimination in automated decision tools, AB 3211 on provenance, authenticity and watermarking standards, and AB 1791 on digital content provenance for social media platforms. Of note, as originally introduced, AB 2930 was similar to the Colorado AI Act in scope. However, the bill was eventually narrowed to only apply to employment-related decisions before being withdrawn by the sponsor.

Conclusion

The continuing emergence of state sectoral and AI-related laws has added a new and complex compliance wrinkle for companies determining their data privacy obligations. Further, the passage of these laws has only gained steam over the past few years as state lawmakers have sought to address perceived harms beyond the general collection and processing of consumer personal data. Absent federal preemptive action, there is no reason to believe this momentum will slow in the coming legislative sessions.

David Stauss, CIPP/E, CIPP/US, CIPT, FIP, PLS, is a partner at Husch Blackwell.

Keir Lamont, CIPP/US, is senior director of the U.S. Legislation team at the Future of Privacy Forum.