The Kids Are All Rights: The Conflict between Free Speech and Youth Privacy Laws
This article analyzes the complex relationship between First Amendment rights and the future of youth privacy and safety laws.
Published: September 2023
In 2022, youth privacy legislative proposals began to boom across the states. From California's Age Appropriate Design Code Act to Utah's contentious Social Media Regulation Act and Louisiana's HB 61 governing online services contracts, states are introducing a full range of laws aimed at protecting minors online. To do so, they are deploying privacy tools coupled with safety and mental health standards, putting the onus on companies to design with the "best interests" of young users in mind.
However, as evidenced by NetChoice's swiftly filed suit against the California AADC, a major legal debate is underway over whether youth safety laws conflict with free speech protections. In district courts across the country, these laws are being challenged at the intersection of privacy rights and the First Amendment to the U.S. Constitution.
The rubber is meeting the road now that court challenges are resulting in injunctions, most recently in a preliminary injunction of California's AADC ordered by Judge Beth Labson Freeman of the U.S. District Court for the Northern District of California. The order temporarily blocks the AADC from going into effect, pending the ongoing battle over the merits of the case.
In issuing the preliminary injunction, the judge peeked behind the curtain at the merits of the case and concluded NetChoice will likely succeed in arguing that the law violates First Amendment protections. This outcome is not unexpected, but the breadth of the judge's reasoning is surprising to many court watchers. The analysis in this case, if accepted by higher courts, has drastic implications for any law that restricts the collection and sharing of personal data in the U.S. Although the outcome could change, it is unlikely the law will go into effect in July 2024 as planned.
It is important for practitioners and policymakers alike to understand how U.S. free-speech protections can lead to these outcomes—and how future legislative proposals could avoid these pitfalls.
Background: Common requirements in new youth privacy laws
Although often not strictly focused on data privacy, the new wave of youth safety laws matters to privacy professionals because these laws usually require changes to the privacy programs of any company that interacts with young consumers.
One distinguishing characteristic of California's AADC is its data protection impact assessment mandate. Any covered business whose services are likely to be accessed by children must include, in the DPIA for each such service, an analysis of any risk of material detriment to children arising from its data management practices. The DPIA requirement has been at the core of the free-speech debate around the California AADC because it asks companies to mitigate or eliminate identified harms, which would likely include adjusting content for at least a subset of users. As explained below, government restrictions on content are often disallowed by the First Amendment.
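To make the obligation concrete, here is a minimal sketch of how a covered business might record that analysis internally. The data model is entirely hypothetical; the statute prescribes what a DPIA must cover, not how it is structured.

```typescript
// Hypothetical record for one identified risk in a DPIA covering a
// service "likely to be accessed by children." Field names are
// illustrative, not statutory.
interface ChildRiskEntry {
  dataPractice: string;      // e.g., "precise geolocation collection"
  materialDetriment: string; // the identified risk to children
  severity: "low" | "medium" | "high";
  mitigation: string;        // planned step to mitigate or eliminate the harm
  mitigated: boolean;
}

interface Dpia {
  serviceName: string;
  completedOn: Date;
  childRisks: ChildRiskEntry[];
}

// Surface risks still awaiting mitigation, e.g., before a launch review.
function openRisks(dpia: Dpia): ChildRiskEntry[] {
  return dpia.childRisks.filter(risk => !risk.mitigated);
}
```

Notably, Judge Freeman's analysis, excerpted below, faults the statute for not requiring actual mitigation of identified risks, which is exactly the gap a tracking structure like this would expose.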
Other new obligations from this law and similar laws include age verification requirements, "dark pattern" restrictions, default privacy settings, and limitations on the collection and storage of youth data.
Age assurance requirements
Nowhere are youth safety laws more in tension with youth privacy than in how they require and define age assurance. Whether by declaration, estimation or verification, implementing an effective age-gate depends on a mechanism for assessing user age. Implementing any such process, however, means balancing the reliability of the age assurance against interests of privacy and convenience. Legislators are hesitant to mandate a specific method, often opting instead for broader, vaguer language about how organizations must age-gate their systems. For example, the California AADC requires a user's age to be estimated with a "reasonable level of certainty appropriate to the risks that arise from the data management practices of the business or apply the privacy and data protections afforded to children to all consumers." Likewise, Arkansas' law requires "reasonable age verification," but further specifies that the platform must use a third-party vendor to conduct it. In addition to accepting either a digitized identification card or a government-issued ID, Arkansas allows a platform to implement "any commercially reasonable age verification method."
Under recently introduced Maryland legislation, affected platforms would be required either to estimate the age of "child users" with this same reasonable level of certainty or, by default, "apply to all consumers the privacy and data protections afforded to children," mirroring California's default protections approach. Texas' recently introduced SB 2164 comes closest to defining a required age-assurance method in specific terms, while remaining broad. The Texas bill would require any entity that performs "age verification" to "provide digital identification; or comply with a commercial age verification system that verifies age using: (A) government-issued identification; or (B) a commercially reasonable method that relies on public or private transactional data to verify the age of an individual."
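Across these statutes, the common operational shape is a fallback: either estimate a user's age with a level of certainty appropriate to the risk, or treat the user as a child by default. The TypeScript sketch below is one illustrative reading of that logic, not an implementation of any statute; the confidence threshold and signal model are invented for the example.

```typescript
// Illustrative age-assurance signal. "confidence" stands in for whatever
// reliability a declaration, estimation, or third-party verification
// method reports; it is not a term from any of the statutes above.
interface AgeSignal {
  estimatedAge: number | null; // null when no estimate is available
  confidence: number;          // 0..1 reliability of the estimate
}

type Treatment = "adult" | "child-protections";

// AADC-style fallback: if age cannot be estimated with a level of
// certainty appropriate to the risk, apply children's privacy and data
// protections by default. The 0.9 threshold is a made-up stand-in for
// "reasonable level of certainty."
function chooseTreatment(signal: AgeSignal, riskThreshold = 0.9): Treatment {
  if (signal.estimatedAge === null || signal.confidence < riskThreshold) {
    return "child-protections";
  }
  return signal.estimatedAge >= 18 ? "adult" : "child-protections";
}
```

Note the privacy tension this creates: the higher the required certainty, the more a service is pushed toward collecting additional personal data, such as government IDs or transactional records, just to clear the bar.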
Prohibition of dark patterns
Likewise, many recent legislative frameworks either directly or indirectly target the use of manipulative design practices, so-called dark patterns. Florida's law creates its own definition of dark patterns while referring to U.S. Federal Trade Commission guidance. Connecticut's bill mirrors Florida's by requiring that no provider "use any user interface designed or manipulated with the substantial effect of subverting or impairing user autonomy, decision-making or choice," and it incorporates the FTC definition of a dark pattern. Regulated entities in Florida may not use dark patterns to encourage children to yield personal information or "forgo privacy protections," or for any other purpose the platform is aware of (or willfully disregards) that would "result in substantial harm or privacy risk to children." While Connecticut uses nearly identical language, California's law uses slightly different wording to create a "materially detrimental" standard. Defining dark patterns and their resulting harms is not a new challenge for privacy regulation. However, as noted in an amicus brief signed by the Chamber of Progress, IP Justice and LGBT Tech Institute, when dark patterns are defined too broadly, legislation risks limiting popular and benign features like recommendations or autoplay algorithms.
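One narrow, operational reading of these provisions is that an interface shown to a child must not pre-select privacy-reducing choices. The sketch below is a hypothetical compliance check under that reading; the option model and field names are invented for illustration.

```typescript
// Hypothetical model of a consent prompt's options. Field names are
// illustrative, not drawn from any statute or FTC guidance.
interface ConsentOption {
  label: string;           // e.g., "Share my activity with partners"
  reducesPrivacy: boolean; // enables data sharing, profiling, etc.
  preselected: boolean;    // a choice made for the user by the interface
}

// Flag a prompt that makes a privacy-reducing choice on the child's
// behalf, one concrete pattern the laws above appear to target.
function preselectsPrivacyReduction(options: ConsentOption[]): boolean {
  return options.some(o => o.reducesPrivacy && o.preselected);
}
```

A check like this captures only one slice of the statutory definitions; features like countdown timers or asymmetric cancellation flows would need separate, equally judgment-laden tests, which is part of the vagueness concern discussed later in this article.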
Strong default privacy settings
Finally, youth privacy bills tend to focus on operationalizing strong default privacy settings when the user is within the protected age range or, sometimes, when their age is undetermined. The California AADC requires that services likely to be accessed by children build in protections "by design and by default" against practices such as profiling children and serving them detrimental material. These strong defaults are vaguely defined as "settings that offer a high level of privacy." Likewise, the AADC does not explicitly define what detrimental material is, nor what is materially detrimental, and Maryland's proposed rule mirrors this exact language. More precise meanings for several of these critical terms will have to emerge through rulemaking. Beyond state-level laws, default settings continue to be a frequent battleground for youth privacy rights, as evidenced in the FTC's recent suit and settlement with Epic Games over the on-by-default voice channels in Epic's popular game, Fortnite.
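What "high privacy by default" might look like in practice is easiest to see in code. The sketch below pairs with the age-assurance fallback sketched earlier; every setting name is hypothetical, and the off-by-default voice chat echoes the issue in the Epic Games settlement rather than any statutory text.

```typescript
// Hypothetical privacy settings. Names are illustrative only.
interface PrivacySettings {
  profilingEnabled: boolean;
  preciseGeolocation: boolean;
  openVoiceChat: boolean; // cf. the FTC's Fortnite settlement
  profileVisibility: "private" | "friends" | "public";
}

// Apply the strictest defaults whenever the user is treated as a child,
// including when age is undetermined -- the pattern these bills mandate.
function defaultSettings(treatment: "adult" | "child-protections"): PrivacySettings {
  if (treatment === "child-protections") {
    return {
      profilingEnabled: false,
      preciseGeolocation: false,
      openVoiceChat: false,
      profileVisibility: "private",
    };
  }
  // Adult defaults in this sketch still leave sensitive options opt-in.
  return {
    profilingEnabled: true,
    preciseGeolocation: false,
    openVoiceChat: true,
    profileVisibility: "friends",
  };
}
```

The user can later loosen these settings deliberately; the point of the default is that loosening requires an affirmative choice rather than happening silently.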
Steps to analyzing a free speech case
In the U.S., the First Amendment enshrines free speech as a fundamental democratic right, with centuries of precedent upholding its strong protection. Now, First Amendment challenges are becoming the primary battleground for youth privacy laws. Privacy pros should be aware that the outcome of these constitutional challenges will determine whether laws like the AADC remain intact, are significantly weakened or are struck down entirely.
A court's basic approach to analyzing a First Amendment challenge requires a series of legal steps. At each step, the parties argue over the proper application of First Amendment jurisprudence to the regulation in question, in this case California's AADC.
1. Is the government restricting speech?
The first question a judge asks about a First Amendment challenge is whether the conduct restricted by the challenged law counts as protected speech. Courts continually wrestle with determining what qualifies as speech in an online context.
In peeking at the merits of the case, Judge Freeman disposed of this question relatively quickly. As she explained, quoting Supreme Court precedent, "The question is whether the law at issue regulates expression ‘because of its message, its ideas, its subject matter, or its content.’" To answer this question, Freeman examined both the prohibitions and the mandates of the AADC.
Importantly for First Amendment watchers, even the actions prohibited under the law, such as the "collection, sale, sharing, or retention of children’s personal information, including precise geolocation information, for profiling or other purposes" are deemed to be restrictions of speech. Freeman cited the 2011 Supreme Court case Sorrell v. IMS Health, which some privacy attorneys warn could set a precedent for invalidating data privacy laws on First Amendment grounds, depending on how it is applied. As Freeman explained, "[W]hether there is a general right to collect data is independent from the question of whether a law restricting the collection and sale of data regulates conduct or speech. Under Sorrell, the unequivocal answer to the latter question is that a law that—like the [AADC]—restricts the ‘availability and use’ of information by some speakers but not others, and for some purposes but not others, is a regulation of protected expression."
In looking at the actions mandated by the AADC, such as the DPIA requirement, the court again cited Sorrell. Freeman took issue with the law's application to only certain speakers, i.e., "only for-profit entities but not government or non-profit entities," and further concluded that the age estimation and privacy provisions "appear likely to impede the availability and use of information and accordingly to regulate speech."
2. Is the regulation content-based or content-neutral?
If the law at issue is regulating speech, how much deference should the court give to the government? Different standards of judicial review apply depending on whether the regulation is content-based or content-neutral.
If a law targets speech "based on its communicative content," it is a content-based law that triggers an extremely skeptical review by courts known as "strict scrutiny." There are many exceptions to this rule because certain special category restrictions, covering content like commercial speech, fighting words, or obscenity, trigger other flavors of judicial review.
If the court determines strict scrutiny is not warranted, it will instead apply intermediate scrutiny. As a lower standard, intermediate scrutiny only requires the regulation to further an "important government interest" and that the means are "substantially related" to this interest. Often, fulfilling the second prong requires the regulation to leave "ample avenues for other communication" open.
Somewhat unexpectedly in the AADC case, Freeman applied a special type of intermediate scrutiny that governs "commercial speech." The analysis applied under this standard comes from the Central Hudson case. This slightly more elaborate standard essentially boils down to whether the government has a "substantial interest" that would be achieved through the speech restrictions at issue. From there, the government must show that these restrictions are not "more extensive than is necessary to serve that interest."
3. Does the law advance an important or substantial government interest?
No one disputes the fact that "protecting the physical, mental, and emotional health and well-being of minors," as the state of California puts it in briefs, is an important government interest.
Though NetChoice argued the law does not describe a sufficiently concrete harm, the court accepted the evidence California provided in the case as sufficient: children are harmed by certain design and privacy practices, and preventing this sort of harm is within the powers of the state. This is one area where California is unlikely to lose as the case moves forward.
4. Does restricting speech advance the goal of the law without going too far?
This is the part of the legal test that will likely determine the outcome of this case, as Freeman previews in her preliminary injunction.
Whether the court looks for the regulatory mechanisms to be "substantially related" to the government interest, as under intermediate scrutiny, or not be "more extensive than necessary to serve that interest," as under the commercial speech standard, this step of the analysis asks judges to look for a fit between the means and the ends of the law.
For the California AADC, Freeman's preliminary look at the case found the ends may not justify the means, not just for the most hotly debated parts of the law, but for every single provision she considered.
In reviewing the court's reasoning, excerpted below, it is important to keep in mind that the primary purpose of this law is the safety of minors online, with privacy interests invoked only secondarily. For other laws whose ends are targeted more specifically at privacy, the same type of analysis could lead to different conclusions, because First Amendment jurisprudence turns on the fit between a law's means and its ends.
AADC requirement: Mandatory DPIAs
Judge Freeman's analysis: "Because the DPIA report provisions do not require businesses to assess the potential harm of the design of digital products, services, and features, and also do not require actual mitigation of any identified risks, the State has not shown that these provisions will in fact alleviate the identified harms to a material degree."

AADC requirement: Mandatory age estimation
Judge Freeman's analysis: The AADC’s "age estimation provision appears not only unlikely to materially alleviate the harm of insufficient data and privacy protections for children, but actually likely to exacerbate the problem by inducing covered businesses to require consumers, including children, to divulge additional personal information."

AADC requirement: Treating all users with kids' protections
Judge Freeman's analysis: "If a business chooses not to estimate age but instead to apply broad privacy and data protections to all consumers, it appears that the inevitable effect will be to impermissibly reduce the adult population to reading only what is fit for children … Such an effect would likely be, at the very least, a 'substantially excessive' means of achieving greater data and privacy protections for children."

AADC requirement: High default privacy settings
Judge Freeman's analysis: "The provision here would serve to chill a 'substantially excessive' amount of protected speech to the extent that content providers wish to reach children but choose not to."

AADC requirement: Age-appropriate versions of privacy policies
Judge Freeman's analysis: "Even accepting that the manner in which websites present privacy information … constitutes a real harm to children’s well-being because it deters children from implementing higher privacy settings, the State has not shown that the [AADC’s] policy language provision would directly advance a solution to that harm."

AADC requirement: Mandatory internal enforcement of policies
Judge Freeman's analysis: "The lack of any attempt at tailoring the proposed solution to a specific harm suggests that the State here seeks to force covered businesses to exercise their editorial judgment in permitting or prohibiting content that may, for instance, violate a company’s published community standards."

AADC requirement: Prohibition on the knowingly harmful use of children’s data
Judge Freeman's analysis: "NetChoice has provided evidence that covered businesses might well bar all children from accessing their online services rather than undergo the burden of determining exactly what can be done with the personal information of each consumer under the age of 18."

AADC requirement: Prohibition on profiling children by default
Judge Freeman's analysis: The state argues the provision is narrowly tailored to "prohibit profiling by default when done solely for the benefit of businesses, but allows it … when in the best interest of children." But as amici point out, "what is in the best interest of children is not an objective standard but rather a contentious topic of political debate."

AADC requirement: Restrictions on collecting, selling, sharing and retaining children's data
Judge Freeman's analysis: "In seeking to prevent children from being exposed to 'harmful unsolicited content,' the Act would restrict neutral or beneficial content, rendering the restriction poorly tailored to the State’s goal of protecting children’s well-being."

AADC requirement: Unauthorized use of children’s data
Judge Freeman's analysis: "The State provides no evidence of a harm to children’s well-being from the use of personal information for multiple purposes."

AADC requirement: Prohibition of certain dark patterns that reduce privacy
Judge Freeman's analysis: "Many of the examples of dark patterns cited by the State’s experts—such as making it easier to sign up for a service than to cancel it or creating artificial scarcity by using a countdown timer, or sending users notifications to reengage with a game or auto-advancing users to the next level in a game—are not causally connected to an identified harm."

AADC requirement: Prohibition of dark patterns that harm children
Judge Freeman's analysis: "The Court is troubled by the 'has reason to know' language in the Act, given the lack of objective standard regarding what content is materially detrimental to a child’s well-being. And some content that might be considered harmful to one child may be neutral at worst to another. NetChoice has provided evidence that in the face of such uncertainties about the statute’s requirements, the statute may cause covered businesses to deny children access to their platforms or content."
Alternative legal theories that challenge kids’ safety and privacy laws
Even if higher courts disagree with Freeman’s analysis — whether in her application of the Sorrell case, her chosen level of judicial scrutiny or her conclusions about balancing free speech and youth safety interests — other legal arguments could cause future trouble for similar regulatory regimes.
Could the law survive strict scrutiny?
NetChoice claimed the AADC warrants strict scrutiny, the highest level of judicial review a constitutional challenge can prompt, on multiple grounds, including that the law is a content-based restriction of speech. In her preliminary injunction, Freeman determined intermediate scrutiny is more likely to apply and, because she sees a likelihood the law would not survive even that standard, found no need to analyze it under the more exacting one.
Yet it is helpful to understand the stricter standard for future free-speech cases, and how it has been applied in the past. Surviving strict scrutiny requires two basic elements, each more demanding than its intermediate scrutiny counterpart: the government must demonstrate that the law at issue fulfills a "compelling government interest" and is "narrowly tailored" to suit this interest.
The U.S. Supreme Court has previously resolved the tension between child safety and free speech in favor of free speech, invalidating legal restrictions either completely or in part. The Supreme Court found sections of the Communications Decency Act unconstitutional in Reno v. ACLU, noting "although the Government has an interest in protecting children from potentially harmful materials … the CDA pursues that interest by suppressing a large amount of speech that adults have a constitutional right to send and receive."
The First Amendment rights of minors also have a significant legal history that bears on the government's interest. Traditionally, First Amendment rights are the same for minors and adults when the government seeks to control their access to speech. However, in Erznoznik v. City of Jacksonville, the court found that "only in relatively narrow and well-defined circumstances may government bar public dissemination of protected materials to (minors)." Likewise, the court in Reno placed narrow limits on the content that could be regulated to protect minors while still preserving their rights to free speech via access to the internet. In a recent decision, the court openly acknowledged the conflict between guardian and minor rights, with U.S. Supreme Court Justice Samuel Alito noting a student is "subject to whatever restraints the student’s parents impose, but (enjoys) the same First Amendment protection against government regulation as all other members of the public.”
Similarly, in Ashcroft v. ACLU, the Court found the "universal restrictions" in the Child Online Protection Act were overbroad and that less restrictive means of enforcement were available, ultimately invalidating the law.
There are exceptions, however. In U.S. v. American Library Association, the Children's Internet Protection Act survived a First Amendment challenge: the Court found that requiring libraries to install filters preventing minors from accessing obscene materials served a compelling government interest and was sufficiently narrowly tailored.
Is age assurance a prior restraint of speech?
As immortalized in New York Times Co. v. United States, the Pentagon Papers case, the state cannot restrict speech before its publication. This prohibition on prior restraint is another foundational First Amendment principle, and courts nearly universally disfavor government attempts to block or prevent speech before it occurs. Much of the earlier prior restraint litigation involved traditional journalistic outlets, like the New York Times. The debate has become more contentious, however, as social media platforms and other online providers grapple with the scale of the speech forums they host.
NetChoice's filing claims the age assurance methods required by the California AADC are an unconstitutional prior restraint of speech. NetChoice claims requiring users' ages to be verified preemptively restricts content and effectively chills speech that is not age appropriate. The complaint alleges the law will not only limit content for adult audiences but will also "infringe on protections for anonymous speech and deter users from services deploying the invasive age-verification mechanisms." This type of limitation, they argue, is barred by the First Amendment as prior restraint, which includes "state action designed to deputize private actors to serve as censors by proxy."
Relatedly, others argue that any mechanism adding friction in the name of youth privacy can have drastic chilling effects on speech and that, whatever form the restriction takes, it operates as a prior restraint. As legal scholar Eric Goldman noted in his brief for the NetChoice case, online users are incredibly sensitive to barriers to access. If there is a delay or "latency" when a user attempts to access a site, "it would drive many users away," Goldman claimed. Whether courts will credit this broader framing remains to be seen.
Is the law void for vagueness?
Across NetChoice's filing and the supporting amicus briefs, advocates emphasized how the California AADC and other similar laws should be void for vagueness under the First Amendment. Specifically, NetChoice argued the restricted practices covered under the California AADC are too broad, and they "disallow a range of commonplace online speech" that may be otherwise benign. Some of these commonplace practices include providing user recommendations and sending automated email updates. NetChoice argued further that, without a clearer description of the law's key obligations for potentially affected providers, the attorney general is left with "virtually boundless discretion" for enforcement.
Generally, First Amendment challenges based on vagueness put forward a simple claim: far-reaching laws deter otherwise lawful speech by not being precise enough to provide adequate notice. In U.S. v. Williams, the Supreme Court describes an unconstitutionally vague law as one which either fails to provide fair notice or "is so standardless that it authorizes or encourages serious discriminatory enforcement."
Just as laws can limit the functionality of online providers through prior restraint, there are concerns that vague rules would over-censor content by restricting some of the functions or even the lawful content of a site. As the Court held in Butler v. Michigan, a regulation cannot "reduce the adult population … to reading only what is fit for children." Much like how functions such as autoplay could be construed as "dark patterns" absent more tailored or precise language, other basic content could be deterred by a vague requirement.
Legislation meant to serve a general protective purpose may unintentionally sweep in speech that should not be restricted, depending on how harm is defined. As the New York Times asserts in its brief, "The opportunity for abuse is obvious here." Especially in a patchwork system, regulators may vary on the degrees and definitions of what is potentially harmful. While the state may have a compelling interest in protecting children from the risks of being online, it must legislate with precision and specificity to avoid having the rule voided for vagueness.
What remains after a successful constitutional challenge?
Are there valid and functional mechanisms for U.S. regulators to require age assurance and other youth privacy protections that avoid conflicts with the First Amendment?
The Supreme Court's Sorrell decision includes an important phrase that will likely continue to impact how data protection regulations are interpreted in the U.S.: "Privacy is a concept too integral to the person and a right too essential to freedom to allow its manipulation to support just those ideas the government prefers."
The First Amendment remains a powerful tool for disputing legislation in the U.S., and any law regulating speech must clear a high bar to be considered constitutionally sound. Federal laws on youth privacy have both survived and failed on constitutional grounds in the past. As organizations like NetChoice confront state-level attempts to regulate youth privacy, the courts are left to interpret the values of the First Amendment in the twenty-first century tech ecosystem.
Nevertheless, policymakers' attempts to address kids' safety and mental health issues online are not going to stop. As top-down regulations are delayed, building the lessons of these laws into privacy-by-design practices should still be top-of-mind for all privacy pros. Even if the government cannot mandate these practices, companies would do well to think about how to support young people who use their services in a way that respects their privacy, autonomy and healthy development.
Additional resources
Children's privacy resources