The digital revolution unleashed many good things on our world, but like most human inventions, digital products have been a mixed blessing. While we now have access to information, communication and shopping services that would have been the stuff of science fiction several decades ago, we must also reckon with the dark lining of the internet's silver cloud. These new problems include misinformation and disinformation, cyber-abuse, nonconsensual pornography, addictive technology products, and an unprecedented mental health crisis among children and young people.
To be sure, this mental health crisis has been so well-documented by mental health professionals as to be undeniable.
Social media use by young people also appears to be a contributor. Recent publications from the psychological community, including a report from the American Psychological Association and a concurring public advisory notice by U.S. Surgeon General Dr. Vivek Murthy, have shown that social media usage carries adverse physical and mental health consequences, with children being particularly vulnerable to such effects.
Former Facebook product manager Frances Haugen's 2021 whistleblowing revealed the company's focus on increasing engagement with its underage customers, despite its own research showing these underage users are subjected to substantial harm on its platforms. In her testimony before the Senate, Haugen noted the harms include cyberbullying, health and body image issues, and sexual abuse.
The emergence of the mental health crisis among young people, coupled with platforms' seeming indifference to the problem, has led to bipartisan efforts to address the toll that addictive, engagement- and profit-driven digital services take on young people's mental health.
In April, a bipartisan group of four U.S. senators introduced the Protecting Kids on Social Media Act. The proposed law, one of a series under consideration at the federal and state levels, would place restrictions on social media platforms to limit their use and engagement with underage Americans, with the goal of protecting children from the harms of these environments. The act defines social media as an app or website that "allows users to create accounts to publish or distribute" media content "to the public or other users," with a broad range of exceptions for email, texting, shopping, videoconferencing, news, gaming, subscription-based content, educational use and many others. These exceptions limit the scope of the act to focus squarely on account-based communication platforms and their customers.
The PKOSMA would make changes to how minors can use social media platforms, and in turn, how social media companies can interact with their underage customers. Yet we understand there have been rumblings that the PKOSMA is somehow a threat to free speech, along the lines of an industry group lawsuit challenging California's Age Appropriate Design Code.
When looked at critically, we find such First Amendment challenges to the act to be unpersuasive.
The PKOSMA is radically different from the censorious Communications Decency Act of 1996, which sought to cull the level of discourse on the open internet down to one appropriate for children, and which was correctly struck down by a unanimous Supreme Court in Reno v. ACLU (1997). The key issue was that the law created broad content-based restrictions on speech that treated the entire internet community as if they were children. Critically, the PKOSMA treats only children like they are children.
Instead of censoring the protected expression present on these platforms, the act takes aim at the procedures and permissions that determine the time, place and manner of speech for underage consumers. The act would still allow young people to read and view social media content freely, but it would place reasonable, age-based restrictions on their ability to post and engage with content, and it would allow children to view content without logging into an account.
The act would set two critical age-gates on young customers of social media platforms. First, customers under 13 would be banned from making social media accounts outright. Many of the most popular platforms, such as Twitter, Facebook, Snapchat and Instagram, already have internal rules banning preteens from having accounts, demonstrating that the act reflects current business practices. In the same way a crowded bar or nightclub is no place for a child on their own, this rule would set a reasonable minimum age and maturity limitation for social media customers.
Second, the PKOSMA would require those aged 13-18 to obtain verifiable parental consent before registering a social media account. This provision closely parallels the existing U.S. Children's Online Privacy Protection Act and its restriction on collecting personal information from consumers under the age of 13 without first obtaining affirmative parental consent. Like COPPA, the PKOSMA would impose a reasonable prerequisite of parental consent, forcing social media companies to take necessary steps to protect vulnerable children from the harms of their products. Such rules empower parents to actively raise their children as they see fit, rather than choking free expression rights.
There is long-standing precedent for the states' right to protect the welfare of children by regulating the availability of products, including products implicating free expression. Consider Ginsberg v. New York (1968), which concerned the sale of pornography to minors. There, the Supreme Court found "the State has an interest 'to protect the welfare of children' and to see that they are 'safeguarded from abuses' which might prevent their 'growth into free and independent well developed men and citizens.'"
Certainly such justifications have their limits, lest they be used as a back door for actual censorship, as in the Communications Decency Act, but the PKOSMA is far from the censorious CDA in both its intent and its effects.
It's therefore critical to note the PKOSMA does not create a restriction on the content of speech, but on its time, place and manner. The act does not restrict what minors actually post and share on social media platforms. Instead, it limits who may post, in certain places, at certain times, creating an important distinction from other recent child-related First Amendment cases.
It is true courts have struck down bans enacted in the name of child protection, in the process permitting acts such as selling violent video games to children in Brown v. EMA (2011), posting obscene online messages and materials children might view in Reno v. ACLU (1997), and commercially hosting online pornography in Ashcroft v. ACLU (2004). But unlike the laws in those decisions, the PKOSMA does not broadly prevent expression or target any particular speech content. Because it leaves open to underage speakers many non-social-media channels of communication, including the services listed as exceptions, unmediated group chats and all offline platforms, and because it furthers the important objective of protecting children, the act is likely to survive First Amendment scrutiny, like COPPA and unlike the content-based laws at issue in the cases we have discussed.
In sum, such First Amendment concerns about restricting children's access to social media are unwarranted. Left to their own devices, social media platforms will continue to target vulnerable children regardless of the harm they cause to physical and mental health. It is well settled that children cannot get tattoos, drive cars or drink alcohol, nor, without their parents' consent, can they do many other things that are constitutionally protected for adults.
The PKOSMA would treat social media the same way the law treats other dangerous environments where children do not belong, while leaving them the freedom to use the internet in other ways that more safely suit their interests and intellects. Reasonable minds can certainly differ on the PKOSMA as policy, but not on its constitutionality.