The Australian Government's Online Safety Amendment (Social Media Minimum Age) Act 2024, which enters into force 10 Dec. 2025, establishes a minimum age for social media use and requires social media providers to take reasonable steps to prevent underage users from having accounts on their platforms. Once the amendment takes effect, parents will no longer be able to consent to their children under 16 using certain online platforms, such as Facebook, Snapchat, Instagram and TikTok. About a week before the law takes effect, Meta began shutting down Facebook and Instagram accounts belonging to Australian children under the age of 16.
Rooted in the notion that "protections put in place by platforms have not kept pace with the harms," Australia's amended law comes into effect as children's online privacy and safety have become a higher priority for law and policymakers globally. In the EU, discussions on how to impose stricter accountability regimes on online platforms have grown louder following the passage of the Digital Services Act (Regulation (EU) 2022/2065).
At the commencement of enforcement proceedings under the DSA against TikTok in April 2024, former EU Commissioner for the Internal Market, Thierry Breton, described its reward-to-watch feature as "toxic and addictive as cigarettes." And in November 2025, Members of the European Parliament adopted a non-legislative report calling for a minimum age of 16 to access social media platforms, as well as bans on engagement-based recommender algorithms and gambling-like features, to protect minors. During the debate, Rapporteur Christel Schaldemose remarked, "We are saying clearly to platforms: your services are not designed for children. And the experiment ends here."
While the underlying policy objective of Australia's amendment is to protect children from cyberbullying, prevent online harassment and make online environments safer for children, organizations like UNICEF have argued the "real fix should be improving social media safety, not just delaying access." In an email to caregivers of underage users, YouTube pointed out that "parental controls only work when your pre-teen or teen is signed in, so the settings you've chosen will no longer apply" if they view content on the site without signing into their profiles. Some teen influencers said Australia's minimum age for social media accounts has "taken away their purpose," while other Australian teens reported an increased fear of isolation.
Although Australia is the first country to enact such a prohibition on social media accounts for minors, proposals to address online platform harms to minors have been part of law and policymaking discussions for years. Proposals to protect children's information online have envisioned various degrees of parental oversight and included a range of restrictions on and requirements for online platforms.
For example, the most recent iteration of the Kids Online Safety Act, H.R. 6484, proposed in the U.S. Congress would require platforms to take reasonable steps to address specific harms to minors, such as threats of physical violence and sexual exploitation and abuse. Its requirements include safeguards that, by default, limit the ability of other users to communicate with a minor, limits on addictive design features, and parental tools for restricting purchases and monitoring and restricting time spent on a platform.
There are several distinct concerns around the online safety of children, which can be grouped into at least three relatively discrete types of harm: content exposure, information privacy and design, e.g., dark patterns. Content-exposure harms concern the availability of content on social media that is violent, sexually explicit or otherwise inappropriate, i.e., non-beneficial or harmful, for children.
Information privacy-based harms involve how information about child social media users becomes publicly available and the extent to which that information can be shared, accessed or misused to harm them, including through targeted harassment, cyberbullying, stalking, threats of violence, sexual predation and financial exploitation. Lastly, design-based harms stem from platform features like infinite scroll, autoplay, notifications and other addictive elements that keep children using apps at the expense of their sleep, their need for cognitive rest and their opportunities for more meaningful social interaction or personal enrichment. These harms were perhaps best encapsulated by the CEO of a popular streaming service, who once explained, "We're competing with sleep, on the margin."
When the online safety of children is viewed through the lenses of these discrete harms, a blanket prohibition for minors looks like a hammer that treats all of these problems as nails. Such a ban can miss the different underlying problems, each of which requires a targeted solution. A study by the Center for Democracy and Technology on parents' and teens' perspectives on specific legal and regulatory restrictions examined their perceptions of limitations in four key areas that make up the pillars of most policy proposals: age verification, algorithmic feed controls, screen time features and parental access. The report demonstrated the importance of "flexibility, transparency, and respect for family dynamics and individual needs" across these pillars and argued that "[r]igid, one-size-fits-all policies therefore risk creating resistance, workarounds, or harm."
As research continues to shine a brighter light on the acute harms of screen time and the risk of technology addiction in children and teens, modern-day parents are more cognizant of these issues than ever. In an era marked by gentle parenting, conversations between parents and children focus on setting healthy boundaries, validating emotions and promoting positive discipline rather than relying on punishments and time-outs. Grappling with their own technology addiction, today's parents strive to show up for their children while holding onto the belief that there is a fundamental good inside their kids that is worth protecting from a world increasingly dominated by Big Tech and AI.
Indeed, most people today must answer privacy and technology questions for themselves and, as parents, for their children: How much screen time in a day is ideal? What should that screen time be spent on? Should children own smartphones and, if so, at what age? What should be the role of video games and, most importantly, of AI tools? Whichever style of parenting one adopts, each of these questions directly intersects with the laws being enacted and the policies being rolled out not only in the online world, but in schools, places of work and business, and in our homes and neighborhoods.
The increasing use of AI tools in schools adds a layer of questions and heightened risks, e.g., data breaches and deepfakes, at a critical time when children are still learning how to form healthy relationships. Another CDT study found that a significant number of students report that they or a friend have used AI chatbots to get advice on relationships (43%), receive mental health support (42%), serve as a friend or companion (42%) or engage in a romantic relationship (19%).
Imposing screen time limits and reducing technology distractions in school may be good steps toward increasing academic focus and enabling deeper, more meaningful social relationships. But it is also worth looking into what lies beneath the addictive behavior, or the reliance on AI for advice about sensitive issues or as a stand-in for social companionship. Parents and educators have begun to do this and are coalescing around common goals, which many lawmakers and regulators are also trying to advance: ensuring children do not develop dependencies on or addictions to these tools and instead understand their limited use and value. Artificial intelligence is great at information processing, but it lacks the crucial capacity for caring found in every great parent, educator and counselor. At most, AI should provide support for the formation of social bonds and creativity, not act as a stand-in for it.
But, ultimately, since no one has the perfect answer to questions around children's online safety, privacy and AI literacy need to be central parts of these conversations. Educating parents and children on screen use, technology adoption and, most importantly, information privacy should take primacy. This can begin with age-appropriate conversations with young children about the reasons stores use security cameras, the importance of passcodes on our phones, and the risks and benefits of Internet of Things devices in the home. As children get older, these conversations can progress to what is appropriate and useful to post on social media, who they are speaking to while playing a multiplayer online video game, and whether to use their own photo or an avatar. Talking to children about the devices we use and see, what these technologies are doing, and why and how we are using them is a great way to spark awareness and curiosity about information privacy. And creating a safe home environment where children can trust and connect with their families to talk about their online experiences and encounters with technology is essential.
Whether one likes it or not, children are growing up in an era dominated by Big Tech: smartphones, smart glasses and smartwatches abound. Technology is becoming at once more invisible and more ubiquitous, and privacy risks are rising with it.
Privacy is one of the primary enablers of self-development, self-determination, relational ties and the ability to "develop critical perspectives on the world," in the words of Julie Cohen, Mark Claster Mamolen Professor of Law and Technology at Georgetown Law. Privacy empowers individuals by giving them the personal space to be free from the forces that may hold them back from reaching their full potential. These things are impossible in a world without privacy, and they are the very things all parents ultimately want for their children.
Müge Fazlioglu, CIPP/E, CIPP/US, is the IAPP principal researcher, privacy law and policy.
