An emerging global patchwork of new frameworks requires digital platforms to implement age assurance or age verification, with many geared toward protecting children from online harms.
While governments share the goal of creating a safer digital ecosystem, there is no silver-bullet approach. Policymakers around the world are weighing a range of factors and principles, including the types of personal data collected to determine age or identity, the necessity and proportionality of that collection, consent mechanisms and even the purpose for deploying a given solution.
Similar to how the network of U.S. comprehensive state privacy laws has placed a layer of legal variance and regulatory uncertainty onto interstate commerce, the growing patchwork of age assurance and verification technology requirements differs by jurisdiction.
Australia sets the bar
The passage of Australia's Online Safety Amendment in November 2024 may represent the greatest shift from the status quo in the age assurance and age verification tech paradigm. The new law bars children under age 16 from accessing social media platforms and requires platform operators to incorporate age verification systems into their services.
Recently, Australia's Prime Minister Anthony Albanese announced YouTube would be designated as a social media platform under the law and will also be required to ban children under 16, joining traditional social media platforms, such as Meta, TikTok and X, already covered by the ban.
"There is no one perfect solution when it comes to keeping young Australians safer online — but the social media minimum age will make a significantly positive difference to their wellbeing," Minister for Communications Anika Wells said in a statement following Albanese's announcement.
"The rules are not a set and forget, they are a set and support," she continued. "There's a place for social media, but there's not a place for predatory algorithms targeting children."
In an episode of the IAPP Privacy Advisor Podcast earlier this year, Australia eSafety Commissioner Julie Inman Grant said the Online Safety Amendment came into existence through a "populist movement" over several years. She said parents would complain to government officials over all the different permissions they had to engage with on various platforms to ensure their child did not consume harmful content.
"The Prime Minister took a very decisive and bold view in saying, 'We are going to pass a law very quickly that will put the burden back on the platforms to prevent (children) under 16 from having a social media account,'" Inman Grant said. "This is one of the most complicated and novel pieces of legislation we've ever had to implement, and there are lots of dependencies and moving parts."
Inman Grant also touched on a pilot program sponsored by the Department of Infrastructure, Transport, Regional Development, Communications and the Arts, in which approximately 40 companies are testing age verification technologies under the new framework. As of 1 Aug., the final 10-volume report on the agency's Age Assurance Technology Trial had been submitted to the government.
The EU's approach
The European Union's approach to balancing children's online safety and age estimation requirements sheds greater light on the growing patchwork.
On the one hand, the European Commission is seeking alignment and commonality. It recently developed a blueprint for a "user-friendly and privacy-preserving" age verification method that will soon be piloted in Denmark, France, Greece, Italy and Spain. The blueprint followed the Commission's new guidelines for "proportionate and appropriate measures" to protect children online under Article 28 of the Digital Services Act.
At the national level, member states are set to explore "digital majority" regulations that would ban children under a certain age from accessing social media and/or require platforms to implement age verification. According to Euractiv, France, Greece, Ireland and Spain have each pursued or are pursuing domestic policy that either requires a form of age assurance or bans social media for children.
During a keynote panel at the IAPP and the Berkman Klein Center for Internet and Society's Navigate: Digital Policy Leadership Retreat 2025 in June, Ireland's Coimisiún na Meán Director Jeremy Godfrey said the EU is unlikely to pursue age restrictions as strict as those in Australia and the U.K. because EU policymakers believe blanket age restrictions conflict with children's right to seek information under the U.N. Convention on the Rights of the Child.
"Participating in social media helps develop children's creativity, their confidence, their planning, their agency, so there are some positives," Godfrey said. "So the aim has to be to make social media safe for children. It would be a huge issue of defeat to just give up, ban it, and leave it on the sideline so it just results in children using unregulated, ignorant platforms."
Godfrey said he views "zero-knowledge proof" and "low-friction proof" of age, such as what is being proposed for the EU's digital ID program, as tools that may alleviate the technical and financial concerns for companies worried about the added costs of implementing their own age verification systems. Under the program, an EU citizen's personally identifiable information would be stored in a secure virtual token that can be used to authenticate a user against whatever age restrictions may be in place for a given website or online service.
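To illustrate the idea at a high level, here is a minimal sketch of how such a tokenized, low-friction age check might work. The flow and names are illustrative assumptions, not the EU program's actual specification, and a simple keyed hash stands in for the issuer's real digital signature; the point is only that the platform receives a signed boolean claim, never the underlying identity data.

```python
import hashlib
import hmac
import json
import secrets

# Illustrative sketch only: an ID issuer signs a minimal "over 16" claim
# into a token; a platform verifies the claim without ever seeing a name,
# birthdate or ID number. HMAC with a demo key stands in for the issuer's
# real asymmetric digital signature.
ISSUER_KEY = secrets.token_bytes(32)

def issue_age_token(is_over_16: bool) -> dict:
    """Issuer (e.g., a digital ID wallet backend) mints a minimal claim."""
    claim = {"over_16": is_over_16, "nonce": secrets.token_hex(16)}
    payload = json.dumps(claim, sort_keys=True).encode()
    sig = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return {"claim": claim, "sig": sig}  # no name, no birthdate, no ID number

def platform_accepts(token: dict) -> bool:
    """Platform checks the issuer's signature and the single boolean claim."""
    payload = json.dumps(token["claim"], sort_keys=True).encode()
    expected = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, token["sig"]) and token["claim"]["over_16"]

token = issue_age_token(is_over_16=True)
print(platform_accepts(token))  # True: access granted, identity never disclosed
```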
"If the platforms spent a fraction of the money on age verification that they have spent enabling their ability to modernize their data, this problem would've been solved," he said.
Shifting the burden of conducting age assurance onto the platforms themselves represents a change in how regulators view a technology company's responsibility to its users, Godfrey added. If the costs of implementing the technology needed to meet new age assurance requirements are too high, he argued, then that company should reconsider offering services to children altogether.
"If you can't have a safe and profitable service for children, then don't have a service for children," Godfrey said. "We need to ensure they have the resources and ingenuity so they can find ways of making their services both safe and healthy."
The U.K.'s middle ground
As the EU sifts through its options, the U.K. is positioning itself to occupy a middle ground between Australia's age verification regime and what's being pursued among EU member states.
On 25 July, age verification requirements under the U.K. Online Safety Act entered into force. OSA requirements do not impose a specific form of age verification on platforms; the Office of Communications lists facial age estimation, the uploading of open banking information, digital identity checks, credit card checks, email-based age estimation, mobile network age checks and photo identification matching as acceptable forms of age verification under the law.
During a keynote presentation at the IAPP Navigate conference in Portsmouth, N.H., Ofcom Chief Executive Dame Melanie Dawes said the global push to implement age restrictions aimed at protecting children online was motivated, in part, by frustration over privacy concerns and downstream mental health issues stemming from some children's digital activity.
"The question becomes, when do you regulate? You regulate when you know that the incentives across the industry are just not going to deliver the outcomes for individuals or for society that will promise," Dawes said. "After 20 years of social media, we know now that those incentives are not in the right place, and that some form of regulation is needed."
Dawes indicated the government is pushing for an age verification system that is not based on a government-issued digital identification, but rather one whose requirements vary proportionately with the type of service offered. She said it is at times difficult for regulators to ensure children's safety online because they often do not have a sense of where children are seeing problematic content, as "there are very few companies that have effective age verification."
"I hope in five, 10 years' time, there are internationally-agreed standards on what age replication should look like, what content moderation should look like, and then countries can decide what they want their rules to be," Dawes said. "But there's some greater sense of expertise-led understanding of what standards can look like, and that's what I think we're driving at with age assurance in the U.K."
The U.K. is reluctant to pursue a digital ID system as comprehensive as the EU's, according to Dawes, because it would be difficult to implement without first resolving privacy concerns associated with securing a tokenized system that stores sensitive personal information.
"We are absolutely clear that it must preserve privacy, otherwise it's not an OK system," Dawes said. "We are required to prioritize freedom of expression, and so, that throws us some challenges, but the technology is providing solutions that help us manage these tradeoffs we try to solve all the time."
Canada takes wait-and-see approach
In Canada, the Office of the Privacy Commissioner held a three-month public consultation from June to September last year, which generated six major themes that will shape the OPC's approach to creating subsequent age verification guidelines. The themes are: differentiating between different forms and uses of age assurance technology; understanding the impacts of misuse; recognizing that age assurance is not the end goal in itself, but a means to a safer online experience for children; designating the party responsible for conducting age verification; taking extra precautions with age estimation technology; and establishing that age assurance should be subject to risk assessments.
Canada has also actively collaborated with the U.K. and EU member states to lay the groundwork for an international standard on age assurance and verification, including signing onto the "Joint Statement on a Common International Approach to Age Assurance" last September.
"Age-assurance methods can raise privacy implications related to the collection of sensitive personal information," Canada's Privacy Commissioner Philippe Dufresne said in a statement to the IAPP. "They should be designed to be accurate and effective, while ensuring user privacy and with children’s best interests in mind. Age assurance technologies should be risk-based and proportionate, in other words, used when necessary and proportionate to the aim sought, and in compliance with privacy principles."
In terms of digital ID, the government has outlined its commitment to launching a program as part of its broader Digital Ambition 2024–25 strategy. The effort is led by the Treasury Board of Canada Secretariat, and "aims to modernize service delivery using secure, reliable, and privacy-respecting digital tools," according to the strategy.
Safety versus free speech in the U.S.
The U.S. remains something of an outlier with respect to age verification requirements, as many children's online safety debates are framed as a balancing act between ensuring children can browse the web safely and First Amendment concerns over potential free speech violations.
The June Supreme Court decision in Free Speech Coalition v. Paxton established a limited baseline precedent for allowing states to impose age-restriction requirements, Center for Democracy and Technology Free Expression Project Director Kate Ruane said during a recent Congressional Internet Caucus Academy panel. The lawsuit involved the FSC, representing the adult entertainment industry, petitioning the court to overturn Texas House Bill 1181, which imposes age verification requirements on website operators if at least one-third of the content hosted on their sites is sexually explicit.
Texas ultimately prevailed in the case, and Ruane said use of virtual private networks within the state has skyrocketed to evade the new age restrictions for accessing adult content.
Ruane added that every single age verification system is "gameable," and those under consideration in various states could also exclude citizens who have a legal right to consume the targeted content, such as those without a government-issued ID or those who have been misidentified by a biometric system.
"Every system has an error rate. The question now is how much of an error rate can we tolerate to access constitutionally-protected speech? The answer used to be: Very little," she said. "Now, the answer may be different; at a minimum it includes whatever is in the Texas law, and we actually don’t know specifically what is a permitted age verification system in that instance … We need to be aware of these (errors) and build safeguards into statutes if we are going to pursue this path."
While federal legislative action has lagged in the U.S., multiple states have now passed or are drafting age-appropriate design codes, with most drawing to varying degrees on the U.K. Age Appropriate Design Code for their requirements. Nebraska and Vermont are the latest to pass theirs, joining California and Maryland.
However, many of these codes and other state-level online safety laws are subject to litigation on First Amendment grounds.
NetChoice Litigation Center founder and co-director Chris Marchese, speaking on the CICA panel, said his organization now has 20 active lawsuits against different states' technology regulations and anticipates more to come.
"Lawmakers at the state level have been very creative in trying to get at content they don't like, whether it's through the addiction causes of action, age-appropriate design codes, age verification, age verification with parental consent," Marchese said. "How do you prove you're someone's parent without uploading some very sensitive documents? As of right now, technology doesn't exist that makes me confident that (age verification) can be done in a constitutionally-protected way."
Most recently, NetChoice asked the U.S. Court of Appeals for the 9th Circuit to maintain the current block on enforcement of the California Age Appropriate Design Code after California Attorney General Rob Bonta asked the court to lift it. The state's appeal came after a U.S. District Court judge imposed a full injunction on the law in March, upon getting the case back from the 9th Circuit with instructions to conduct further analysis.
Will there be a federal approach?
Foundation for American Innovation Director of Technology Policy Luke Hogg said he supports the App Store Accountability Act sponsored by U.S. Sen. Mike Lee, R-Utah, which would require app store providers to conduct age verification and obtain consent from parents before minors download apps.
"It is a horrible system to make every platform and every app do their own age verification," Hogg said. "If this is done at the device level, the operating system level, you just need to verify one time and you're done. All the nitty, gritty happens on the back end and that will improve the user experience."
Hogg said the longer the U.S. fails to take comprehensive action to pass technology regulations, not exclusively limited to age verification and children's online safety, the more it risks having policy shaped by the EU, which has taken a more proactive approach to regulating technology. Still, he said some elements of EU member states' age verification policies could be replicated in future U.S. laws, such as France's requirement that verification be conducted through a "double-blind" system, sketched below.
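Here is a minimal sketch of the double-blind idea under stated assumptions: a site issues a random challenge, an age-check provider attests against it, and neither party learns anything about the other. This is a simplification for illustration, not France's actual protocol, and a keyed hash again stands in for the provider's real digital signature.

```python
import hashlib
import hmac
import secrets
from typing import Optional

# Illustrative double-blind flow: the provider learns the user's age but never
# which site asked; the site learns only pass/fail but never who the user is.
PROVIDER_KEY = secrets.token_bytes(32)

def site_issue_challenge() -> bytes:
    # Random bytes carry no information about which site issued them.
    return secrets.token_bytes(16)

def provider_attest(challenge: bytes, user_is_over_18: bool) -> Optional[bytes]:
    # Provider checks the user's verified age, never sees the destination site.
    if not user_is_over_18:
        return None
    return hmac.new(PROVIDER_KEY, challenge, hashlib.sha256).digest()

def site_verify(challenge: bytes, attestation: Optional[bytes]) -> bool:
    # Site confirms the attestation matches its challenge; no identity shared.
    if attestation is None:
        return False
    expected = hmac.new(PROVIDER_KEY, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, attestation)

challenge = site_issue_challenge()               # site -> user
attestation = provider_attest(challenge, True)   # user -> provider -> user
print(site_verify(challenge, attestation))       # True: access granted blindly
```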
"It's very concerning to what's coming out of the European Union, but I would caveat that there are some positive things that are happening when it comes to thinking about data protection, data privacy and the ways in which that is incorporated into age verification," Hogg said. "If the United States doesn't act, and this is about AI (and) technology generally, we are going to be forced into a box by the Brussels effect."
Alex LaCasse is a staff writer for the IAPP.