Editor's note: The IAPP is policy neutral. We publish contributed opinion and analysis pieces to enable our members to hear a broad spectrum of views in our domains.
Lawmakers and regulators around the world are intensifying their focus on protecting children and teenagers online. Although the goal of protecting the privacy and safety of young people online is widely shared, approaches to achieving it vary significantly across jurisdictions.
A pivotal point of debate is whether online service providers should be explicitly required to implement age assurance mechanisms to estimate, verify or confirm users' ages, and then tailor users' online experiences accordingly.
United Kingdom
The primary regulatory requirements for age assurance in the U.K. are contained in the Age Appropriate Design Code 2021 and the Online Safety Act 2023.
Age assurance plays a significant role in keeping children and their personal information safe online. The term describes the approaches and tools used to estimate or verify a user's age so that services can be tailored to children's needs or access restricted where required.
The U.K. AADC — the first statutory code of practice for protecting children's data — is regulated by the U.K. Information Commissioner's Office and sets out 15 design standards to protect children's data. It takes a risk-based approach to age assurance, requiring services to apply its standards effectively to child users and to establish or estimate users' ages with a level of certainty commensurate with the risks posed.
The Online Safety Act, which is overseen by Ofcom, requires "highly effective" age assurance for services that display or publish pornographic or other harmful content, to ensure children are not able to access it. The ICO and Ofcom have published extensive guidance on what constitutes effective age assurance, making clear that the requirements are intended to be flexible and technology neutral, and that methods must be reliable and fair.
Ofcom guidance suggests methods such as photo ID matching, facial age estimation, mobile network operator age checks, credit card checks, digital identity services and email-based age estimation as capable of being highly effective. Methods such as self-declaration of age are not considered effective.
European Union
Several EU member states have enacted laws explicitly requiring online service providers to implement age assurance mechanisms to protect minors from harmful or inappropriate content, typically defined to include pornographic, violent or otherwise developmentally inappropriate material.
For example, Ireland's Online Safety and Media Regulation Act 2022 requires covered video-sharing platform services to implement effective age assurance measures to prevent children from accessing "adult-only video content," which encompasses pornography and content depicting extreme or gratuitous violence.
Ireland's Online Safety Code stipulates that age assurance mechanisms must be robust and that self-declaration of age is insufficient. Acceptable methods include age verification and age estimation technologies that reliably determine a user's age without solely relying on user-provided information. The code also underscores the importance of ensuring these measures are designed with privacy safeguards appropriate to the sensitivity of the data and the potential risks to minors.
As another example, France's SREN law — the law to secure and regulate the digital space — seeks to promote digital safety by mandating age verification for access to pornographic content and requiring parental consent for minors under 15 to create social media accounts.
Interestingly, France's data protection authority, the Commission nationale de l'informatique et des libertés, has issued detailed guidance emphasizing the need to balance youth protection with privacy and data minimization. It calls for age verification systems to be structured around principles such as proportionality, robustness and the use of independent third parties.
The CNIL discourages methods that involve direct identity collection by content publishers or reliance on biometric data unless specifically authorized by law. Instead, it favors privacy-preserving models — such as cryptographic proofs of age — that enable users to verify eligibility without disclosing their identity.
For high-risk contexts such as access to pornographic sites, the CNIL has advocated for a double-blind architecture, where one entity verifies age and another grants access, preventing either from linking identity to browsing behavior.
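To make that separation concrete, the following is a minimal, purely illustrative sketch in Python (using the third-party cryptography package) of how such a split might look. The AgeVerifier and ContentSite names, the token format and the expiry window are assumptions for illustration, not a description of any certified system: the verifier attests only "over 18" with a signed, short-lived token carrying no identity attributes, and the site admits the user based solely on that signature, without learning who the user is or contacting the verifier at presentation time.

```python
"""
Illustrative sketch of a double-blind age check: one entity verifies age,
another grants access, and neither can link identity to browsing behavior.
All names and the token format are hypothetical.
"""
import json
import time

from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature


class AgeVerifier:
    """Checks the user's age out of band and issues an anonymous attestation."""

    def __init__(self):
        self._key = Ed25519PrivateKey.generate()
        self.public_key = self._key.public_key()

    def issue_token(self, over_18: bool) -> dict:
        # The token asserts only the age claim and an expiry -- no name,
        # no document number, no record of where it will be presented.
        claim = json.dumps({"over_18": over_18, "exp": int(time.time()) + 600}).encode()
        return {"claim": claim, "sig": self._key.sign(claim)}


class ContentSite:
    """Grants access based solely on the signed claim, never seeing identity data."""

    def __init__(self, verifier_public_key):
        self._verifier_key = verifier_public_key

    def admit(self, token: dict) -> bool:
        try:
            self._verifier_key.verify(token["sig"], token["claim"])
        except InvalidSignature:
            return False
        claim = json.loads(token["claim"])
        return claim["over_18"] and claim["exp"] > time.time()


# Usage: the user obtains a token from the verifier, then presents it to the site.
verifier = AgeVerifier()
site = ContentSite(verifier.public_key)
token = verifier.issue_token(over_18=True)
print(site.admit(token))  # True
```

A production system would go further than this sketch, for example by making tokens single-use or unlinkable, relying on certified third-party providers and supporting revocation, which is where much of the CNIL's guidance is focused.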
While acknowledging that current solutions remain imperfect and potentially circumventable, the CNIL encourages the development and certification of third-party systems that offer verifiable assurances without compromising user anonymity.
The CNIL's approach highlights the degree to which effective age assurance must be integrated with core data protection principles rather than treated as an exception to them.
United States
Numerous U.S. states have enacted, or are seeking to enact, explicit age assurance requirements. One example is Texas' Securing Children Online Through Parental Empowerment Act, which requires covered digital service providers to collect and register the age of every individual attempting to create an account.
Although federal courts have enjoined enforcement of several substantive provisions of the SCOPE Act on constitutional grounds, including those related to content filtering, advertising restrictions and age verification for accessing certain materials, the age registration provision in Section 509.051 has not expressly been enjoined. As a result, while much of the statute is currently unenforceable, covered digital service providers may still be obligated to comply with the age registration requirement, unless and until a court rules otherwise.
Another example is Louisiana's Secure Online Child Interaction and Age Limitation Act, which would require covered social media companies to use commercially reasonable efforts to verify the age of Louisiana account holders with a level of certainty appropriate to the risk posed by their data practices. Although the law is scheduled to take effect 1 July, it is currently the subject of a constitutional challenge seeking to enjoin its enforcement.
Yet another example is California's Age-Appropriate Design Code Act, modeled after the U.K. AADC. This statute would have required covered businesses to estimate the ages of users and tailor their online experiences accordingly, or else treat all users as minors. However, a federal court enjoined the statute in its entirety on First Amendment grounds. The California Attorney General has appealed the decision.
The court's ruling reflects broader concerns in the U.S. about the potential chilling effects of age-gating requirements on free expression, as well as unresolved tensions between age assurance mandates and user privacy expectations.
Separately, some U.S. states may indirectly impose age assurance obligations on companies deemed to have willfully disregarded or failed to appropriately investigate users' ages. For example, the California Consumer Privacy Act imposes prescriptive requirements on knowingly selling certain minors' personal information or sharing it for cross-context behavioral advertising, and provides that a "business that willfully disregards the consumer's age shall be deemed to have had actual knowledge of the consumer's age."
As another example, Maryland's Age-Appropriate Design Code Act seeks to impose various data privacy requirements on controllers that "should have known" the user is under age 18. Although the law is scheduled to take effect 1 Oct., it is currently the subject of a constitutional challenge seeking to enjoin its enforcement.
These statutory frameworks highlight how age assurance can serve as a trigger for broader privacy obligations, particularly where minors' personal data is involved.
A patchwork of additional laws in around 20 states targets online access to obscene material by minors and requires covered operators to verify the ages of users before permitting them access. The constitutionality of some of these laws is now before the U.S. Supreme Court, which is expected to clarify the permissible scope of state-mandated age verification under the First Amendment.
The outcomes of these constitutional challenges may further define the boundaries of permissible age assurance requirements under more general U.S. online privacy and safety laws. Until those decisions are issued, online service providers operating nationally face legal uncertainty not only about what mechanisms are permissible, but also how to calibrate them to minimize unnecessary data collection.
Canada
Canada has not yet adopted statutory age assurance requirements for online service providers. However, the Office of the Privacy Commissioner of Canada has taken an active role in shaping the policy conversation, most recently through a 2024 exploratory consultation on age assurance.
In that consultation, the OPC recognizes that age assurance can support child safety by enabling more tailored protections or limiting access to harmful content, but stresses that any such measures must be developed with strong privacy safeguards. The OPC discourages broad deployment of identity verification systems for general-use services, warning that such practices risk normalizing intrusive data collection.
Instead, the OPC urges organizations to consider alternatives such as applying child-appropriate protections to all users or empowering parental controls at the device level. The consultation also highlights risks related to data minimization, function creep and unintended exclusion, particularly for youth without access to government-issued ID or for whom biometric systems may be less accurate.
The OPC's guidance reflects a cautious stance: any move toward age assurance must be both demonstrably necessary and rigorously protective of users' privacy and dignity. While Canada does not currently require online service providers to implement age verification or estimation measures, the OPC has signaled that it may issue further guidance, and encourages organizations to take a proportionate, privacy-by-design approach where age assurance is contemplated.
Outlook
Across jurisdictions, age assurance remains a fast-evolving and contested area of regulation, but a common theme is emerging — organizations are expected to assess risks contextually and calibrate their practices accordingly.
Services that may expose minors to elevated safety or privacy harms are increasingly expected to demonstrate that they have implemented meaningful safeguards, including age assurance mechanisms that are proportionate to those risks. At the same time, regulators are signaling that indiscriminate or overly invasive age checks — particularly those involving biometric or identity data — may create new privacy and equity concerns of their own.
Online service providers must therefore balance competing imperatives: protecting young users, complying with divergent legal frameworks, and upholding privacy-by-design principles.
Organizations grappling with these issues should be prepared to revisit their age assurance strategies, particularly in light of mounting enforcement activity and the prospect of new technical standards emerging in the months ahead.
Those responsible for trust and safety, product or privacy functions will need to navigate not only the legal complexity but also the practical and ethical tradeoffs posed by different age assurance approaches.
Jonathan Tam, CIPP/C, CIPP/US, is a tech and privacy partner in Baker McKenzie's San Francisco office and chair of the San Francisco Bar Association's Cybersecurity & Privacy Law Section. Elizabeth Denham CBE is international advisor to Baker McKenzie's data and technology practice, a 5Rights Foundation trustee, and chair of the Jersey Data Protection Authority.