As children go online at younger ages and across multiple services and devices, age assurance — an overarching term that refers to the practices used by services to assess or estimate a user's likely age — has emerged as a way to balance the benefits of digital participation with protection from online harms. Within this framework, age assurance can range from lower-friction measures, such as self-declaration or behavioral analysis, to more robust forms of age verification that rely on third-party documentation or government-issued IDs. Despite this range of approaches, effectively enforcing age-based access rules remains a persistent challenge.
While most platforms set minimum age requirements, enforcement has often been inconsistent or easy to bypass, and the rise of VPN usage further complicates compliance. Even when age assurance mechanisms function as intended, their design can overlook important differences in children's developmental needs, applying a uniform standard that may inadvertently restrict some users or fail to protect others. Questions about who should own and operate age assurance mechanisms have also grown more complex, prompting debate among policymakers and industry stakeholders about exposure to harmful content and the implications for privacy and autonomy.
Responsibility for implementing and operationalizing these tools now rests with device manufacturers, app developers, operating system developers and third-party providers: actors central to discussions on protecting the privacy of children online. Each ownership model brings distinct benefits and limitations. Device-level solutions can provide consistent age controls across multiple applications but may not fully account for children's developmental stages or the context of specific content. App-based systems, on the other hand, can offer more tailored experiences for different age groups, yet often face interoperability challenges between apps and rely on accurate self-reporting and consistent age-appeal assessments.
In parallel, some jurisdictions are moving toward mandatory user registration as a prerequisite for accessing online services, recognizing that most age assurance models rely on identifiable user data to function effectively. These debates go beyond technology, raising important questions about children's privacy, consent, online safety, digital autonomy, and the developmental and neurocognitive needs of neurodiverse and LGBTQ+ youth, and highlighting the need for coordinated, multistakeholder approaches.
Implementing age assurance
There are a variety of ways that age assurance can be implemented, including the development of in-house systems, the use of third-party providers, the involvement of trusted adults to verify a young user's age, and reliance upon age assurance performed by other providers at the device, operating system, browser and platform levels. A framework for age assurance systems, set to be published as the international standard ISO/IEC 27566-1, sets out the different means of age assurance and considerations for privacy and security.
Third-party ecosystem
As regulatory requirements expand globally, third-party providers play a growing role in the implementation of age assurance mechanisms across devices, apps and websites. Rather than building these tools in-house, platforms increasingly rely on external services that offer scalable and interoperable solutions. These providers develop and maintain a wide range of techniques, from parental verification and tokenized or credential-based assertions to risk-based behavioral assessments and age estimation tools, designed to help platforms meet compliance obligations while supporting safer digital experiences for children.
In recent years, there has been a shift from traditional age verification tools, which check a user's details against a trusted database, toward age estimation technologies, which predict a person's age range using biometric markers. The effectiveness of these technologies in practice is uncertain: younger users may lack access to verifiable credentials, self-reporting is often unreliable and facial age estimation algorithms are less accurate for some demographics. Additionally, behavioral signals can be imprecise or context dependent, and they can overlook developmental variations among neurodiverse users, creating risks of under- and over-blocking that regulators and industry must continue to address.
Reliance on external providers also introduces important considerations around privacy, consent and accountability. Sensitive data, including age, may be collected as part of an age appeal process, such as a trusted adult's personal information used to authenticate the identity of a young user. Such information, including behavioral indicators of age, must be processed securely and in line with principles of consent, proportionality and data minimization. As standards evolve, third-party solutions should be evaluated not only on effectiveness but also on their ability to enable privacy-preserving, age-appropriate digital experiences, underscoring the continuing tension between protection, usability and trust.
Device level
Age verification at the device level involves confirming a user's age once through the device's operating system. OS-level proposals often store a user's age status locally; when the user attempts to access an age-restricted app or website, the device's OS shares a secure, privacy-preserving digital signal or age token, often via an application programming interface, with the platform. This signal indicates whether the user is above a certain legal age threshold, often either over age 18 or under age 13, without repeatedly disclosing their identity or birthdate to every service, ideally minimizing data exposure.
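To make the flow concrete, the sketch below shows how such an age token might look to a consuming app. The names here, DeviceAgeSignal, requestSignal and gateAccess, are illustrative assumptions, not any vendor's actual interface.

```typescript
// Hypothetical sketch of a device-level age signal; names are
// illustrative assumptions, not any vendor's actual API.

// The OS returns only a coarse assertion, never a birthdate.
interface DeviceAgeSignal {
  threshold: number;        // which threshold the assertion speaks to, e.g. 13 or 18
  meetsThreshold: boolean;  // whether the verified user meets that threshold
  signedToken: string;      // opaque token the platform can validate against the OS vendor's key
}

// How an age-restricted app might consume the signal: it learns a
// single boolean, not the user's identity or birthdate.
async function gateAccess(
  requestSignal: (threshold: number) => Promise<DeviceAgeSignal>
): Promise<boolean> {
  const signal = await requestSignal(18);
  return signal.meetsThreshold;
}
```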
OS level
For age verification offerings at the operating system level, major providers like Apple and Google would integrate age verification mechanisms directly into system setup or at the point when a user creates their own profile. These mechanisms may include requiring proof of age during device activation or account setup, such as by checking an official government-issued ID, using facial age estimation technology from a third-party provider or relying on self-attestation.
Once verified, similar to device authorization protocols, the OS would be responsible for securely storing this age status and providing a secure, real-time API that allows third-party platforms to query the age status in a privacy-respecting manner, rather than requiring the user to re-verify for every service. For example, California recently passed AB 1043, requiring device-makers like Apple and Google to collect users' ages and make this data available to apps. Similar to device-level authorization, OS-level verification measures often force users of all ages to unmask, effectively overriding the possibility of anonymity between the user and the OS.
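As a rough illustration of that storage-and-query pattern, the following sketch records a coarse age bracket once at setup and exposes only that bracket to querying apps. AgeBracket, AgeStatusStore and the bracket cut-offs are hypothetical, loosely modeled on the bracketed signals contemplated in recent legislation.

```typescript
// Rough sketch of the OS-side storage-and-query pattern; all names
// and bracket boundaries are assumptions for illustration.

type AgeBracket = "under13" | "13to15" | "16to17" | "18plus";

class AgeStatusStore {
  private bracketByProfile = new Map<string, AgeBracket>();

  // Called once, at device activation or profile creation; the
  // birthdate is reduced to a bracket and never stored.
  recordVerification(profileId: string, birthdate: Date, now = new Date()): void {
    const ms = now.getTime() - birthdate.getTime();
    const age = Math.floor(ms / (365.25 * 24 * 60 * 60 * 1000));
    const bracket: AgeBracket =
      age < 13 ? "under13" : age < 16 ? "13to15" : age < 18 ? "16to17" : "18plus";
    this.bracketByProfile.set(profileId, bracket);
  }

  // What a third-party app can query: the bracket, nothing more.
  queryAgeStatus(profileId: string): AgeBracket | undefined {
    return this.bracketByProfile.get(profileId);
  }
}
```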
Browser level
At the browser level, age verification is generally less about a persistent, pre-verified status and more about facilitating on-demand verification for the websites visited. In these offerings, browsers are expected to support third-party age verification services via plug-ins or extensions, by recognizing and processing age tokens or digital IDs previously established by a trusted party, like an OS or a dedicated identity provider. Some advanced, privacy-focused methods involve the use of zero-knowledge proofs, where the browser and an identity service can cryptographically prove to a website that the user meets an age threshold without revealing the user's birthdate or any other personal data. Browser storage or cookies may also be used to "remember" whether a user has completed an age check for a particular site, preventing repeated prompts within a single session.
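A minimal sketch of that "remember" behavior follows, assuming a hypothetical third-party check function; the key name is arbitrary, and sessionStorage scopes the result to the current browser session so the check does not persist indefinitely.

```typescript
// Sketch of session-scoped "remember" behavior for a completed age
// check; runThirdPartyCheck and the key name are assumptions.

const AGE_CHECK_KEY = "ageCheckPassed";

async function ensureAgeChecked(
  runThirdPartyCheck: () => Promise<boolean>
): Promise<boolean> {
  if (sessionStorage.getItem(AGE_CHECK_KEY) === "true") {
    return true; // already checked this session; skip a repeated prompt
  }
  const passed = await runThirdPartyCheck();
  if (passed) {
    sessionStorage.setItem(AGE_CHECK_KEY, "true");
  }
  return passed;
}
```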
Platform level
Platform-level age verification is the most common and traditional approach, where the website, social media service or app itself is responsible for confirming the user's age. This often starts with a low-friction age gate that requires the user to self-declare their birthdate. To mitigate the risk of young users intentionally altering their age, opportunities to change a birthdate at later stages are limited. For age-restricted content or services, platform-level age-gating may escalate to more detailed verification methods, such as uploading a photo of a government-issued ID document, which is then analyzed for authenticity; facial analysis to estimate age from a selfie; or cross-referencing user data, like name and address, against official databases.
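By way of illustration, a self-declaration gate of this kind reduces to a birthdate-to-age calculation and a threshold check; the function names below are assumptions for the sketch.

```typescript
// Sketch of a low-friction, self-declaration age gate; names are
// illustrative, not any platform's actual implementation.

function ageFromBirthdate(birthdate: Date, now = new Date()): number {
  let age = now.getFullYear() - birthdate.getFullYear();
  const birthdayPassed =
    now.getMonth() > birthdate.getMonth() ||
    (now.getMonth() === birthdate.getMonth() && now.getDate() >= birthdate.getDate());
  if (!birthdayPassed) age -= 1; // birthday not yet reached this year
  return age;
}

// Self-declared birthdates are easy to falsify, which is why platforms
// lock the value after entry and escalate to stronger checks for
// age-restricted content.
function passesAgeGate(declaredBirthdate: Date, minimumAge: number): boolean {
  return ageFromBirthdate(declaredBirthdate) >= minimumAge;
}
```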
Many platforms also offer parental consent mechanisms to verify the age of a child's parent/guardian and obtain verifiable permission for the minor to use the service. Of course, platform-level verification rests on the notion that all trusted adults, such as parents/guardians, are able and willing to serve as the arbiter for their child or teen's online experience, and that doing so does not itself pose a fundamental privacy risk or harm. If parents do not already use the platform when they enable parental controls on a young user's account, they may be required to create their own account, thereby registering and providing their personal information as part of the process.
Moving forward
Policymakers and practitioners alike must acknowledge and grapple with the tradeoffs inherent to age assurance. The actual implementation of age assurance, spanning devices, operating systems, browsers and platforms, is a complex balancing act, and this multilayered endeavor must respect the online protection, privacy and digital autonomy of children, teens and adults.
Disclosure: The opinions expressed in this article are the author's own, and do not represent the position or opinions of their employers.
Katelyn Ringrose, CIPP/E, CIPP/US, CIPM, FIP, is a privacy and security attorney at McDermott Will & Schulte.
David Sullivan is the executive director of the Digital Trust & Safety Partnership.
Basia Walczak and Melanie Selvadurai, CIPP/C, CIPM, contributed to this article.
