Editor's note: The IAPP is policy neutral. We publish contributed opinion and analysis pieces to enable our members to hear a broad spectrum of views in our domains.

In today's hyperconnected world, access to the digital environment has become an integral part of everyday life, including for children and teenagers. Social networks, entertainment platforms, games and educational, health and mobility services are central to how young people learn, communicate and interact. 

However, their growing digital presence has raised increasing privacy, safety and data protection concerns. Early and prolonged exposure to online content can bring serious risks, including contact with inappropriate material, pornography, cyberbullying, manipulation and abusive or misleading advertising.

To address these challenges, data protection authorities worldwide have intensified discussions on regulating the participation of minors in digital spaces. In Brazil, this debate was strengthened by the enactment of Federal Law No. 15.211/2025, known as the Digital Statute of the Child and Adolescent, which establishes comprehensive rules for protecting young people online, requiring providers of digital products and services intended for or accessible to children and teenagers to implement age-appropriate safeguards. 


Brazil's data protection authority, the Autoridade Nacional de Proteção de Dados, included the protection of children's and teenagers' personal data in its regulatory agenda for 2025-26. Reinforcing this agenda, the ANPD published the 5th edition of "Radar Tecnológico," dedicated entirely to age verification mechanisms. The report analyzes emerging technologies and international best practices that can guide Brazil in implementing safe and effective age verification solutions in the digital environment.

The material explains that an age assurance attribute indicates that an individual belongs to a specific age group. It can be determined through three main approaches: age estimation, age verification and age inference, which differ in their level of accuracy and in the type of information they rely on.

But how can someone's age actually be verified in practice?

Looking at the global picture, the ANPD document maps out the evolution of age verification through five generations. Each generation tells a story of new technologies emerging, regulations tightening and society's expectations shifting. 

First generation (2000–10): 'Self-declaration'

The first generation of age verification mechanisms relied on user self-declaration. Users were typically asked to provide basic information, such as their birthdate, or simply to confirm they were over 18, the age of majority in Brazil. This approach was characterized by its simplicity, low cost and minimal handling of sensitive data.

However, because it offered no objective means of verification, it could be easily bypassed, making it largely ineffective at preventing minors from accessing restricted content. Regulators such as France's data protection authority, the Commission nationale de l'informatique et des libertés, and the U.K.'s Ofcom consider this method minimally intrusive but unreliable, and therefore appropriate only for low-risk online services.

Second generation (2010–18): 'Document verification and biometrics'

The second generation followed the evolution of smartphones and the increasing digitization of financial and public services. 

During this period, official documents — such as ID cards or passports — began to be captured digitally and supplemented with an additional verification step. Users were often asked to provide a selfie or a short video, which was then analyzed by facial recognition algorithms to match the biometric data with the photo on the document. 

This process enabled the detection of fraudulent documents and established a secure link between the declared identity and the individual requesting access, significantly reducing the risk of unauthorized third-party use.

Third generation (2018–22): 'Biometric and behavioral estimates or inferences'

Driven by advances in artificial intelligence, this generation introduced facial recognition, voice recognition and other biometric methods capable of estimating characteristics such as age based on physiological cues. 

Intelligent algorithms analyze facial patterns and automatically infer age ranges, eliminating the need for official credentials. This approach is considered more practical in regulatory contexts as it reduces the need to collect and store personal documents. However, its accuracy depends on data quality and the diversity of the dataset used to train the AI.

Fourth generation (2022–25): 'Tokens and cryptographic proofs'

The fourth generation consolidates the idea that it is not necessary to expose an individual's complete identity to verify a single attribute, such as age. Advanced cryptographic technologies allow age attributes to be verified without revealing personally identifiable information. 

In practice, minimal credentials, such as cryptographic tokens, are issued and used selectively and securely. Proof of age is generated by an entity independent from the service where it is presented, with a responsible party validating the attribute and issuing the token. Once issued, the token alone is sufficient to prove age within a service provider's system. 
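This issue-and-verify flow can be sketched in a few lines of Python. The sketch below is illustrative only: it uses a symmetric key shared between issuer and verifier for brevity, whereas real deployments use asymmetric signatures so the relying service never holds the issuer's signing key. Note that the token carries only a boolean attribute and an expiry, never a name, document number or birthdate:

```python
import base64
import hashlib
import hmac
import json
import secrets
import time

# Held by the independent age-assurance issuer (simplification: shared key).
ISSUER_KEY = secrets.token_bytes(32)

def issue_age_token(over_18: bool, ttl_seconds: int = 300) -> str:
    """Issuer side: sign a minimal claim containing only the age attribute."""
    claim = {"over_18": over_18, "exp": int(time.time()) + ttl_seconds}
    payload = base64.urlsafe_b64encode(json.dumps(claim).encode()).decode()
    sig = hmac.new(ISSUER_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return payload + "." + sig

def verify_age_token(token: str) -> bool:
    """Relying service: check signature and expiry; learn only the attribute."""
    payload, sig = token.rsplit(".", 1)
    expected = hmac.new(ISSUER_KEY, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False  # tampered or forged token
    claim = json.loads(base64.urlsafe_b64decode(payload))
    return bool(claim["over_18"]) and claim["exp"] > time.time()
```

The short time-to-live means the token cannot be hoarded or replayed indefinitely, and because it contains no identifier, two presentations of different tokens cannot be linked back to the same person by the relying service.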

This approach enables robust verification while protecting user privacy, as it avoids the exposure of additional metadata or personal information. 

Fifth generation (2025 onward): 'Testbeds and integration into the technology ecosystem'

The fifth generation marks a shift from asking which method to use, to exploring how to verify, integrate and audit the entire age verification ecosystem with evidence and strong data protection. The goal is to ensure this ecosystem functions interoperably with standardized protocols, allowing the use of various auditable methods — documents, biometrics and encrypted tokens. 

Ideally, systems and browsers would have age verification as a native mechanism, receiving only the attribute as evidence and not the raw data itself. Encrypted, short-lived tokens that carry temporary age attributes help minimize risks of tracking or unnecessary personal data exposure. Integration with sovereign digital identities, such as the EU Digital Identity Wallet and the gov.br platform in Brazil, would also be possible. 

To enable all this to function in a standardized manner, "test environments" or "testbeds" have been developed. These are used to evaluate applications with the above-mentioned characteristics according to standardized criteria relating to accuracy, impartiality, anti-fraud resilience and privacy protection. 

Age verification and personal data protection 

After addressing which mechanisms can verify users' ages, and how, the ANPD document turns to the challenge of reconciling age verification with the protection of children's and teenagers' personal data, drawing on best practices, studies and regulatory frameworks from other countries.

Across these international references, several points of convergence can be identified. These include the alignment of verification measures with the level of risk associated with data processing; the minimization of personal data collection; the promotion of user transparency and awareness; and the prohibition of profiling, surveillance or secondary uses of data. Additionally, they emphasize the protection of informational self-determination and the promotion of inclusion and accessibility in verification tools.

In addition to these factors, the principles that should guide the implementation of age verification mechanisms in digital environments also converge. Guidelines from institutions such as the CNIL and the European Commission indicate age verification should be approached in an integrated and balanced manner, reconciling the protection of children and teenagers with their freedom to navigate the digital environment. Thus, systems must be proportionate to the level of risk, sufficiently robust to ensure effectiveness, and at the same time simple and accessible to users.

Among these convergent principles, the most significant are privacy by design; proportionality, ensuring the verification level matches the associated risk; necessity and purpose limitation, which prevent the secondary use of data; risk management and security, which safeguard data integrity, confidentiality, and availability; and nondiscrimination, interoperability and technical robustness, which promote standardization, consistency and scalability of the mechanisms.

One of the most anticipated developments in this field is the international standard ISO/IEC FDIS 27566-1, which seeks to establish a framework for age verification methods applicable to digital services. The standard is based on the principle that the degree of verification rigor should be proportionate to the level of risk the service poses to young audiences. Accordingly, platforms offering sensitive content — such as violent games or open social networks — should adopt more stringent verification mechanisms, while lower-risk services may employ simpler and more proportional approaches.
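The proportionality principle at the heart of the standard can be pictured as a simple mapping from risk tier to minimum verification method. The tiers and method names below are illustrative assumptions for this sketch, not the levels of assurance the standard itself defines:

```python
from enum import Enum

class Risk(Enum):
    LOW = "low"        # e.g., a general-audience informational site
    MEDIUM = "medium"  # e.g., an open social network
    HIGH = "high"      # e.g., violent games or adult content

# Illustrative tiering only; ISO/IEC 27566-1 defines its own assurance levels.
METHODS_BY_RISK = {
    Risk.LOW: "self-declaration",
    Risk.MEDIUM: "age estimation or cryptographic age token",
    Risk.HIGH: "verified credential from a trusted issuer",
}

def required_method(risk: Risk) -> str:
    """Return the minimum acceptable verification method for a risk tier."""
    return METHODS_BY_RISK[risk]
```

The point of the mapping is that rigor scales with risk: a low-risk service is not forced into intrusive document collection, while a high-risk one cannot hide behind self-declaration.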

Yet, despite this promising evolution, significant challenges remain. Achieving global standardization, managing high implementation costs, ensuring the reliability of digital wallets and attribute issuers, and preventing the exclusion of individuals without access to digital identity are all obstacles regulators and the industry must still address.

Without claiming to exhaust the subject, this is a current and expanding debate in which the role of privacy and data protection professionals will be essential to transform this challenge into an opportunity to promote responsible innovation.

Marina de Paula Souza Reis, CIPM, CDPO/BR, is a lawyer and Tiago Neves Furtado, CIPP/E, CIPM, CDPO/BR, FIP, is a partner at Opice Blum Advogados.