Mind the gap: Understanding age verification and assurance


Contributors:
Katelyn Ringrose
CIPP/E, CIPP/US, CIPM, FIP
Privacy and Cybersecurity Senior Associate
McDermott Will & Schulte
David Sullivan
Executive Director
Digital Trust and Safety Partnership
Basia Walczak
Privacy and Product Counsel
Axiom - Airbnb Secondee
Melanie Selvadurai
CIPP/C, CIPM
Privacy Program Manager
TikTok
As children go online at younger ages and across multiple services and devices, age assurance — an overarching term that refers to the practices used by services to assess or estimate a user's likely age — has emerged as a way to balance the benefits of digital participation with protection from online harms. Within this framework, age assurance can range from lower-friction measures, such as self-declaration or behavioral analysis, to more robust forms of age verification that rely on third-party documentation or government-issued IDs. Despite this range of approaches, effectively enforcing age-based access rules remains a persistent challenge.
While most platforms set minimum age requirements, enforcement has often been inconsistent or easy to bypass, and the rise of VPN usage further complicates compliance. Even when age assurance mechanisms function as intended, they can overlook important differences in children's developmental needs, applying a uniform standard that may inadvertently restrict some users while failing to protect others. Questions about who should own and operate age assurance mechanisms have also grown more complex, prompting policymakers and industry stakeholders to weigh exposure to harmful content against the implications for privacy and autonomy.