Editor's note: The IAPP is policy neutral. We publish contributed opinion and analysis pieces to enable our members to hear a broad spectrum of views in our domains.

This week's joint report into TikTok by a handful of Canada's privacy regulators is a timely reminder that age assurance sits at the awkward intersection of two things we care about: protecting kids from harmful content and protecting everyone's privacy, including and especially children.

I won't dive into all the details of the report itself, but I do think it's a useful launchpad because it crystallizes a tension that's hard to ignore. Among their recommendations, the commissioners are asking the platform to do more to keep children from using it. TikTok will make some updates to its age assurance practices and the regulators will let them — and hopefully the rest of us — know what they actually expect in concrete terms.

Generally speaking, I think we can all agree that the old "Are you old enough?" button on some websites is theatre, not safety. Yet the tools that actually work can feel like they demand a lot of personal information up front.

The core challenge is simple to describe and annoying to solve: age assurance systems are meant to exclude children from spaces that aren't appropriate for them, but to do that well they can become magnets for sensitive data. If the solution to prove you're old enough for something is a scan of a passport, a face and, while you're at it, a copy of your soul, we haven't respected privacy by design; we've just relocated the risk.

I think the better direction might be to treat age as an attribute, not an identity. If the service only needs to know you're over or under a certain age, then it shouldn't need your name, address or anything else that can be reused to profile you. When systems minimize data, use on‑device checks where possible and delete artifacts immediately after the check, the privacy risk starts to look more proportionate to the problem we're trying to solve. That's not only good privacy practice, it's better engineering hygiene.
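To make that concrete, here is a rough sketch in Python of what "age as an attribute" can look like: the service receives only a short‑lived, signed over‑or‑under claim from a trusted verifier and keeps nothing afterward. Every name and key in it is hypothetical, standing in for whatever credential scheme a real deployment would use; it illustrates the shape of the data flow, not any particular product.

```python
# A minimal sketch (hypothetical names and keys) of "age as an attribute":
# the service receives a signed over/under claim from a trusted verifier --
# no name, no birthdate, no document image -- and discards it after the check.
import hmac
import hashlib
import json
import time

VERIFIER_KEY = b"shared-secret-with-age-verifier"  # assumption: key agreed with the verifier

def issue_age_token(is_over_18: bool) -> str:
    """What the (hypothetical) verifier returns: a claim, not an identity."""
    claim = json.dumps({"over_18": is_over_18, "exp": int(time.time()) + 300})
    sig = hmac.new(VERIFIER_KEY, claim.encode(), hashlib.sha256).hexdigest()
    return claim + "." + sig

def check_age_token(token: str) -> bool:
    """What the service checks: signature and expiry only; nothing is stored."""
    claim, sig = token.rsplit(".", 1)
    expected = hmac.new(VERIFIER_KEY, claim.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False
    payload = json.loads(claim)
    return bool(payload["over_18"]) and payload["exp"] > time.time()

# Usage: the service learns one bit -- over or under -- and keeps no artifact.
print(check_age_token(issue_age_token(True)))   # True
print(check_age_token(issue_age_token(False)))  # False
```

The detail worth noticing is what never crosses the boundary: no name, no birthdate, no document image, just one bit of information that is checked and then discarded.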

Of course, no single method fits every risk. Higher risks justify stronger assurance, but proportionality should be the rule. Many regulators are increasingly explicit about this risk‑based approach.

In the U.K., for example, the Office of Communications has moved from euphemisms to expectations: services must deploy "highly effective" age checks to keep children away from pornography and other priority harms, and enforcement is no longer hypothetical. In the EU, the European Commission's guidance under the Digital Services Act pushes platforms to adopt protective defaults for minors and to use age assurance that is accurate, reliable and non‑intrusive.

Meanwhile, transparency is the social contract that makes any of this palatable. If a service is going to challenge you to prove your age, it should say, in one screen and plain language, what method it's using, what data it touches, where the data goes and when it disappears. That includes what it won't do with the data — no repurposing for advertising, no building a shadow profile, no "we might use this later for research."
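For illustration only, here is one way that one‑screen notice could be kept as structured data so it can be rendered, versioned and audited rather than buried in a policy document. Every field name and value below is invented for the example; the point is the habit, not the schema.

```python
# A rough sketch of the "one screen, plain language" idea: keep the disclosure
# as structured data so it can be shown, audited and versioned. All field names
# and values here are illustrative, not drawn from any specific service.
AGE_CHECK_DISCLOSURE = {
    "method": "facial age estimation, on device",
    "data_touched": ["camera frame (processed locally)"],
    "data_leaves_device": False,
    "retention": "deleted immediately after the over/under result",
    "will_not": ["advertising", "profiling", "later research reuse"],
}

def render_notice(d: dict) -> str:
    """Turn the structured disclosure into the plain-language screen a user sees."""
    lines = [
        f"How we check your age: {d['method']}.",
        f"What we use: {', '.join(d['data_touched'])}.",
        "Does it leave your device: " + ("yes" if d["data_leaves_device"] else "no") + ".",
        f"How long we keep it: {d['retention']}.",
        f"What we will never use it for: {', '.join(d['will_not'])}.",
    ]
    return "\n".join(lines)

print(render_notice(AGE_CHECK_DISCLOSURE))
```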

And the system must be fair in the ordinary, human sense of the word. People mis‑tap and mistype, families share devices, teenagers look like adults and adults look like teenagers. There needs to be a straightforward appeal path for false negatives, a way to proceed if you don't have a passport handy and accessibility for those without credit files or high‑spec phones. Otherwise, we've created a shiny new barrier that locks out the wrong people.

Independent oversight matters too. If an organization buys or builds an age assurance system, it shouldn't be offended when someone asks for proof that it actually works and that it's safe to use. That means third‑party evaluation, documented false‑accept/false‑reject rates across different demographics and audited deletion practices. If this sounds like dull paperwork, that's because it is, but it's also why it builds trust. I think quietly proving a system does what it says is still the most underrated compliance strategy on the planet.
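To show the kind of evidence I mean, the toy sketch below computes false‑accept and false‑reject rates broken down by demographic group from labeled test outcomes. The groups and records are invented, and the arithmetic is just the standard definition of those rates, not any particular auditor's methodology.

```python
# A minimal sketch of the evidence an evaluator might ask for: false-accept and
# false-reject rates per demographic group. Records are toy data in the form
# (group, truly_over_threshold, system_said_over).
from collections import defaultdict

records = [
    ("group_a", True, True), ("group_a", True, False), ("group_a", False, False),
    ("group_b", False, True), ("group_b", True, True), ("group_b", False, False),
]

def rates_by_group(rows):
    stats = defaultdict(lambda: {"fa": 0, "under": 0, "fr": 0, "over": 0})
    for group, truly_over, said_over in rows:
        s = stats[group]
        if truly_over:
            s["over"] += 1
            if not said_over:
                s["fr"] += 1   # false reject: an adult locked out
        else:
            s["under"] += 1
            if said_over:
                s["fa"] += 1   # false accept: a minor let through
    return {
        g: {
            "false_accept_rate": s["fa"] / s["under"] if s["under"] else None,
            "false_reject_rate": s["fr"] / s["over"] if s["over"] else None,
        }
        for g, s in stats.items()
    }

print(rates_by_group(records))
```

A real evaluation would obviously use far larger samples and carefully defined demographic categories, but the deliverable looks like this: numbers you can publish and defend, broken down so disparities can't hide in an average.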

For those trying to operate across borders, the policy context is … varied. The U.K. has its Online Safety Act regime moving through codes and enforcement. The EU's DSA guidance is nudging platforms toward measurable protections and non‑intrusive age checks. In the U.S., several state‑level attempts to regulate age‑appropriate design have run into constitutional headwinds. California's law, for instance, has been repeatedly enjoined, so providers face a patchwork with real First Amendment tension.

In Canada, we could end up seeing federal reforms that would treat the personal information of minors as sensitive by default, tilting expectations toward stronger diligence and clearer explanations when kids are involved. No one is requiring perfection, but the general direction is clear: lighter‑touch checks for lower risk, robust checks where harm is acute and privacy protections throughout.

Where does this leave us? The TikTok report is one step toward setting expectations even if it doesn't settle the argument. The way through the dilemma isn't to choose between children's safety and privacy; it's to design age assurance so we get both, most of the time, with the least amount of data and the clearest possible rules.

Guardrails, guidance and a stable legal structure won't magically eliminate hard choices, but they will give organizations confidence that doing the right thing won't land them in hot water somewhere else. And if, like me, your instinct is to raise an eyebrow whenever a new widget asks for more data, that's healthy. (You know what I mean, because I don't actually have any eyebrows.)

Anyway, the trick is turning that healthy skepticism into practical design: show your work, keep what you collect to the bare minimum, delete it quickly and be prepared to prove it. If we can normalize those habits, maybe age assurance will finally grow up. I am really interested in seeing how the rest of this story unfolds; it actually began when my kids were young, and now they're all grown up.

Kris Klein, CIPP/C, CIPM, FIP, is the country leader, Canada, for the IAPP.

This article originally appeared in the Canada Dashboard Digest, a free weekly IAPP newsletter. Subscriptions to this and other IAPP newsletters can be found here.