In the 18 months since the U.K. Children's Code came into force, its influence has been felt not only among organizations adapting their services in the U.K., but around the world, and the risk-based approach to recognizing users' ages at its core has helped drive the evolution of age verification.
On the keynote stage at the IAPP Data Protection Intensive: UK 2023, U.K. Information Commissioner John Edwards said the code has been a highlight of the organization's work over the past year, specifically pointing to tools, events and compliance guidance for game designers, which states that players' ages and needs at varying stages of development "should be at the heart" of design.
Age-appropriate standards are foundational to the U.K. code and to those modeled after it in the U.S. — from California's Age-Appropriate Design Code to similar regulations proposed in states including Connecticut, Maryland, Minnesota, New Jersey, Oregon and Texas.
"You need to know who a child is in order to apply the code to a child and to protect their data, and age assurance, in this case, also should be used to minimize risk for children," Information Commissioner's Office Head of Regulatory Strategy Michael Murray said at DPI: UK.
The U.K. Online Safety Bill is also anticipated to come into force by the end of the year; it would implement a duty of care to keep children safe online. Among other provisions, the bill contains risk-based requirements to assess whether a service is likely to be accessed by children.
Research by Ofcom found a majority of children under 13 had their own profile on at least one social media application or site, and one-third of children between the ages of 8 and 17 with a social media profile had signed up with a false birth date to make themselves appear older than they are.
"So what we need to do is find solutions that put the emphasis back on the service to design in a way that is age appropriate and to know who their users are," Murray said. "What we want to see is online services take a realistic, accurate, efficient and accountable view on age assurance. If they are processing kids' data, that's what they need to do to explain that processing and do it in a way that's not about the organizational corporate risk, but about the actual data protection and potential harms and using that risk assessment to determine the correct form of age assurance."
The evolution of age verification
Julie Dawson, director of regulatory and policy at digital identity company Yoti, said there is a "growing ecosystem" around age-verification methods, which range from identification documents, like passports and driver's licenses, to identity applications that verify age through an attached document and user image.
"As things have evolved, other methods have been looked at, such as the ownership of an email account that might have been an academic email account with certain years that account existed," she said. "Social media platforms have developed other techniques like looking at photos and the number of candles on a cake or the ages of other people you are associating with."
Artificial intelligence is increasingly being used for facial age estimation, Dawson said. Trained on millions of facial images, the technology conducts a pixel-level analysis of a face to estimate an individual's age.
"So, when it sees a new face, it does that analysis and just gives out the age. It's never uniquely identifying or authenticating an individual," Dawson said.
However, privacy advocacy groups have said age verification raises privacy and civil rights concerns. In particular, Big Brother Watch said using facial recognition software to estimate an individual’s age "inherently puts people’s privacy at risk."
"Social media sites already collect vast troves of deeply personal data," Advocacy Manager Mark Johnson said. "They should not be encouraged or compelled to collect even more through digital identity checks."
Gathering only data needed to verify age is a core principle digital rights organization 5Rights Foundation wants to see in age-verification methods. Head of Accountability Duncan McCann said the organization is working on creating a set of standards that would ensure methods are privacy-preserving, secure and inclusive.
"There’s a massive role for standards here," McCann said, adding that age verification is "not an end in and of itself," but "only the means to provide an age-appropriate service."
"If all the service is doing is logging the age of the user, then they haven't really met either the requirements (of the U.K. Children's Code), nor are they actually really providing an age-appropriate service," he said.
Among key concerns for 5Rights, McCann said, are the accuracy of age-verification technologies, children's ability to work around systems, and the use of age verification purely to keep children away from services they may want to access.
"The more it becomes purely a negative, purely a keeping out, the more we have to go to age assurance systems that are hyper-surveillant and potentially problematic. You have to create these extra checks, extra surveillances to make sure that the rules are being followed. So it's really important that age assurance is also used to provide kids with positive things," he said. "If age assurance is used to provide children with high levels of default privacy, better default settings, then that’s something that people want to engage in."
'A paradigm shift'
Murray said the ICO is starting to see examples of good practices in compliance with the U.K. Children’s Code, while McCann noted 5Rights has documented "hundreds of changes across the major platforms."
With the proposals in the U.S., Murray said "a paradigm shift" is underway and change is going to happen quickly. The ICO recently released draft guidance for services "likely to be accessed by children" within the scope of the Children's Code, including answers to frequently asked questions, information on how to assess whether children are likely to access a service and case study examples.
"The main thing we'd like to see is a more holistic approach to compliance," McCann said. "What we see is the changes that really make a difference are things like stopping adults from being able to contact children, providing better default privacy settings, providing nudges to take a break, none of these things broke any of the services, but, ultimately, provided children with a better, safer experience and those are the kind of things that we really want to see more of."