During a LinkedIn Live session on 27 Sept., IAPP Research and Insights Director Joe Jones discussed the latest legal, policy and enforcement developments, as well as compliance considerations, for children's privacy in the EU, U.K. and U.S. with Baker McKenzie's Elizabeth Denham, Lothar Determann and Jonathan Tam. While panelists addressed several viewer questions during the discussion, the following questions and answers are those that could not be covered during the live panel.
Editor's note: Respondents are speaking on their own behalf, and not on behalf of their law firm, law schools, clients, or other organizations.
Viewer: Are recent fines — including a 345 million euro fine against TikTok by Ireland's Data Protection Commission — large enough? Often, we see big companies just take risks, pay fines and move on without much change.
Determann: The hefty fines recently imposed under the EU General Data Protection Regulation for the processing of children's data seem inappropriate and ineffective for a number of reasons. I mean this in general, and without commenting on any particular case or company.
It is telling that the competent authority in several cases did not consider it appropriate to impose fines at this stage and instead wanted to work with industry on a more collaborative basis. First, the legal requirements relating to the processing of children's data are quite unclear and not uniformly applied across Europe, even though the GDPR and its predecessor laws have been in effect for a long time, some for 50-plus years.
Second, the GDPR is intended to protect "rights and freedoms" of data subjects, not merely privacy — a word that does not appear even once in the otherwise extremely wordy regulation. Outsized punishments for companies that allow children to use their free-of-charge services are likely to limit children's access to such services going forward, thereby excluding children from rights and freedoms more than protecting them.
Third, fines can only be effective when they penalize clear violations of clear laws. Governments impose fines for a variety of reasons, including to deter future violations by the fined company, to deter others from committing the same violation and to punish the offender. Fines can achieve these objectives only if the law is sufficiently clear and developed.
This is not yet the case with respect to many details of most children's privacy laws. Applicable age limits vary across jurisdictions and even within the EU — from 13 to 16 under Article 8 of the GDPR, but only where companies rely on consent in connection with "information society services"; otherwise, national rules on contract formation prevail. In Germany, minors can form "everyday contracts" when they turn 7 years old.
Guidance from authorities on how much personal information can, should or must be collected from users has been all over the place, ranging from Germany's requirement that online platforms allow anonymous use of online services to other authorities' warnings that companies should not insist on "hard IDs" unless absolutely necessary.
While such ambiguities and policy inconsistencies prevail, authorities should first cooperate with companies to develop reasonable policies and measures, then harmonize and clarify official guidance, issue clear and specific regulations, warn particular offenders about clear-cut violations, and only as a last resort impose fines. My sense is that authorities in the EU skipped ahead to the last step — imposing fines — and left companies guessing about what they can or should do under the GDPR and national laws in various member states. The size of a fine alone cannot achieve policy purposes if legal requirements remain unclear and a work in progress. Larger, successful companies can probably absorb the fines and move on, but smaller companies and potential new market entrants will be deterred, which may lead to less competition, less funding for innovation in Europe and reduced availability of online services to children. In their interest, I hope the recent fines will be appealed and thrown out in court.
Denham: I agree that fines are not always the most effective regulatory tool to change behavior and move business models toward increased protection for individuals and groups, including children. When I was information commissioner in the U.K., I often said fines should be the last resort. However, regulators have a duty and an imperative to use all the tools in their toolbox when undertaking their oversight.
Without commenting on any specific fine or action in the context of children's privacy, I would note that the privacy engineering and "by design" approach of the U.K. Age-Appropriate Design Code ushered in more robust changes in large platforms and services than any single fine. When the code came into effect, global players made significant design changes, many of which were rolled out globally.
Viewer: Should the onus of age assurance be on telecommunications or broadband service providers, or the makers of smartphones and other devices?
Tam: I don't think so. When someone goes to a telecommunications or broadband service provider to procure internet access, they could be doing so on behalf of a household or organization. If the onus were on telecoms and broadband service providers, they would need ongoing, up-to-date information about everyone who could access the internet through that person's account before provisioning network access, which would add an inordinate amount of friction to the process of procuring, providing and maintaining internet access. The same goes for smartphones and other devices, which may be used by multiple users and may have many different owners throughout their life cycles.
Telecoms companies and device manufacturers may have a role to play in harm reduction, and coordination among different organizations is important to effective age verification, but I think that the operators of online services are best positioned to select and administer an age-assurance process that is appropriate to the risks raised by their online services.
Determann: Collaborating with gatekeepers could help limit the number of companies that have to collect age-assurance information and simplify choices and disclosures for consumers. In 2012, then-California Attorney General Kamala Harris brokered an agreement with tech platform operators on mobile app privacy settings that has been very successful. If parents could ensure that devices, routers, phone lines, internet access services and other accounts set up for their kids broadcast signals to block age-inappropriate content, online service providers whose services are accessed with those devices could adhere to such signals and might not have to collect much additional information. But such signals could also create additional safety risks, kids will surely find ways to circumvent them, and this approach is less promising for mixed-use devices or services.
Viewer: Is "two-factor" age assurance a thing?
Determann: Yes, companies often can and should observe multiple factors, including the age a user gives in response to neutral prompts, any age the same user specified in other contexts — in the context of other services offered by the same company, for instance — and other age clues, such as in user-generated content or requests.
Tam: Considering the whole picture is important to comply with U.S. law. U.S. Federal Trade Commission guidance regarding the Children's Online Privacy Protection Act notes that an organization may be considered to have actual knowledge of a child's age, and therefore be subject to COPPA, "where a child announces her age under certain circumstances, for example, if you monitor user posts, if a responsible member of your organization sees the post, or if someone alerts you to the post (e.g., a concerned parent who learns that his child is participating on your site)." State privacy laws such as the California Consumer Privacy Act also state that a person is deemed to have actual knowledge of a child's age if the person willfully disregards their age.
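Editor's note: For illustration only, the following is a minimal sketch in Python, with entirely hypothetical signal names and thresholds, of how an online service might combine several such age signals into a single flag for additional age-assurance review. It is not a compliance tool and does not represent any panelist's recommended implementation.

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class AgeSignals:
        """Hypothetical age signals an online service might already hold."""
        self_declared_age: Optional[int]       # age entered at a neutral prompt
        age_on_related_service: Optional[int]  # age given for another service of the same company
        content_suggests_minor: bool           # e.g., a classifier flag on user-generated content

    def needs_additional_assurance(signals: AgeSignals, adult_age: int = 18) -> bool:
        """Flag an account for extra age-assurance steps when declared ages
        conflict or any signal suggests the user may be a minor."""
        declared = [a for a in (signals.self_declared_age, signals.age_on_related_service) if a is not None]
        if any(a < adult_age for a in declared):
            return True   # at least one declared age is below the adult threshold
        if len(declared) == 2 and declared[0] != declared[1]:
            return True   # the user gave inconsistent ages in different contexts
        if signals.content_suggests_minor:
            return True   # behavioral clue points to a younger user
        return False

    # Example: consistent adult declarations, but content suggests a minor, so flag for review.
    print(needs_additional_assurance(AgeSignals(21, 21, True)))  # prints: True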
Viewer: Are there any artificial intelligence solutions that can detect a child's age from the manner of texting, word choice or the video games played?
Determann: Yes, AI is good at pattern recognition and could flag accounts that raise concerns for additional age-assurance measures.
Denham: I agree that AI solutions within companies and the maturation of the age-estimation industry, along with standards, will assist in the determination or estimation of age — and will ultimately be more privacy protective than the collection of hard identifiers.
Viewer: How do we protect children's privacy online without invading online privacy?
Determann: Governments, companies, parents, schools and children have to consider trade-offs and weigh risks and benefits of each online service individually.
Tam: A balancing act is necessary, for sure. What constitutes an "invasion" of privacy depends on many factors, including the reasonable expectations of the data subject, and in some cases, their parent.
As a parent, I would find it reasonable for an online service provider to process personal information about my child as necessary to protect them from clearly harmful content, contacts and conduct, even if this means collecting more information upfront so that the online service provider knows it's dealing with a child.
Denham: The debate around tensions between age assurance and data privacy is often too binary. It is not a case of privacy versus safety; it is a case of finding the appropriate balance between the two, given the specific services and user populations. I firmly believe that if a company can present evidence of its risk assessment and thought process in rolling out an age-assurance approach, that will be considered by the regulator in the case of an intervention or complaint investigation. Companies that collect the minimum amount of personal information necessary to determine the age-assurance experience will be in good shape.
Viewer: Are companies expected to adapt content accessibility for different ages of children, like under 13 and 13 to 18?
Determann: Yes, under the U.K. Children's Code and the California Age-Appropriate Design Code Act (CAADCA). On 18 Sept. 2023, however, a federal court in California issued a preliminary injunction against enforcement of the CAADCA because the court found the statute likely unconstitutional in large part.
Denham: Yes, one of the main objectives of the U.K. Age-Appropriate Design Code and California's code is to provide children with the most age-appropriate experience, not to keep them off the internet. There is also a policy goal to respect and build children and young people's agency online. This approach recognizes the principles and purposes in the United Nations Convention on the Rights of the Child.
Children should have the same opportunities online as they enjoy offline. For generations we have developed policies and laws to protect minors, with the understanding that, at particular ages, they are competent to decide for themselves, consent for themselves, and participate in services and opportunities appropriate for them.
Viewer: Does COPPA preempt state laws?
Determann and Tam: Federal laws generally preempt state laws that contradict or conflict with their rules. For example, COPPA would preempt a state law that sets the age threshold for parental consent at 7 years old, since COPPA sets that threshold at 13 (i.e., covered entities must obtain parental consent to process the personal information of a child under the age of 13). But COPPA does not contain an expressly broad preemption clause like, for example, Section 230 of the Communications Decency Act. On 18 Sept. 2023, a federal court in California issued a preliminary injunction against enforcement of the CAADCA on constitutional grounds and also did not find COPPA violations.
Viewer: Are provisions in the U.K.'s Online Safety Bill, and the concept of breaking encryption by requiring messaging apps to scan for illegal content where "technically feasible," necessary and justifiable to protect children's privacy?
Determann: Requirements that companies break encryption are generally very problematic from a security and privacy perspective. Where companies already decrypt communications for other purposes — like deep packet inspection for safety and anti-fraud measures — it may be more palatable to add a requirement to also watch out for children's safety and privacy. But by demanding encryption backdoors, governments probably do more harm to the privacy and security of children and adults than they do good in protecting children's privacy.
Denham: This is a challenging debate in the U.K., EU and other jurisdictions around the world. There are other measures and approaches to protect children online — through comprehensive policies and practices, and cooperation with law enforcement agencies — that address safety issues rather than creating back doors for this purpose. Experts in security have long argued that building back doors for law enforcement creates significant risks for the security of our systems, as hackers can exploit the same back doors.
Viewer: Is the age of 18 for parental consent appropriate, for example, in India's new Digital Personal Data Protection Act, given that India is a country with such a vast data bank and so many teenagers using digital platforms?
Determann: Lawmakers and policymakers around the world have not found coherent or uniform answers to questions of appropriate age thresholds for parental consent, not just concerning data processing but also in many other areas, including alcohol consumption, joining the army, marriage and reproductive health.
Italy's data protection authority, the Garante, recently insisted, in an enforcement action against a company, on a threshold of 18 for parental consent for online services, even though there is no clear basis for this in EU or Italian law. Most jurisdictions officially or informally suggest or tolerate lower age thresholds, and children regularly use online services in schools at much younger ages. Amid so much confusion and inconsistency, it seems particularly inappropriate to slap large fines on companies that allegedly failed to age-gate appropriately.
Viewer: Should the burden be more on parents instead of companies? Or schools?
Determann and Tam: Lawmakers have different views on this, but we personally believe parents and schools have to take responsibility, too. They know the capabilities, needs, and vulnerabilities of particular children, and are responsible for their well-being.
The U.K. Children's Code and the CAADCA include a role for parents but generally require companies to design their services in a way that proactively protects minors' privacy and safety. Some lawmakers and policymakers even express concern about risks of harm to children from their own parents, and the CAADCA would specifically require online service providers to warn children if their parents track them.
By contrast, some U.S. laws emphasize parents' responsibility to act as gatekeepers to their child's internet access. One notable example is the Utah Social Media Regulation Act, which requires social media companies to provide a parent access to the content and interactions of an account held by their child under 18.
Companies should, therefore, carefully assess the statutes that apply to them in the various geographies in which they have users, as the requirements can vary greatly from jurisdiction to jurisdiction.
Denham: In research Baker McKenzie conducted in 2022, interviewing over 1,000 policy influencers in six jurisdictions around the world, respondents agreed that we need to create a safer place for kids online. But they did not agree on who is responsible for this.
For example, policy influencers in the U.K. strongly believed that the companies providing the services were responsible for proactively building their services with children in mind. In the U.S., there was a strong belief that parents played the most important role. These policy preferences are represented in legislation and soft law initiatives around the world. This issue is an example of how culture plays into the drafting and implementation of the law.
Viewer: In the employment context, when we collect and process employee information, we also collect dependent data that can pertain to a child. In that case, do we still need consent from the employee under the GDPR or can we process dependent children's data under other lawful bases, such as performance of contract or legitimate interests?
Determann: Companies should seek consent from data subjects where this is expressly required by law or where they can offer true choices. This is often not the case in the employment context. Where employers process dependent data to provide statutorily required benefits, such as health insurance, the employer can rely on the respective statutes as a lawful basis for data processing. With respect to truly optional benefits, however, which an employee may accept or decline, it may be appropriate to seek voluntary consent and require employees to confirm that they are authorized to declare consent on behalf of their children.
Viewer: What are your thoughts on the vague Kids Online Safety Act, which passed the U.S. Senate Committee on Commerce in July?
Determann: U.S. legislators should take a step back and carefully analyze the opinion of the federal court in California that issued the 18 Sept. preliminary injunction against enforcement of the CAADCA on constitutional grounds.
Tam: KOSA and "COPPA 2.0," two federal bills that enjoy significant bipartisan support in the U.S. legislature, represent increasing political momentum behind regulations that purport to strengthen children's online privacy and safety regulations.
Among other things, KOSA would impose specific requirements intended to protect kids 16 and under, such as providing minors with options to protect their information, disable addictive product features and opt out of algorithmic recommendations.
COPPA 2.0 would prohibit the collection of personal information from 13- to 16-year-olds without the user's consent and revise COPPA's "actual knowledge" standard to cover online services that are "reasonably likely to be used" by children and users "reasonably likely to be" children or minors.
There are a number of competing legislative priorities at the federal level and, as Lothar Determann mentioned, the bills may raise constitutional issues. But these two bills are definitely ones to continue monitoring.
Viewer: If we collect children's data unrelated to information society services under Articles 8(1) and 6(1)(a) of the GDPR, do we still need parental consent?
Determann: Not in all cases. Article 6(1) of the GDPR lists several exceptions to the general prohibition on processing personal data and, in many cases, companies can and should rely on necessity to perform contractual obligations under Article 6(1)(b) or legitimate interests under Article 6(1)(f).
In such cases, parental consent is not necessary under the GDPR, but it may be required or appropriate under other laws or from a business perspective. Keep in mind that if you collect special categories of personal data under Article 9(1), consent is generally required and, if the data subject is a child, you may need to obtain consent from the child's parent or guardian, unless you can confirm that the child is legally competent to grant valid consent as a matter of national law.