On 19 June, the U.K. Information Commissioner's Office published its decision on Snap's My AI. This followed a preliminary enforcement notice, or PEN, issued by the ICO 6 Oct. 2023. The PEN contained provisional conclusions that Snap had not met Article 35 of the U.K. General Data Protection Regulation on carrying out a data protection impact assessment or Article 36 on consultation with a supervisory authority where a DPIA shows unmitigated high risks.
The final decision concluded Snap met the requirements of Article 35 and it did not breach Article 36. The decision contains detailed commentary on the ICO's expectations regarding the level of detail to include in DPIAs in general and observations on particular areas of concern when engaging with general-purpose AI and children. As almost all organizations carry out DPIAs, and as many are doing so in relation to the deployment of AI, the decision has wide relevance. The underlying U.K. law on these provisions is still nearly identical to the EU's, so these insights are also likely to be of relevance beyond the U.K.
Facts
My AI is a chatbot that integrates OpenAI's GPT. It collects user-submitted biographical data; the user's age group, which defaults to the lower of the declared and inferred ages; coarse location data; metadata; interaction data; keywords extracted from queries to serve ads; and memories of interactions.
Snap trialed a prerelease to Snapchat+ users 27 Feb. 2023, with a full release to all users 19 April 2023. Snap first engaged with the ICO on My AI in March 2023, with the first substantive interaction after the full release of My AI. The ICO asked Snap to explain how My AI corresponded to key topics in a blog post by Executive Director of Regulatory Risk Stephen Almond and required Snap to share its DPIA.
The ICO issued its PEN to Snap 6 Oct. 2023. In response, Snap submitted representations to the ICO 3 Nov. 2023 with further submissions 11 Dec. 2023, followed by, somewhat unusually, an oral hearing 14 Dec. 2023. Snap also submitted a revised fifth DPIA 22 Nov. 2023. Most of the ICO's decision analyzes this fifth DPIA.
Overall conclusions
The ICO concluded the fifth DPIA, unlike the earlier ones, met the requirements of Article 35. In particular, the fifth DPIA:
- Contained a significantly more detailed breakdown of the processing operations.
- Considered the extent to which general-purpose AI differed from previous technology used by Snap, relevant to the assessment of whether the processing is necessary and proportionate — including an assessment of the technology's impact on the volume of special category data processed.
- Contained a more detailed risk assessment, including risks posed to 13 to 17-year-olds.
- Clearly identified mitigations and explanations of how and to what extent they would be effective.
The original DPIAs shared with the ICO contained a statement noting the processing posed an unmitigated high risk to children, which led to the provisional finding that Snap had breached Article 36 by not consulting with the ICO. However, the ICO accepted Snap's evidence that this statement was an error in the DPIAs and that the company had not in fact considered this to be the case.
The decision does not consider how My AI complies with other aspects of data protection legislation.
Detailed requirements for DPIAs
At one page long, the European Commission's Criteria for an Acceptable DPIA in Annex 2 of its DPIA guidelines is succinct, but the corresponding ICO guidance is more extensive. The decision assesses Snap's DPIA against this guidance, calling out areas where the ICO originally felt Snap fell short but where the revised DPIA meets the ICO's guidelines. Many organizations struggle with the areas highlighted by the ICO.
Systematic description of processing
Categories of data. The DPIA must systematically describe the categories of personal data used for each purpose of processing. The revised DPIA contained a step-by-step breakdown, listing the categories of personal data processed.
Access rights. The ICO called out that explaining who has access to what personal data is an essential element of processing and must be adequately addressed. In the revised DPIA, Snap explained what personal data was shared with each of OpenAI and Microsoft, its ad partner, as well as how personal data was shared internally with an explanation of how it determined and managed access controls.
Retention periods. Again, the ICO labeled retention periods as essential, noting a controller cannot assess risks in a DPIA without adequately considering retention periods. The revised DPIA contained retention periods for each category of data and retention periods set out in contracts with processors.
Volume of data processed and geographic scope. References to unspecified numbers of users or data subjects are not sufficient. The revised DPIA contained detailed statistics on numbers of average daily and monthly users in the U.K., EU and globally, and noted the personal data of nonusers would also be processed, such as in content submitted. The DPIA also considered the number of 13 to 17-year-olds whose personal data would be processed.
Context. The ICO expected the DPIA to reflect on wider public concerns over the use of general-purpose AI and pointed to news articles and other resources illustrating the concerns that should be considered. Interestingly, the ICO highlighted the need to consider whether the use of general-purpose AI, in contrast with use of traditional search and chatbot tools, would increase the amount of special category data processed by encouraging users to share more of it. Snap's conclusion in the DPIA, however, was that users were in fact more likely to be cautious about sharing sensitive data with general-purpose AI.
Necessity and proportionality of the processing operations in relation to the purposes
The impact of this specific technology. The ICO considered that Snap had not sufficiently considered the impact of the shift from traditional search functionality and online query technology to general-purpose AI. As noted above, the ICO was particularly interested in whether this would alter the nature of the personal data processed.
Lawfulness. Snap referenced consent, contractual necessity and legitimate interests. On special category data, it referenced explicit consent, substantial public interest plus domestic law and freedom of expression derogations. The ICO accepted the points had been addressed but did not undertake a substantive assessment of lawful basis.
Risk assessment
Longer is better. Snap's DPIAs followed the traditional format of noting the likelihood of a risk occurring and its severity when determining the overall risk score. For example, in relation to the risk that responses provided would be biased, inappropriate or potentially misleading, the earlier DPIAs noted the likelihood as probable, the severity as significant and the overall risk score as high. The ICO disliked this cursory, high-level analysis. The revised DPIA contained more detailed risk assessments, fleshing out the analysis.
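The traditional likelihood × severity format described above can be sketched as a simple lookup matrix. This is a hypothetical illustration of how such risk registers commonly compute an overall score; the labels and matrix values are assumptions for illustration, not taken from Snap's DPIA.

```python
# Hypothetical likelihood x severity risk matrix, as commonly used in
# DPIA risk registers. Labels and scores are illustrative assumptions.
LIKELIHOOD = ["remote", "possible", "probable"]
SEVERITY = ["minimal", "significant", "severe"]

# Rows = likelihood, columns = severity.
RISK_MATRIX = [
    ["low", "low", "medium"],     # remote
    ["low", "medium", "high"],    # possible
    ["medium", "high", "high"],   # probable
]

def overall_risk(likelihood: str, severity: str) -> str:
    """Return the overall risk score for a likelihood/severity pair."""
    return RISK_MATRIX[LIKELIHOOD.index(likelihood)][SEVERITY.index(severity)]

# The example from the earlier DPIAs: probable likelihood, significant
# severity, giving an overall score of high.
print(overall_risk("probable", "significant"))  # high
```

The ICO's point is that a bare score produced this way, with no accompanying reasoning, is exactly the cursory analysis it criticized.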
Include evidence of alternative measures considered. The revised DPIA contained details of mitigating measures that were considered but not adopted. The ICO concluded this was important to show an effective assessment of risks and how they were mitigated.
What actual risks did the ICO highlight? The ICO said the earlier DPIAs should have considered:
- The risk of targeting 13 to 17-year-olds for advertising, which is addressed in the revised DPIA on the basis that Snap only serves contextual, not targeted, ads via My AI.
- The processing of special category data on a large scale.
- The risk that users, especially 13 to 17-year-olds, would be less likely to understand the manner and purpose of processing involving general-purpose AI and therefore may not make fully informed decisions about using the technology.
What additional risks did Snap highlight? The case highlighted risks:
- Of responses based on bias.
- Of security breaches if more special category data is processed.
- Of unauthorized use of content.
- Of 13 to 17-year-olds becoming isolated or lonely as a result of excessive reliance on general-purpose AI for emotional support.
- Of users thinking general-purpose AI is in use when it is not, given My AI cannot be removed from the top of the chat feed.
- To safety from the use of even coarse location data.
- Of third parties having their personal data processed when they did not choose to.
Risk mitigation
Ensure the mitigating step is relevant to the identified risk. The earlier DPIAs contained some mitigating steps that were inaccurate or not relevant to the identified risks. This sounds obvious but can be hard to avoid if you use templates or libraries of suggested steps when completing DPIAs.
Child users can increase the risks: You need child-focused measures. The risks identified in the DPIAs are compounded when 13 to 17-year-olds use My AI, but the original DPIAs did not explain mitigating measures specific to this risk. The revised DPIA addressed this. For interest, the mitigating measures included age-appropriate content filters, just-in-time notices tailored for this age group, parental controls and oversight of teens' use of My AI, and mitigations against use of My AI for homework and to write essays.
Do not cross-refer to external documents without explaining their content or effect. The earlier DPIAs asserted that other, cross-referenced policies completely eliminated certain risks without explaining how. The revised DPIA explained the specific steps and measures in those documents and why they would be effective.
Lastly, during the rollout of My AI, Snap removed a just-in-time notice warning users that the technology was novel and they should not provide confidential or sensitive information. The revised DPIA noted this notice was reinstated.
Thoughts on DPIAs
Some of the points highlighted by the ICO seem obvious. But it is difficult to avoid errors, unexplained cross-references and one-word entries if DPIAs are completed at scale across an organization.
Some organizations use DPIAs to assess all new processing initiatives or, if not all, initiatives where the level of risk falls below the threshold at which a DPIA is required. Clearly differentiating such "entry level" privacy impact assessments from full DPIAs can help avoid adopting processes that struggle to meet the more demanding DPIA requirements because they are being deployed on a wider basis.
Errors like the incorrect unmitigated high residual risk designation in the earlier DPIAs should also be guarded against, for example by using an automated process that flags any DPIA containing this designation for checking, reducing the risk of documentation error.
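Such an automated check could be as simple as scanning completed DPIA records for the designation and routing matches for human review. This is a minimal sketch under stated assumptions: the record structure and field names are hypothetical, not a real DPIA-tool schema.

```python
# Minimal sketch of an automated check that flags any completed DPIA
# recording an unmitigated high residual risk for human review, catching
# documentation errors before they trigger Article 36 consequences.
# The RiskEntry structure is a hypothetical schema for illustration.
from dataclasses import dataclass

@dataclass
class RiskEntry:
    description: str
    residual_risk: str  # "low" | "medium" | "high"
    mitigated: bool

def needs_review(entries: list[RiskEntry]) -> list[RiskEntry]:
    """Return entries whose designation should be double-checked before sign-off."""
    return [e for e in entries if e.residual_risk == "high" and not e.mitigated]

dpia = [
    RiskEntry("Biased or misleading responses", "high", True),
    RiskEntry("Large-scale special category data", "high", False),  # triggers review
]
for entry in needs_review(dpia):
    print(f"Check before sign-off: {entry.description}")
```

A designation that survives this check is then a deliberate conclusion requiring prior consultation with the ICO, rather than a drafting slip.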
Lastly, the ICO's preferences for DPIAs to spell out the logic and analysis behind risk assessments, to set out steps from other policy documents rather than cross-referring without explanation, and to contain analysis of public areas of concern, e.g., as evidenced by media coverage, feel like areas in which both the quality of compliance and the cost of compliance could be significantly improved by the use of large language models.
Wider enforcement lessons
The decision also offers a couple of useful insights for others engaging with the ICO in relation to investigations and enforcement.
Firstly, the decision clarifies that, as the ICO concluded Snap meets the requirements for DPIAs, there are no grounds for it to issue an enforcement notice. Of course, the situation for retrospective sanctions, such as fines or reprimands, is different.
Secondly, the decision outlines how the ICO gathered information in this matter. It initially requested information from Snap on an informal basis. Snap provided certain information this way but declined to provide the requested DPIA. The ICO then issued an information notice requiring Snap to produce all DPIAs for My AI. Snap provided them with redactions of business-sensitive material, including on the grounds of security, eventually providing full DPIAs. The ICO often engages with companies informally, and it is often preferable for organizations to engage this way, as once an information notice is served, there is an obligation to provide the requested information, with very little scope to withhold it.
Ruth Boardman is a partner and co-head of Bird & Bird's International Privacy and Data Protection Group.