U.K. Information Commissioner John Edwards kicked things off in a packed room at the IAPP Data Protection Intensive: UK 2024 in London, delivering a punchy speech and attendee Q&A that laid out the agency's priorities for 2024. Undergirding his regulatory philosophy is the need to protect personal information "while still championing innovation and privacy-respectful practices."
Children's privacy and third-party advertising cookies were among the top three priorities outlined by the commissioner. However, the "biggest question on my desk," Edwards said, is artificial intelligence.
The not-so-small question: artificial intelligence
Edwards was clear that the U.K. General Data Protection Regulation provides the agency with plenty of enforcement ammunition in the AI space and that "bespoke" AI regulation is not necessary. But the rise of AI also raises questions, including those around user control, loss of sensitive personal information and potential regulatory gaps.
Add to that the emergence of generative AI, and an additional layer of questions must be asked: "When is it lawful to scrape data from the internet to train generative AI models? Are people's rights being meaningfully protected when AI models are built using their information? What do concepts such as purpose limitation and accuracy really mean in this context?"
In a Q&A after his prepared speech, Edwards carried that sentiment forward, saying, "We won't be talking about AI regulation in a few years" because AI will be connected to virtually all aspects of the economy and society. "While there are lots of questions to consider," Edwards said, "what is clear is that any generative AI model must be developed and used in a way which complies with the U.K.'s GDPR."
For starters, Edwards said the agency has opened a consultation series on generative AI, "outlining our initial thoughts and approaches, seeking to provide clarity in a changing technological landscape." Each chapter of the consultation focuses on a different part of the law; the first explores the lawful bases for scraping public data from the internet to train generative AI models.
"Our initial thoughts are that the legitimate interests lawful basis may be valid," Edwards said, "but only if the model's developer can ensure they pass the three-part test of purpose, necessity and balancing people's rights with the rights of the developer."
The second chapter, published earlier this week, explores purpose limitation and whether it "should be applied at different stages of the AI lifecycle," he said. Future chapters will cover the ICO's expectations around compliance with accuracy, accountability and "controllership across the supply chain."
For this consultation series, Edwards asked the privacy and AI governance community for their thoughts and insights.
Edwards said regulation of AI is not for the ICO alone, but crosses the data protection, competition and consumer protection fields. The agency is working closely with the Competition and Markets Authority (CMA) on a joint statement "setting out our positions concerning the foundation models that underpin generative AI." He said the ICO hopes to publish the joint statement later this year.
Separately, the ICO is working with the Financial Conduct Authority, the Office of Communications (Ofcom) and the CMA "to support the responsible development of AI," which includes a pilot "AI and Digital Hub" service that "will provide tailored regulatory support to innovators" bringing new products and services to market, Edwards said.
Biometrics: Not under the ICO's radar
Biometrics, a subset of AI regulation, is also in the ICO's sights, as evidenced by its recent enforcement action against Serco Leisure, in which it ordered the company to stop using facial recognition to monitor employee attendance, on which pay depended. According to the ICO, Serco did not offer employees a genuine choice or an alternative means to log their hours, "which increased the imbalance of power between the employer and employees."
Though Serco is just one example, Edwards said, "Biometrics is an area of developing interest, and organizations need to know how to use this technology in a way that doesn't interfere with people's rights."
Are the kids alright?
Children's privacy is a growing area of interest for the ICO, Edwards said, raising pressing questions, such as: "Should the social media platforms allow users under the age of 16 to have accounts? Should they have stronger checks and balances, or does that responsibility lie with parents?"
Though Edwards said such issues will involve broader societal input, he specifically warned that "children's privacy will form a large part of our ongoing work this year." Now two years in, the Children's code has prompted positive changes in the online ecosystem, according to Edwards, who said that "some of the largest online platforms have improved their default settings, reduced targeted advertising and included parental controls."
Edwards said the ICO has been working with industry via voluntary audits, one-to-one engagement on data protection impact assessments and enforcement "to help them get it right." The agency also recently published an updated opinion on age assurance in tandem with Ofcom and continues cross-regulatory work "to ensure our priorities remain aligned as the Online Safety Act comes into force." This also includes further work with Ofcom on content moderation guidance.
Notably, there is enforcement in the works. Edwards did not get into details but said the ICO's cases against Snap and TikTok "are ongoing, and there are several other investigations underway that I can't give details about."
Cookie banners: 'A daily reminder of the lack of real choice'
The ICO has long worked in the advertising technology space, and this year, Edwards said, the agency "will be prioritizing fair use of cookies." He said "cookie banners are the most visible manifestation of data protection law" and often demonstrate "the power imbalance we face when we go online."
The ICO looked at the top 100 websites in the U.K. and found 53 may have used "non-compliant cookie banners." Those companies were put on notice, he said, and 38 of the 53 have since complied, a success rate of roughly 72%.
When asked about the remaining sites, Edwards did not go into details, but suggested the noncompliant sites may face additional scrutiny.
To address the scale and complexity of the adtech space, Edwards announced the ICO is hosting its "first-ever hackathon, with internal colleagues and external technical experts, focusing on how we can monitor and regulate cookie compliance at scale." The goal, Edwards said, is to create "the prototype of an automated tool" that assesses cookie banners across the web and flags those that breach data protection law.
"Our bots are coming for your bots," he warned.