Kia ora koutou,

Unsurprisingly, concerns about generative artificial intelligence platforms and tools, including ChatGPT, are appearing in Aotearoa’s privacy discourse. Most recently, New Zealand Privacy Commissioner Michael Webster called for privacy regulators to determine how best to protect privacy rights in the face of this evolving technology, and create a space for a conversation about what we want AI to do, and what limitations are necessary to ensure our rights are protected.

This call follows increased activity from data protection authorities around the world in relation to OpenAI's ChatGPT and similar platforms. Italy's data protection authority, the Garante, banned ChatGPT due to privacy concerns, and the Office of the Privacy Commissioner of Canada launched an investigation into OpenAI. Others, including Germany's Federal Commissioner for Data Protection and Freedom of Information, have signaled they will follow suit.

Here in Aotearoa, concerns about generative AI are exacerbated by the need to respect and accommodate our bicultural foundations. The privacy commissioner questioned how AI created and deployed in the U.S. — and accessible across the world — can comply with the privacy laws of other countries. More importantly, asks the commissioner, who ensures ChatGPT respects Māori culture, and the sensitivities and nuances around using Māori data? These are some of the many important questions now being asked about this relatively unknown technology.

Of course, as the commissioner noted, privacy laws apply to the use of ChatGPT just as they do to any other personal information processing. While the New Zealand Privacy Act contains exceptions that permit organizations to collect and use publicly available information, these exceptions are tempered by a requirement to do no harm with that information. For example, Principle 10(1)(d) permits an organization to use publicly available information, provided it would not be unfair or unreasonable in the circumstances to do so. In theory, these provisions should offer some protection against unethical uses of generative AI that sources publicly available information. They essentially codify calls for the responsible use of AI (for an example see this recent article by IAPP ANZ Advisory Board member Frith Tweedie), including considering whether AI outputs could cause harm. However, it is becoming increasingly difficult to assess the effectiveness of such protections when technological advances remain opaque.

It seems likely we will see further regulatory action from the New Zealand Privacy Commissioner in this space as the risks and dangers of generative AI become clearer or, more worryingly, come to fruition in cases that cause real harm to individuals. As the commissioner lamented, AI can be used to supercharge criminal organizations, leading to more privacy breaches and making it harder both for cybersecurity measures to protect personal information and for post-breach remedies, such as injunctions, to protect stolen data that criminals may make available online.

It is fitting, therefore, that the theme for the upcoming Privacy Week in Aotearoa is "Privacy rights in the digital age." Privacy Week events will give us the forum to continue the important conversations on generative AI that the commissioner called for. On that topic, be sure to register for several IAPP events in the coming weeks, including a virtual KnowledgeNet session with Uber Director of Privacy Engineering, Architecture and Analytics Nishant Bhajaria 11 April, and an in-person KnowledgeNet Happy Hour event for Privacy Week in Wellington 11 May. Finally, a quick reminder that the call for proposals for IAPP ANZ Summit 2023 has been extended to 16 April.

In the meantime, enjoy the digest, stay safe and be kind.
Ngā mihi,