Kia ora koutou,
The Office of the Privacy Commissioner of New Zealand released guidance on how the Privacy Act's Information Privacy Principles apply to artificial intelligence. The guidance builds on the OPC's expectations, released in June, around the use of generative AI by agencies. Both are timely given the explosion of AI into the public consciousness.
The guidance states very clearly that the Privacy Act applies to everyone using AI systems and tools in New Zealand. Privacy impacts from AI can arise whether an agency is developing its own AI systems, using AI to support decision-making, or has team members informally using AI tools, such as ChatGPT, in their work. The guidance adopts a broad view of AI systems, encompassing machine learning, classifier, interpreter, generative and automation systems.
- Collection (IPPs 1-4): The guidance emphasizes the importance of understanding what is in the training data (i.e., the data that trains the AI model, which impacts how the model behaves), how relevant and reliable it is for the agency's intended purpose and whether it is gathered and processed in ways that comply with legal obligations and ethical/responsible approaches.
- Security and retention (IPPs 5 and 9): The guidance states agencies need to take appropriate security steps to protect information, particularly given the new and emerging security risks associated with AI. Those include risks of fraud arising from the ease with which anyone can now create deepfakes, simulate voices and automate hacking and phishing campaigns.
- Access and correction (IPPs 6-7): The guidance states it is essential for agencies to develop procedures for responding to requests from individuals to access and correct their personal information processed by AI tools.
- Accuracy (IPP 8): It notes agencies must take reasonable steps to ensure information is accurate, up to date, complete, relevant and not misleading. Beware the risks of "automation blindness," the tendency of humans to rely on computer outputs at the expense of their own judgment. Detecting accuracy and fairness issues like bias can be challenging, and the OPC suggests engaging with experts who can offer an independent perspective, as well as with the people and communities likely to be harmed by the use of any biased or inaccurate information.
- Use and disclosure (IPPs 10-11): Agencies must clearly identify their purposes for collecting personal information and then limit subsequent use and disclosure of that information to those purposes or a directly related purpose, according to the guidance.
- Overseas disclosure (IPP 12): The OPC recommends agencies check and confirm any offshore technology providers will not be using personal information in their care for their own purposes; otherwise, this will be a disclosure and IPP 12 will apply. It includes a reminder that agencies remain responsible for protecting personal information when they use third-party service providers to handle personal information on their behalf (Section 11 of the Privacy Act).
- Unique identifiers (IPP 13): There is scope for AI systems to find patterns in a person's behavior that qualify as a unique identifier, even if that is not an intended outcome. Such identifiers would then need to be managed in accordance with IPP 13, the guidance states.
While we await specific regulation of AI in the ANZ region, it is helpful to consider — with the benefit of the OPC's guidance — how the New Zealand Privacy Act might already tackle some of the emerging privacy issues and risks created by AI. Simply Privacy Principal and IAPP AI Governance Center Advisory Board member Frith Tweedie discussed the OPC's guidance further and offered insights on the risks AI can create beyond privacy.
Finally, don't forget to join the Wellington KnowledgeNet Chapter 31 Oct. for a virtual panel discussion with privacy and security professionals who will talk about people-centered privacy.