Editor's note: The IAPP is policy neutral. We publish contributed opinion and analysis pieces to enable our members to hear a broad spectrum of views in our domains.
At the 47th Annual Global Privacy Assembly in Seoul, the intersection of data privacy and agentic artificial intelligence was a key topic of discussion.
Agentic AI, which refers to systems capable of autonomous decision-making and action, has emerged as the juiciest of the new crop of technologies.
While holding fresh and succulent promise, it also has the potential to sour into serious data risks.
Below are some key varietals of privacy implications posed by agentic AI.
Transparency and explainability. Agentic AI decisions are wrapped in layers of complexity, making it difficult for individuals to understand how outcomes ripen. Just as a grape ferments into wine, AI systems can process data in ways that obscure the original inputs.
Without explainability, individuals may be unable to exercise meaningful control over their personal data. Regulators should therefore ensure transparency mechanisms remain sufficiently robust and that data subject rights can be meaningfully exercised.
Purpose limitation and data minimization. Agentic AI thrives on data. Rather than cherry-picking only what is needed from the digital orchard of information, however, there is a temptation to collect beyond necessity. This can leave piles of pulp and pits stored, analyzed and reused without clear justification, running counter to the principle of data minimization, which calls for gathering only the fruit that is needed and discarding the rest.
Security. Strong security is essential to protect against any rotten use of agentic AI. Security safeguards should be robust enough to withstand data and model poisoning attacks. Vulnerabilities, like hidden rot, can fester and lead to breaches.
Fairness and bias. Bias is a perennial issue that arises when AI systems over-rely on historical patterns. Left unchecked, bias yields discriminatory outcomes that disadvantage certain groups. Achieving fairness, like a good smoothie, requires a balanced mix of training data.
Oversight and governance. Governance frameworks to support agentic AI adoption must be cultivated from seed to harvest. While low-hanging fruit, such as privacy impact assessments, is readily available and easily deployed, an enterprise-wide shift in attitude and culture that embraces agentic AI takes time to germinate. The real work lies in adapting practices, season by season, as the technology matures.
Building trust. Ultimately, trust is an essential ingredient in the pie that is agentic AI. Every choice of what to peel and what to blend affects how users experience these technologies. If systems are built only for efficiency, without attention to privacy, the outcome may leave a bitter taste. Conversely, if principles of transparency, minimization, fairness and security are baked into the design, the result can be a sweet and refreshing dessert.
Charmian Aw, AIGP, CIPP/A, CIPP/E, CIPP/US, CIPM, FIP, is a partner at Hogan Lovells.
This article originally appeared in the Asia-Pacific Dashboard Digest, a free weekly IAPP newsletter. Subscriptions to this and other IAPP newsletters can be found here.