
Guess what? The data protection law implications of artificial intelligence feature high on the agenda of the upcoming IAPP Data Protection Intensive: Deutschland 2023. Panelists will try to bring AI back to reality — and this reality lies in the EU General Data Protection Regulation and other existing data protection laws, rather than in the upcoming EU AI Act. In the opening and closing general sessions, for instance, we will discuss how data protection authorities are enforcing AI today — as part of the German regulators’ joint task force on ChatGPT, or otherwise — with leading German regulators.

As we have all witnessed, the recent rapid expansion of AI applications across various industries and aspects of life has increased attention on data protection and privacy concerns. As AI adoption becomes more widespread, especially with the introduction of large language models like ChatGPT, data privacy questions become more pronounced.

Although AI is a dynamic and rapidly changing technology, the processing of personal data by AI systems is, at present, subject to the same legal framework as any other technology. Despite the upcoming AI Act, the GDPR still plays a crucial role in shaping the legal landscape. Therefore, it is useful to highlight some core GDPR principles that apply to AI-based technologies of all forms and types.

Fairness and nondiscrimination

The fairness principle demands that personal data be processed in line with the interests and reasonable expectations of data subjects; it also forbids discrimination in data processing. When deploying AI systems, developers and users must ensure the technology operates without discrimination. This becomes challenging with AI models that rely on complex data sets, as the decision-making process can be opaque, the infamous "black box" problem.

For developers, ensuring fairness requires using reliable, representative and nondiscriminatory training data, meaning data governance practices will be pivotal. Transparency, itself a key aspect of fairness, is also crucial. Developers must provide clear explanations for decisions and disclose the underlying logic. Operators of AI systems are responsible for continuously assessing potential discriminatory outcomes and conducting internal or external audits.
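
To make this concrete, here is a minimal sketch of the kind of disparate-impact check an operator might run during an internal audit. The column names and the 80% threshold (the informal "four-fifths" rule of thumb) are illustrative assumptions, not a prescribed methodology.

```python
# Minimal sketch of a disparate-impact check an operator might run as part
# of an internal fairness audit. Column names and the 0.8 threshold
# (the informal "four-fifths" rule) are illustrative assumptions.
import pandas as pd

def selection_rates(df: pd.DataFrame, group_col: str, outcome_col: str) -> pd.Series:
    """Share of positive outcomes per group."""
    return df.groupby(group_col)[outcome_col].mean()

def disparate_impact_ratio(df: pd.DataFrame, group_col: str, outcome_col: str) -> float:
    """Lowest group selection rate divided by the highest."""
    rates = selection_rates(df, group_col, outcome_col)
    return rates.min() / rates.max()

audit = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B"],
    "approved": [1,    1,   0,   1,   0,   0],
})

ratio = disparate_impact_ratio(audit, "group", "approved")
if ratio < 0.8:  # four-fifths rule of thumb
    print(f"Potential disparate impact: ratio = {ratio:.2f}")
```

A real audit would use far richer metrics, but even a simple selection-rate comparison can surface outcomes that warrant closer review.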

Transparency and explainability

The transparency principle mandates that personal data processing be understandable to data subjects. Achieving transparency in AI systems is challenging, especially with complex LLMs. Explainability, which requires understandable descriptions of data processing steps, application context and implications of AI-generated results, is key. Transparency obligations must be considered from the inception of AI system development to avoid "black box" issues. Certification processes and innovative information presentation methods, like layered privacy information, can aid in meeting transparency requirements.
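
As a simple illustration of surfacing a model's "underlying logic," the sketch below uses scikit-learn's permutation importance to identify which input features most influence a model's predictions. The data set and model are illustrative stand-ins, not a prescribed explainability method.

```python
# Sketch of one way to document a model's "underlying logic" for transparency
# purposes, using permutation feature importance. Model and data set are
# illustrative stand-ins.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# How much does shuffling each feature hurt accuracy on held-out data?
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)

# Report the most influential features, e.g., for layered privacy information.
top = result.importances_mean.argsort()[::-1][:5]
for i in top:
    print(f"{X.columns[i]}: {result.importances_mean[i]:.3f}")
```

Feature importance is only one of many explainability techniques, but outputs like these can feed directly into the layered privacy information mentioned above.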

Purpose limitation and data minimization

The purpose limitation principle specifies that personal data can, with some exceptions, only be processed for the purposes for which it was collected. For general-purpose AI systems, defining processing purposes at the point of initial data collection can be complex, and developers will need to carefully consider the usage purposes of their tools at the earliest possible stage.

Data minimization challenges arise because AI training often requires vast data sets, which can conflict with the principle. Techniques such as synthetic training data or federated learning, which reduce the use of personal data while maintaining AI performance, might be a possible way forward.
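
The following toy sketch illustrates the federated learning idea: each participant trains locally and shares only model weights, so raw personal data never leaves the client. It is purely illustrative; a production system would rely on a dedicated framework and secure aggregation.

```python
# Toy illustration of federated averaging: each client trains locally and
# only model weights, never raw personal data, are sent to the server.
# Purely illustrative; real deployments would use a dedicated federated
# learning framework plus secure aggregation.
import numpy as np

rng = np.random.default_rng(0)

def local_update(weights: np.ndarray, local_data: np.ndarray, lr: float = 0.1) -> np.ndarray:
    """One step of local linear-regression gradient descent (toy model)."""
    X, y = local_data[:, :-1], local_data[:, -1]
    grad = X.T @ (X @ weights - y) / len(y)
    return weights - lr * grad

# Three clients, each holding data that never leaves the device.
clients = [rng.normal(size=(20, 4)) for _ in range(3)]
global_weights = np.zeros(3)

for _ in range(10):  # federated rounds
    updates = [local_update(global_weights, data) for data in clients]
    global_weights = np.mean(updates, axis=0)  # server averages weights only

print("Global model weights:", global_weights)
```

The server only ever sees aggregated weights, which is the data minimization point, although weight updates can still leak information and may themselves require safeguards.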

Accuracy and accountability

As with any processing of personal data, data processed by AI systems must be accurate and kept up to date. Errors in AI-generated content, such as misinformation produced by LLMs, can have significant consequences. Developers must ensure their systems prevent, or clearly indicate, erroneous outputs. Operators should check AI system outputs and, for higher-risk systems, undertake comprehensive data protection impact assessments (DPIAs).
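
As a simple illustration of an operator-side accuracy check, the sketch below compares an AI-generated statement about a person against a verified record and flags mismatches before release. The record store and field names are hypothetical.

```python
# Sketch of an operator-side accuracy check: before an AI-generated statement
# about a person is used, compare it against a verified record and flag
# mismatches instead of publishing them. The record store is a hypothetical
# stand-in for whatever authoritative source the operator maintains.
from dataclasses import dataclass

@dataclass
class VerifiedRecord:
    name: str
    job_title: str

VERIFIED = {"Jane Doe": VerifiedRecord(name="Jane Doe", job_title="Data Protection Officer")}

def check_generated_claim(subject: str, claimed_title: str) -> str:
    record = VERIFIED.get(subject)
    if record is None:
        return "FLAG: no verified record; output must be marked as unverified."
    if record.job_title != claimed_title:
        return f"FLAG: mismatch, verified title is '{record.job_title}'."
    return "OK: claim matches the verified record."

# An LLM hallucinates a job title; the check catches it before release.
print(check_generated_claim("Jane Doe", "Chief Executive Officer"))
```

Checks of this kind do not make an LLM accurate, but they give operators a documented control point before erroneous personal data reaches recipients.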

This very high-level summary clearly shows the GDPR needs to be considered from day one when developing and operating AI systems. Therefore, the principle of data protection by design and by default enshrined in Article 25 of the GDPR will take center stage and see heightened enforcement activity.

Importance of DPIAs in AI

Another important tool for ensuring GDPR compliance, especially for high-risk AI systems, is the DPIA. AI's complex nature requires a thorough assessment of data processing risks and mitigations. DPIAs enhance accountability, transparency and compliance with GDPR principles. They help developers and users identify and address potential privacy and data protection issues from the outset, facilitating fair and transparent AI use. Adopting a risk-based approach, developers and operators of AI technologies will have to carefully examine whether the AI they have created or are using fulfills the requirements of the GDPR. DPIAs are the tool that will enable them to make, and evidence, that assessment.
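
One way to make and evidence that assessment is to record the DPIA screening as structured data. The sketch below loosely follows the high-risk indicators from the Article 29 Working Party's DPIA guidelines; the exact criteria and the two-criteria threshold are illustrative assumptions, not legal advice.

```python
# Sketch of recording a DPIA screening as structured, auditable data.
# The criteria loosely follow the WP29 high-risk indicators; the exact list
# and the two-criteria threshold are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class DPIAScreening:
    system_name: str
    criteria: dict[str, bool] = field(default_factory=dict)

    def dpia_required(self, threshold: int = 2) -> bool:
        """Rule of thumb: two or more high-risk criteria met -> run a full DPIA."""
        return sum(self.criteria.values()) >= threshold

screening = DPIAScreening(
    system_name="CV-ranking model",
    criteria={
        "evaluation_or_scoring": True,
        "automated_decision_with_legal_effect": True,
        "systematic_monitoring": False,
        "sensitive_data": False,
        "large_scale_processing": True,
        "vulnerable_data_subjects": False,
        "innovative_technology": True,
    },
)

print("Full DPIA required:", screening.dpia_required())  # True (4 criteria met)
```

Captured this way, the screening itself becomes part of the evidence of accountability the GDPR demands.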

So, lots to talk about during the upcoming IAPP Data Protection Intensive: Deutschland 2023.

