The proposed EU Artificial Intelligence Act and its intersections with the EU General Data Protection Regulation could create compliance challenges for data protection professionals across the continent, according to IAPP Senior Westin Research Fellow Jetty Tielemans.
The AI Act resembles the Digital Services Act and the Digital Markets Act in how it clarifies its relationship to the GDPR, Tielemans said during a recent IAPP LinkedIn Live. However, she explained, the AI Act differs in its treatment of "sensitive data," such as sexual orientation and ethnic origin, which previously could only be processed under the GDPR with the data subject's consent. Under Article 10 of the draft legislation, this data can now be processed by an AI system's algorithm provided "certain conditions are met," without meeting the GDPR's consent standard.
"That's an area where the AI proposal complements, if you want, or deviates from, depending on how you want to see the GDPR," Tielemans said. "It's quite frightening to hear all the things that compliance officers will need to comply with in Europe; it's not exactly clear to me how all this will work together."
The AI Act aims to regulate AI systems across the EU while fostering international cooperation on the development of AI. It is written around a risk-based approach to the development and deployment of AI systems, so companies must assess their own operations for potential risks. More than 3,000 amendments have been proposed to the legislation since it was introduced in April 2021, according to IAPP Europe Managing Director and event moderator Isabelle Roccia.
Kai Zenner, head of office and digital policy adviser to German Member of European Parliament Axel Voss, said that, examined side by side, the GDPR and the proposed AI Act create a double notification requirement for data processors and AI system operators: in the event of a data breach incident, the AI Act obliges them to notify a different government regulator than the GDPR requires.
“We are really surprised by the condition they didn't allow us to combine those, let's (call them) parallel obligations,” Zenner said. “In the end, it's really just creating additional red tape without any added value for basically anyone.”
Criteo VP of Government Affairs and Public Policy Nathalie Laneret, CIPP/E, CIPM, said the AI Act uses different terminology from the commonly understood data controller/processor language of the GDPR, which may lead to compliance headaches. Under the AI Act, actors include the provider, the deployer and the distributor, so the responsible authorities within organizations that develop or sell AI systems may not always be as clear as the roles assumed under the GDPR.
"In AI, you have to look at different phases: the training phase, where you actually train your model and input data to come up with your algorithm and refine it, then the decision phase and the production phase, where you use this model and you apply it to a specific situation, and then the qualification of controller/processor will not be the same," Laneret said. There's "also the situation of machine learning models that also continue to learn and improve on the basis of production data that is really inputted into the model to modify and improve it. Here, it will also be quite challenging to really apply those distinctions that we know from the GDPR, as far as controller, processor and joint controller are concerned."
The panelists also discussed who in an organization might be the best positioned to handle what could amount to dual reporting if the requirement to notify a separate regulatory body remains in place.
“I think the (data protection officer) is ideally placed, because … a lot of them say our (AI) system will be powered (with) and will involve personal data,” Laneret said. “Also, because the act is built around a risk-based approach and risk management, which is also how the GDPR is built somehow. So, the DPO already knows what it is to do a risk assessment, to deploy a program, to set up a policy, perform an audit and to train the teams.”
Zenner laid out the timeline for the AI Act to make its way through the EU legislative process and urged stakeholders to engage with their European Parliament representatives about the legislation. He said he was hopeful the law would be finalized by the end of next year, before the 2024 EU Parliament election. First, the European Parliament and the Council must agree on the legislation's final text, he said.
"My feeling is that in the Parliament, if we are extremely lucky, and we've managed to remove a lot of critical points on technical level already, maybe we will be able to publish a report by Christmas break," Zenner said. "If I would need to guess now, I would say in the European Council, it will be the Swedish presidency, which means (in) the first half of next year we've (hopefully) managed to finalize the legislation before Easter. If we are not lucky and extremely successful in September and October ... this would leave us enough time for the trilogue to probably start after the Easter break in April. And we will have the nine months to complete the trilogue."