Guidelines on the interaction between the EU General Data Protection Regulation and the Artificial Intelligence Act should be ready by early next year, according to the European Data Protection Supervisor.

In an appearance at the IAPP Europe Data Protection Congress 2025, Wojciech Wiewiórowski indicated work on the guidance was ongoing and would be a joint effort between the European Data Protection Board and the European Commission. Their cooperation signals eventual agreement on how the two laws interact at a time when Europe's digital regulatory landscape is poised for upheaval following the introduction of omnibus simplification packages for the GDPR, the AI Act and other parts of the bloc's digital rulebook.

Wiewiórowski, who noted he had not read the entire omnibus bill released 19 Nov. but had been following along with various leaked drafts, said he did not expect the legislation to result in "big changes" in how the Commission and data protection authorities interact.

His comments come nearly a year after the EDPB called for consistency between EU digital laws and the GDPR and hinted it might work with the Commission to do so. Wiewiórowski spoke during a panel examining AI liability along the value chain, which is governed by Article 25 of the AI Act and is set to come into effect in August 2026.

Differences over AI liability

While the EU considers targeted changes to its digital laws, relevant EU AI authorities are still not fully aligned on how to address some perceived legal gaps. AI liability is among the topics under debate.

Wiewiórowski said he sees a need for clarity around non-contractual liability and had hoped the Commission would address it after the AI Act was passed. The Commission came close, but withdrew its liability proposal in early 2025, saying member states could not reach agreement on the issue.

"We face this situation in 2025, where this act has been put in the freezer," Wiewiórowski said. "This harmonization is something that we really need."

Since then, regulators have been left in a fragmented landscape trying to answer the liability question, Wiewiórowski said, noting other directives aimed at defective products and machines do not answer specific AI-related questions. He argued non-contractual liability issues should be brought in line with Article 25 of the AI Act to provide certainty to regulators.

That clarity is necessary because the value chain is so complicated, according to Microsoft Office of Responsible AI Senior Strategist Anahita Valakche. She said her company often has several models being fine-tuned and then combined into different systems, which often then interact with other systems. Those complex situations need to be addressed, rather than treating AI relationships as limited to a single provider and deployer.

Valakche noted there is pending guidance on several issues related to the AI Act, such as notification of systems and high-risk classifications. She said regulators should work to clarify exactly what an upstream AI provider owes a deployer.

"I think we need more consistency across all these guidelines, so that they are actually, more holistically, providing clarity and staying consistent," Valakche said.

Caitlin Andrews is a staff writer for the IAPP.