Editor's note: The IAPP is policy neutral. We publish contributed opinion and analysis pieces to enable our members to hear a broad spectrum of views in our domains.
The European Union has responded to the disruptive potential of widespread AI integration into everything from health care to finance with a robust wave of legal and regulatory reforms.
At the heart of this evolving legal landscape lies a pivotal question: Should AI systems be classified as products or services? This is not a matter of semantics. The answer carries significant consequences for how liability is assigned, how consumers are protected, and how safety standards are enforced throughout the AI lifecycle.
The now-withdrawn AI Liability Directive, originally proposed in 2022, sought to harmonize fault-based liability rules across member states, addressing the fragmented landscape of national tort laws. Its aim was to adapt non-contractual civil liability frameworks to the unique characteristics of AI systems — especially those deemed high-risk under the EU AI Act. By introducing mechanisms such as a rebuttable presumption of causality and enhanced access to evidence, the directive attempted to ease the burden of proof for claimants harmed by AI-driven decisions.
However, with its formal withdrawal in early 2025, the EU has left a regulatory vacuum in fault-based liability for AI. The revised Product Liability Directive now extends strict liability for defective products to include software and AI systems, but it does not address negligence or unlawful conduct. This raises critical questions about how courts and regulators will navigate the dichotomy between product-based and service-based AI and how liability will be assigned in the absence of harmonized fault-based rules.
AI as product under EU law: The rise of strict liability and the role of the Product Liability Directive
The AI Act adopts a product-centric approach to AI systems, aligning their regulatory treatment with existing frameworks for product safety.
By doing so, it reinforces the notion that AI, particularly high-risk systems, should be subject to the same obligations and oversight as physical goods. This approach facilitates the application of strict liability rules, holding manufacturers accountable for damages caused by defective AI systems, regardless of fault.
This strict liability regime now benefits from enhanced harmonization under the revised Product Liability Directive, which explicitly extends its scope to include software and AI systems. Under this framework, the manufacturer — broadly defined to include developers, importers and authorized representatives — is liable for harm caused to consumers by defective AI products, without requiring the injured party to prove negligence or breach of duty.
However, the legal pathway shifts when harm arises from the conduct of other actors in the AI supply chain — such as deployers, integrators or service providers — who fail to apply an appropriate standard of care. In such cases, victims must seek compensation through national civil tort law, which remains unharmonized across the EU. This creates a fragmented landscape in which liability depends on the jurisdiction where the harm occurred and where the burden of proof may be significantly higher.
Cross-border litigation and enforcement challenges in AI liability
The fragmented nature of liability regimes across the EU presents significant challenges for individuals seeking redress for harm caused by AI systems, particularly when those systems operate across borders.
Divergence among national legal systems creates legal uncertainty for claimants and businesses alike. Victims must navigate different procedural rules, evidentiary standards, and definitions of negligence depending on the jurisdiction in which the harm occurred. For AI systems deployed transnationally, this can result in inconsistent outcomes and barriers to justice.
Moreover, the absence of a harmonized fault-based liability framework complicates regulatory enforcement. National authorities may interpret compliance obligations differently, leading to uneven application of safety standards and risk management requirements. This may undermine the EU's broader goal of creating a unified digital market governed by coherent legal principles.
The withdrawal of the AI Liability Directive exacerbates these challenges. Without a common set of rules to govern fault-based claims, the burden falls on national courts to interpret complex AI-related cases through existing legal doctrines, many of which were not designed to address the opacity, autonomy, and adaptiveness of modern AI systems.
In this context, the classification of AI as either a product or a service becomes more than a theoretical exercise; it directly influences the jurisdictional reach, legal remedies and enforcement mechanisms available to victims. A coherent and predictable liability framework is essential not only for protecting individuals, but also for fostering innovation and trust in AI technologies across the EU.
AI's operational variability and the case for service-based liability
The strict liability regime has traditionally applied to industrial and mass-manufactured products, items whose performance is expected to be consistent across identical units and use cases. In contrast, service provision is inherently variable, relying on the skills, judgment, and expertise of individuals, even when processes are standardized. The Court of Justice of the European Union has affirmed this distinction in multiple decisions, including the 2001 judgment in Henning Veedfald v. Århus Amtskommune, emphasizing that liability for services is typically fault-based and grounded in the breach of a duty of care.
In the context of AI, this dichotomy becomes increasingly complex. AI systems, particularly those involving machine learning, may not produce identical outcomes when deployed in different environments or when used by different operators. The AI Act itself acknowledges this variability, noting that results may differ if a system is used outside its intended purpose or contrary to its instructions for use. This raises a critical question: Is AI truly a product if its performance depends so heavily on context, configuration, and human interaction?
Unlike traditional products, AI outcomes can be influenced by the skill of engineers who design and train the model; the expertise of professionals who deploy, test, and evaluate the system; the quality of data inputs; and the operational environment.
Such dependencies align more closely with service liability, where harm is assessed based on whether the provider exercised appropriate care and competence. In this framework, liability arises not from a defect in the product, but from a failure to meet professional standards, a breach that must be proven by the claimant under national tort law.
This operational variability challenges the adequacy of strict liability alone in addressing AI-related harm. It suggests the need for a hybrid liability model, one that accounts for both product defects and professional negligence. Without such a model, victims may struggle to obtain redress when harm results not from the AI system itself, but from how it was implemented, maintained, or interpreted.
Conclusion
The EU's evolving AI legal framework exposes a key tension between product-based and service-based liability. While the revised Product Liability Directive offers a harmonized approach to strict liability, it fails to capture harms arising from negligent conduct within the AI supply chain. Adaptive, context-sensitive systems often behave more like services than products, making traditional liability models inadequate.
Without a harmonized fault-based regime, victims face fragmented national laws and limited access to justice. The withdrawal of the AI Liability Directive has deepened this gap. Reinstating a revised directive with clear obligations, access to evidence, and rebuttable presumptions would restore balance and ensure meaningful, effective redress for those harmed by AI systems.
Petruta Pirvan, AIGP, CIPP/E, CIPP/US, CIPM, FIP, is managing partner of EU Digital Partners.
