The Artificial Intelligence Liability Directive – Time to catch up

The AI Act has introduced a number of obligations for AI systems, and the New Product Liability Directive (the “PLD”) will set out liability rules for defective software and, by extension, AI products. In September 2022, the EU Commission also issued a proposal for a directive on adapting non-contractual civil liability rules to artificial intelligence (the “AILD”).

The AILD is intended to introduce additional liability rules for AI risks and will lay down common rules on (i) disclosure of evidence for high-risk AI systems; and (ii) the burden of proof in the case of non-contractual fault-based claims for damages caused by an AI system. The AILD aims to simplify the process for individuals seeking compensation for damage caused by AI systems. When the AILD was published, the European Parliament requested that a complementary impact assessment of the AILD be undertaken. This impact assessment (the “Impact Assessment”) was published on 19 September 2024 by the European Parliamentary Research Service (“EPRS”) and identifies certain areas of the AILD which may need further consideration.

Recommendations

The Impact Assessment sets out a number of recommendations for the AILD, including:

  • A new classification should be introduced in the AILD to cover both high-risk AI systems and additional AI systems, such as general-purpose AI systems and autonomous vehicles (to be referred to as ‘high-impact AI systems’).
  • The AILD in its current form does not adequately cover general-purpose AI systems, leaving a gap in respect of harms such as discrimination, hate speech and fake news. Generative AI systems should be classified under the suggested ‘high-impact AI system’ category, along with high-risk AI systems, which would bring them within Article 3 and Article 4(2)-(4) of the AILD.
  • AI systems banned under Article 5 of the AI Act should attract strict liability for any damage they cause.
  • The AILD should make clear that it applies to cases of liability arising from discrimination. Evidence disclosure mechanisms are needed to allow access to the data and algorithms critical to proving discrimination.
  • The AILD currently addresses AI liability but does not address the similar challenges presented by many types of non-AI software. While the PLD applies to software more generally, the AILD should nonetheless be expanded into a more comprehensive software liability instrument. This would ensure that the rules on burden of proof, evidence disclosure, causality and rebuttable presumptions apply uniformly to all software applications, whether or not they are classified as AI systems.
  • A strict liability model should be adopted for high-risk AI systems under the AILD, distinguishing between AI systems that cause legitimate harm and those that cause illegitimate harm.
  • The AILD should be converted into a regulation, which would prevent market fragmentation, enhance clarity and promote innovation and consumer protection by establishing consistent legal standards across the digital single market.

Key Recommendations

We set out below further detail on some of the key recommendations in the Impact Assessment:

1. Move from AI Focus to Software Focus

The Impact Assessment considers whether the AILD should be aligned with the PLD in its material scope, which would include software. The Impact Assessment points out that it is difficult to justify applying one set of rules to harms caused by non-AI software under the PLD (strict liability) and another set of rules to AI systems under the AILD (fault-based liability). The Impact Assessment argues that this difference would only be justifiable if the harms covered by the PLD were more significant than those to be covered by the AILD, but that this is not the case.

On that basis, the Impact Assessment recommends that the AILD be rebranded as a ‘software liability instrument’ and incorporate the definition of software from the PLD.

If legislators choose to maintain the AILD’s focus on AI alone, the Impact Assessment recommends that clear guidelines be published delineating which systems would be considered AI under the AI Act, including specific examples, to mitigate the challenges posed by the AI Act’s broad definition of AI.

2. Extension of Scope of AILD

The Impact Assessment recommends that the legal consequences for ‘high-risk AI systems’ under the AILD should be extended to other AI systems, while maintaining the alignment of the concept of a ‘high-risk AI system’ between the AI Act and the AILD. Specifically, it recommends that the scope of application of Article 3 (Disclosure of evidence and rebuttable presumption of non-compliance) and Article 4(2)-(4) (Rebuttable presumption of a causal link in the case of fault) of the AILD should be expanded to a wider array of AI systems which could pose significant risks, including:

  • General-purpose AI systems;
  • AI systems which fall under existing legal frameworks, such as:
    • autonomous vehicles;
    • transportation-related AI systems;
    • AI systems falling under Annex I, Section B of the AI Act; and
  • AI systems relating to insurance, save for health insurance and life insurance.

The Impact Assessment proposes that these AI systems, together with high-risk AI systems, should form a new category under the AILD, ‘high-impact AI systems’, shifting the AILD’s current focus on high-risk AI systems alone.

3. Directive to Regulation  

The Impact Assessment highlights that, as a directive, the AILD carries a risk of fragmentation: diverging national transpositions could lead to discrepancies between Member States and a complicated legal environment for AI developers and for consumers seeking remedies. Adopting a regulation for AI liability would provide a consistent legal framework, ensure that the legal protections in place are more easily understood, and make remedies easier to pursue. This would follow the general trend in EU legislation of moving from directives to regulations to avoid fragmentation.

4. Mixed Liability Framework

Currently, the AILD takes a fault-based liability approach, while the PLD is largely based on a no-fault, or strict, liability approach. Together, the AILD and the PLD aim to provide clear paths to redress for individuals. However, the European Parliament has previously advocated for a strict liability regime for high-risk AI systems and a presumption of negligence for all other AI systems. In the Impact Assessment, the EPRS also appears to take the view that a two-track approach is the most appropriate way to address harm caused by AI. The Impact Assessment notes that, should a strict liability regime be introduced for AI systems under the AILD, it would have to distinguish between:

  • Illegitimate harm models (AI systems which should not cause harm during their proper operation, e.g. medical AI or autonomous vehicles); and
  • Legitimate harm models (AI systems which are designed to cause a form of harm in their proper functioning, e.g. AI systems used for credit scoring or recruitment, which may lead to a loss of opportunity for some individuals).

However, the Impact Assessment notes that, although strict liability could simplify compensation processes for harm caused by AI systems, this must be weighed against the potential negative effect strict liability could have on AI innovation and deployment.

Conclusion

Whether the recommendations of the Impact Assessment will be incorporated into the AILD remains to be seen. However, a rethink of the AILD may well be on the horizon. This could include an expansion of the liability regime to encompass software generally, or a change from a directive to a regulation to ensure uniform application across the EU.

This document has been prepared by McCann FitzGerald LLP for general guidance only and should not be regarded as a substitute for professional advice. Such advice should always be taken before acting on any of the matters discussed.