Commission proposes new civil liability rules for AI systems
The proposed AI Liability Directive will modernise the EU civil liability framework, laying down uniform rules on civil liability for damage caused by AI systems.
Once in place, the proposed AI Liability Directive (“Directive”)1 will adapt and harmonise certain non-contractual civil liability rules where the damage caused involves the use of AI systems. It aims to:
- ensure that victims of damage caused by an AI system obtain equivalent protection to victims of damage where no AI system is involved;
- reduce legal uncertainty of businesses developing or using AI regarding their possible exposure to liability, particularly in cross-border cases; and
- prevent the emergence of fragmented AI-specific adaptations of national civil liability rules.
The AI Liability Directive complements the upcoming Artificial Intelligence Act (“AI Act”), which is currently making its way through the EU legislative process.2 That legislation aims to reduce risk and prevent damage associated with AI systems. The AI Liability Directive steps in when that damage materialises.
What is an AI system?
The definition of an AI system will come from the upcoming AI Act and is not yet finalised. A recently proposed iteration of the definition describes a system that is designed to operate with elements of autonomy. Based on machine- or human-provided data and inputs, it infers how to achieve a given set of objectives using machine learning or logic- and knowledge-based approaches. It produces system-generated outputs such as content, predictions, recommendations or decisions, influencing the environments with which the AI system interacts.3
Why is the Directive needed?
Currently, when a person seeks compensation for damage, Member States’ general fault-based liability rules usually require that person to prove a negligent or intentionally damaging act or omission (“fault”) by the person potentially liable for that damage, as well as a causal link between that fault and the relevant damage.
However, when AI is interposed between the relevant act or omission and the damage, the characteristics of certain AI systems, such as opacity, autonomous behaviour and complexity, can make it very difficult, if not impossible, for the injured person to meet the required burden of proof.
It may be very difficult to prove that a specific input, for which the potentially liable person is responsible, caused a specific AI system output, which led to the damage.
Ultimately, if the challenges of AI make it too difficult to obtain redress for damage, there will be no effective access to justice. In turn, this could lead to a lower level of societal acceptance of, and trust in, AI and impede the transition to the digital economy. AI use is also seen as a critical enabler for reaching the sustainability goals of the European Green Deal.
What will the Directive do?
The Directive will ease the burden of proof on an injured party by introducing a rebuttable “presumption of causality” in respect of the damage concerned, once certain conditions are satisfied. The application of the presumption may vary depending on the circumstances of the case.
In addition, there will be harmonised rules on the preservation and disclosure of evidence by providers or users of high-risk AI systems.4 Such orders can extend to non-parties where the plaintiff has already undertaken all proportionate attempts to gather the relevant evidence from the defendant.
Disclosure will be subject to appropriate safeguards to protect sensitive information, such as trade secrets. Any such court order should be “necessary and proportionate” in the circumstances, and blanket requests for information will be impermissible.
Where a defendant is subject to a disclosure order but does not comply, the court should presume non-compliance with the relevant duty of care that the requested evidence was intended to prove. The defendant should be able to rebut that presumption.
The Directive follows a minimum harmonisation approach here. Plaintiffs will still be able to invoke more favourable rules of national law, if available.
Where will the Directive apply?
The new rules will apply to claims brought by any natural or legal person against any person whose fault influenced the AI system that caused the damage. The damage can be of any type recognised under national law, including damage resulting from discrimination or a breach of fundamental rights such as privacy.5 Subrogated claims and representative actions will also be possible.6
What will the Directive not do?
The Directive will not harmonise general aspects of civil liability, which may be regulated in different ways across Member States. Examples include the definition of fault or causality, the different types of damage that give rise to claims for damages, the distribution of liability among multiple tortfeasors, contributory conduct, the calculation of damages and limitation periods.
The Directive does not harmonise national laws in relation to the burden or standard of proof, except where it lays down certain presumptions, as set out above.
It will not affect any rights which an injured person may have under national rules implementing the Product Liability Directive. Nor will it cut across liability rules in the transport sector, the Digital Services Act or the GDPR. The Directive does not apply to criminal liability.
Transposition and future developments
As things stand, Member States will have two years to transpose the finalised Directive. It will only apply to damage that occurs after the date of transposition.
This may not be the last legislative intervention as regards AI liability. Following stakeholder consultation, the Commission has set out its plans for a staged approach. The Directive is seen as the first “minimally invasive” stage. A second stage will involve assessing the need for more stringent or extensive measures.
To this end, the Directive provides for a monitoring programme to provide the Commission with information on incidents involving AI systems. A targeted review will assess whether, having regard to certain factors such as risk, additional measures such as a strict liability regime or mandatory insurance for operators should be put in place.
Comment
The proposed legislation will now move through the EU legislative process and may be amended along the way, particularly given its interaction with and reliance on the AI Act. While Member State implementation still looks some distance off, those potentially caught by the legislation, such as providers and users of AI systems, should have it on their radar now.
1. Proposal for a Directive of the European Parliament and of the Council on adapting non-contractual civil liability rules to artificial intelligence (AI Liability Directive) COM (2022) 496 final.
2. See Proposal for a Regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts COM (2021) 206 final. See also our related briefing here.
3. Fourth Presidency compromise text from the Council of the European Union dated 19 October 2022, showing various iterations of the definition as of that date. Available here at page 48.
4. See Article 3 of the proposed AI Act for definitions of “provider” and “user” and Article 6 in relation to “high-risk AI systems”.
5. The Explanatory Memorandum explains that the Product Liability Directive covers a producer’s no-fault liability for defective products, leading to compensation for certain types of damage, mainly suffered by individuals. In contrast, the AI Liability Directive deals with AI systems and covers national liability claims mainly based on the fault of any person, with a view to compensating any type of damage and any type of victim, depending on the types of compensation available at national level. There is also a proposal for a revision of the Product Liability Directive, which will deal with AI-enabled products. Together with the proposed AI Liability Directive, this forms part of a package adapting liability rules to the digital age and AI. The two instruments are complementary and aligned, and are intended to form an overall effective civil liability system.
6. Directive (EU) 2020/1828 of the European Parliament and of the Council of 25 November 2020 on representative actions for the protection of the collective interests of consumers and repealing Directive 2009/22/EC, OJ L 409, 4.12.2020, p. 1, will be amended to facilitate this.
This document has been prepared by McCann FitzGerald LLP for general guidance only and should not be regarded as a substitute for professional advice. Such advice should always be taken before acting on any of the matters discussed.