
Written by: Julia Arenos Karsten, Working Group on Digital Policy
Edited by: Kristina Welsch
Executive summary
The European Union has undertaken multiple efforts to ensure that AI systems are developed and deployed as tools that serve people and protect human rights. However, the current Artificial Intelligence (AI) regulatory framework primarily focuses on risk mitigation and prevention, offering little to no measures to compensate victims when risks materialise and harm occurs. As a result, EU citizens remain unprotected and lack accessible mechanisms to report harm, seek protection, and claim compensation.
This policy brief examines the liability rules within the existing AI regulatory framework, identifying gaps and areas for improvement to ensure effective protection for EU citizens in cases of harm. The decisions made by the new Commission in this area will be crucial for closing the gap and ensuring fair compensation for victims when AI systems cause harm, while fully upholding and respecting human rights.
1. Introduction
AI is increasingly becoming part of our daily lives and shaping the way governments and civil actors operate. While AI is expected to improve public services and well-being, strengthen democracy and contribute to crime prevention, it also poses notable threats, from AI-driven phishing attacks and algorithmic biases to potential data breaches (European Parliament, 2025). Recognising the rapid pace of AI’s development and foreseeing the potential consequences of its unregulated effects, the European Commission (EC) introduced the AI Act, effective from August 1, 2024, aiming to foster innovation while safeguarding fundamental rights. To ensure this, the Regulation adopts a risk-based approach: the Act identifies four types of risk (unacceptable, high, limited, and minimal) and stipulates that by August 2026, all actors within the AI value chain – such as providers, deployers, distributors, importers, and manufacturers – will have to fully comply with a set of obligations based on their AI system’s risk level. Additionally, specific provisions are outlined for general-purpose AI systems, such as ChatGPT.
This policy brief, however, will focus exclusively on high-risk AI systems, defined by Article 6 of the AI Act as those “posing significant risks of harm to health, safety or fundamental rights” (Art. 6, AI Act). These systems are expected to be used mainly across eight key domains: biometrics, critical infrastructure, education and training, employment, access to essential services, law enforcement, migration and border control, and democratic processes. Within this scope, most of the obligations set out by the EC are oriented towards mitigating, preventing and addressing risks, with the ultimate goal of preventing future harm. This EU strategy is referred to here as a risk-preventive approach. It has been essential in positioning the European Union as a world-class hub for human-centric and trustworthy AI governance. However, addressing risks does not inherently involve victim compensation; this is the case in the EU, where few if any obligations apply in the event of damage to the civilian population, leaving people unprotected and without means to claim compensation. The absence of a comprehensive liability framework creates a gap that hinders the effective roll-out of AI across Europe.
The first section of the paper delves into the existing EU obligations for high-risk AI systems, covering both preventive measures and victim-compensation mechanisms, referred to here as AI liability rules. The second section provides an in-depth analysis of existing EU AI liability rules, focusing on the EC’s proposal for an AI Liability Directive, which represents an opportunity for the new Commission to close the existing gap. The policy brief concludes with a set of recommendations for strengthening EU liability rules, aiming to create a robust AI legislative framework that effectively protects the integrity and well-being of all EU citizens, residents, and legal entities.
2. Background and problem description
2.1. Risk-preventive measures in place
The AI Act’s provisions will become fully applicable by August 2026, with specific stipulations taking effect earlier, by February 2025 (Article 113, AI Act). Among these early obligations, the requirements outlined in Article 4 directly affect high-risk AI systems, mandating that AI providers offer AI literacy training to their staff and other persons involved in the use and operation of their AI systems. This initial responsibility already reflects the AI Act’s risk-preventive approach, as AI literacy is expected to equip employees with the tools needed to effectively prevent data breaches, secure sensitive information, and identify potential data biases (Gates, 2024). Following this early phase of implementation, another set of measures will become applicable by August 2026. These measures will require AI system providers to ensure compliance across six key domains.
Obligation | Description |
To establish a risk management system | Providers shall establish a risk management system, a tool through which potential high risks are foreseen and documented, together with the respective targeted measures to address those risks (Art. 9, AI Act). |
To establish a quality management system | Providers shall ensure compliance and accountability, including compliance with cybersecurity requirements as per Article 15. Such a system will support documentation, conformity assessments, and compliance procedures throughout the entire AI system lifecycle (Art. 17, AI Act). |
To have data governance and training in place | Providers shall ensure that training, validation, and testing data meet high-quality criteria to minimise errors and prevent biases (Art. 10, AI Act). |
To produce documentation and record keeping | Providers shall produce technical documentation and record keeping for their high-risk AI systems before they enter the market, and keep the documentation for 10 years. Along the same lines, a record of the lifecycle events of the AI system shall be kept for traceability and risk prevention purposes (Art. 11/12, AI Act). |
To ensure transparency and human oversight | Providers shall create clear instructions on system operation, enabling natural persons to understand the system and better identify potential risks when human oversight measures are implemented (Art. 13/14, AI Act). |
To report serious incidents | Providers shall report serious incidents to the market surveillance authorities of the Member States where the incident occurred. This includes a written report together with a risk assessment and proposed corrective actions (Art. 73, AI Act). |
The analysis of these obligations shows that the quality management system, data governance, documentation and record keeping, as well as transparency and human oversight, adopt a purely preventive approach. These measures share a common focus on compliance with regulatory standards, such as cybersecurity, data quality, and conformity assessment requirements, aiming to prevent potential risks. Traceability and transparency are also cornerstones, achieved through management systems, documentation, and clear instructions. The overall objective is to ensure that AI systems are well understood, auditable, and evaluated in order to sustainably and effectively mitigate risks.
By contrast, the two remaining obligations, the risk management system and the reporting of serious incidents, adopt an approach that leaves room for risks to be addressed. However, it is important to note that addressing risks does not necessarily imply victim compensation; in some cases, risks are managed only at a substantive level, with little to no attention given to addressing the collateral damage caused. This is the case for these two obligations: although corrective measures are proposed to address risks, there is no explicit mention of measures taken to repair damages or compensate victims.
Overall, the AI Act lacks provisions on liability rules that guide entities on what to do after damage or harm has occurred and how to effectively compensate victims (Torfs & Jacobs, 2024). By not including liability provisions within the Act, the European Union left a gap in which victims remain unprotected. Furthermore, in relying on the existing EU liability instruments, the European Commission failed to recognise that the existing legal framework remains insufficient to ensure effective victim compensation in the AI domain.
2.2. EU AI liability rules in place
With respect to damage caused by AI systems, the existing EU liability framework consists of the revised Product Liability Directive 85/374/EEC (PLD), complemented by national liability rules. Before December 2024, and prior to the new PLD taking effect, victims harmed by AI systems were limited to recourse through national fault-based liability regimes. This means that the victim had to prove the existence of damage, fault, and the link between the two (EPRS, 2023). Due to the inherent features of AI systems, such as opacity and autonomous behaviour, it was very difficult for victims, and in some cases even impossible, to identify and prove fault (EPRS, 2023). These limitations were addressed by some national liability regimes allowing victims to claim damages without the need to prove fault (“strict liability rules”). Nonetheless, this approach still presented challenges, such as the risk of fragmentation among Member States, which could easily result in diverging judicial interpretations and differing compensation for damages, even when the same AI product causes the same type of harm (EPRS, 2023).
The PLD was therefore adapted to emerging technologies such as software and AI applications, harmonising strict liability rules, and entered into force on 9 December 2024 (Barnes & Kelly, 2024). For the first time, individuals harmed by defective software, including AI systems, will be able to claim compensation for damages. This applies to various types of harm: not only physical injuries and material losses, but also data loss and mental health impacts. Moreover, claims can be made up to 10 years after the product is placed on the market, without the need to prove fault; the person responsible for the defective product will be held liable for any damage caused and the victim compensated (EPRS, 2023).
Currently, the EU liability framework consists of the new PLD, complemented by national liability regimes. However, such a framework is insufficient to ensure fair victim compensation in the AI domain, as numerous gaps remain, particularly concerning the revised PLD (EPRS, 2023). Firstly, claiming compensation requires a clear contractual link between the victim and the liable party, mainly the producer. Identifying this link is particularly challenging for AI systems, where numerous entities co-exist within the AI value chain (EPRS, 2019). Secondly, the new PLD applies only to natural persons, thus excluding legal entities. This means businesses and other organisations have no legal recourse available when they suffer financial losses or other harm caused by faulty AI systems (EPRS, 2019). Finally, the new PLD’s scope is limited to AI systems as defective products; however, due to the dynamic nature of AI, such as continuous learning, identifying a “defect” might be complex (Howarth, Chandler, & Behrendt, 2025).
These shortcomings were identified by the EC, which proposed the AI Liability Directive (AILD) in September 2022, establishing non-contractual civil liability rules for AI systems. However, as part of the new Commission’s work programme adopted on 11 February 2025, the withdrawal of the proposal was announced, on the premise that less regulation would foster innovation. Regarding next steps, it was stated that no agreement is foreseeable and that “the Commission will assess whether another proposal should be tabled or another type of approach should be chosen” (Andrews, 2025).
3. EU AI Liability Directive
The proposal made by the EC addressed the aforementioned shortcomings, mainly by extending the scope to include non-contractual claims for damages, legal persons, and damages other than those caused by defective products (AI Liability Directive, 2022). The AILD also aims to harmonise national non-contractual fault-based liability rules. Strict liability rules, under which the victim is not required to prove fault, are not considered in the proposal. During negotiations, the Parliament suggested including strict liability rules in the directive only for victims harmed by high-risk AI systems, together with mandatory insurance (EPRS, 2019). However, in 2022, the EC rejected the Parliament’s proposal, as fully reversing the burden of proof was seen as something that would negatively affect AI system providers, limiting innovation and the commercialisation of AI products (EPRS, 2019). Therefore, under this new liability regime, victims will still have to prove harm by providing evidence that someone did not comply with the existing rules and that this non-compliance caused the harm. There is, however, no need to prove how and why an AI system produced a harmful output; victims are thus relieved from demonstrating the inner functioning of AI systems (AI Liability Directive, 2022). Seeking a balance between innovation and safeguarding human rights, the EC also introduced both the disclosure of evidence and the presumption of causality into the directive. Under these provisions, victims still carry the burden of proof, but that burden is expected to be reduced and alleviated (EPRS, 2019).
The first provision, set out under Article 3, provides persons harmed by high-risk AI systems with the means to obtain the information needed to prove fault. National courts may therefore order liable parties to disclose the necessary information to the victims. Should these parties fail to provide the needed information, the so-called presumption of causality applies. Article 4, paragraph 4, notes that if the parties fail to provide the needed information, or if it remains very difficult for the victim to prove fault, the court will apply the presumption of causality. This means that, even without evidence, the fact that damage and harm occurred will be taken to imply that the AI system was not compliant with the rules, and the victim shall therefore be compensated (AI Liability Directive, 2022). At the same time, the AILD establishes provisions that significantly benefit AI operators (referred to here as defendants) whenever harm is caused, primarily by introducing the option of rebuttal. This allows those responsible for the damage to provide evidence that their fault did not cause any harm. If this is proven, the presumption of causality will not apply, and the victim will not be compensated until fault is established, as per recital 30. Likewise, if the defendant demonstrates that the victim has sufficient evidence and expertise to prove fault, the presumption of causality will also not apply, as per Article 3, paragraph 5.
Overall, the AILD proposed by the Commission is expected to complement the revised PLD, and with it, establish a comprehensive AI liability framework that ensures that when damage has been caused, victims are compensated. Nonetheless, significant shortcomings have been identified, particularly the fact that the directive does not relieve victims from the burden of proof. Under this regime, victims are still required to prove fault and face substantial administrative burdens and costs. While provisions such as the presumption of causality and access to information are in place, they are insufficient to effectively safeguard the rights of victims.
4. Policy recommendations
While discussions on the EU AI Liability Directive were suspended in late 2023 (Bird & Bird, 2024), the significant uncertainty surrounding the file was settled in February 2025, when the new Commission decided to withdraw the proposal while noting that it would assess the best way to proceed. The European Commission should take into consideration the following amendments when adopting the EU AI Liability Directive as the foundation for a new legislative proposal:
- Incorporate strict liability rules for high-risk AI systems into the new proposal: Currently, under the proposed framework, victims of high-risk AI systems are only relieved from proving fault if the damage results from a defect in the AI system, as outlined in the PLD. Even in such cases, victims are still required to prove the defect and the contractual link, leading to potentially complex and costly administrative burdens, such as conducting investigations (EPRS, 2024). Incorporating strict liability rules for high-risk AI systems into the new proposal would significantly alleviate the burden on victims to prove fault. Furthermore, this approach would eliminate existing administrative hurdles, allowing victims to rely on a unified legal framework to seek compensation for damages, without the need to prove a defect or a contractual link.
- The new proposal should be a Regulation and not a Directive: While AI liability rules in the form of a directive would be binding on Member States in terms of the results to be achieved, the choice of implementation methods would remain at the discretion of each Member State. However, for an AI liability regime, a regulation would be more appropriate. A regulation would establish clear and uniform requirements for all Member States, ensuring consistency in how the rules are applied. This is particularly important given the significant scope for judicial interpretation in AI liability cases, as well as the inherent technical expertise gap between victims and AI operators.
- Alternative policy recommendations: Should the EC not adopt the above recommendations in the interest of innovation and the commercialisation of AI products, and assuming the new proposal will build on the withdrawn EU AI Liability Directive, the following amendments should be considered:
- Understandable disclosure of information for victims: According to the AI Liability Directive, Member States are required to ensure that victims have at their disposal all the information needed to prove fault. A reference should be added stating that the disclosed information shall be presented in an understandable manner, accessible to non-experts as well. Such an addition would ensure that all victims have the tools to use and understand the information, regardless of their technical expertise, guaranteeing equal access to justice for all.
- No allowance for rebuttals: When a victim is given the “presumption of causality,” meaning they do not have to prove fault, the AI operator responsible for the harm should not be allowed to take this protection away. Allowing the operator to argue that their system did not cause the harm or that the victim can prove fault on their own would leave the victim unprotected.
5. Conclusion
This policy brief has brought to light that the current EU AI liability framework presents significant shortcomings. The findings also indicate that the new Commission’s strategy of reducing regulation to foster innovation, particularly in the case of AI liability rules, compromises the protection of human rights. The existing governance, mainly based on national fault-based rules and the Product Liability Directive, places significant administrative and financial burdens on victims harmed by AI systems, making it difficult for them to claim and obtain fair compensation. To address those shortcomings, the Commission proposed the EU AI Liability Directive. However, by withdrawing the existing proposal, the new Commission has missed the opportunity to address the existing gaps, leaving victims without adequate means for compensation.
Based on this situation, this policy brief argues that the establishment of a fair and effective EU AI liability framework, especially for victims harmed by high-risk AI systems, should become a priority for the new Commission, particularly DG CNECT. The analysis, focused mainly on the AI Act, demonstrated that EU regulation of AI primarily adopts a risk-preventive approach, with risk prevention and mitigation as its cornerstone. It also became evident that the current provisions are not enough to safeguard the right of victims to claim and obtain compensation, especially as victims face considerable hurdles, administrative barriers and high costs when claiming compensation.
Aiming to address the existing shortcomings, the Commission made a proposal for an EU AI Liability Directive: while the potential of the proposal is acknowledged, the Commission has not effectively addressed the existing gaps. In its effort to protect the interests of AI providers and manufacturers, and to advocate for innovation and the commercialisation of AI products, the Commission has failed to ensure that the right of victims to claim and obtain compensation is adequately safeguarded. Furthermore, the recent decision made by the new Commission to withdraw the proposal has erased the little hope the proposal was offering. With appropriate amendments, the proposal could have ensured fair victim compensation.
In light of this, the policy recommendations in this brief are specifically targeted at EU policymakers from the new Commission, highlighting the need for a new legislative proposal that takes the previous AI Liability Directive as a foundation. Finally, these policy recommendations aim to contribute to the broader goal of influencing the European Commission’s vision for EU AI liability rules. They seek to raise awareness among high-level decision-makers at the European Commission that the current framework fails to safeguard fundamental rights, such as the right to an effective remedy set out under Article 47 of the EU Charter of Fundamental Rights, and that such a situation ultimately hinders the EU’s broader goal of positioning itself as a global leader in establishing human-centric and trustworthy AI governance – a goal that should remain a priority for the European Union given the ongoing technological race in the AI sector.
6. References
A&O Shearman (2024). Zooming in on AI: What are the obligations for “high-risk AI systems”? https://www.aoshearman.com/en/insights/ao-shearman-on-tech/zooming-in-on-ai-10-eu-ai-act-what-are-the-obligations-for-high-risk-ai-systems
Andrews, C. (2025). European Commission withdraws AI Liability Directive from consideration. https://iapp.org/news/a/european-commission-withdraws-ai-liability-directive-from-consideration
Barnes, P., & Kelly, C. (2024). Navigating the new EU Product Liability Directive: Key changes and impacts. Clyde & Co. https://www.clydeco.com/en/insights/2024/11/navigating-the-new-eu-product-liability-directive
Bird & Bird (2024). AI as a digital asset: Civil liability regime for AI. https://www.twobirds.com/en/capabilities/practices/digital-rights-and-assets/european-digital-strategy-developments/ai-as-a-digital-asset/ai-as-a-digital-asset/civil-liability-regime-for-ai
Braun, M., Vallery, A., & Benizri, I. (2024). What are high-risk AI systems within the meaning of the EU’s AI Act and what requirements apply to them? WilmerHale. https://www.wilmerhale.com/en/insights/blogs/wilmerhale-privacy-and-cybersecurity-law/20240717-what-are-highrisk-ai-systems-within-the-meaning-of-the-eus-ai-act-and-what-requirements-apply-to-them
Coraggio, G. (2024). Why is an AI training on your employees essential and mandatory? Gaming Tech Law. https://www.gamingtechlaw.com/2024/08/ai-training-employees-essential-mandatory/
Erb, H., Kaztaridou, A., & Dante De Falco, F. (2024). EU AI Liability Directive on hold: what lies ahead? Linklaters. https://www.linklaters.com/en/insights/blogs/productliabilitylinks/2024/june/eu-ai-liability-directive-on-hold
European Commission. (2022). Proposal for a DIRECTIVE OF THE EUROPEAN PARLIAMENT AND OF THE COUNCIL on adapting non-contractual civil liability rules to artificial intelligence (AI Liability Directive). https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CELEX:52022PC0496
European Commission. (2024). Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence and amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (Artificial Intelligence Act). OJ L, 2024/1689, 12.7.2024, ELI: http://data.europa.eu/eli/reg/2024/1689/oj
European Commission. (2024). Directive (EU) 2024/2853 of the European Parliament and of the Council of 23 October 2024 on liability for defective products and repealing Council Directive 85/374/EEC. OJ L, 2024/2853, 18.11.2024, ELI: http://data.europa.eu/eli/dir/2024/2853/oj
European Commission. (2020). Liability for Artificial Intelligence and other emerging technologies. Report from the expert group on liability and new technologies. https://www.europarl.europa.eu/meetdocs/2014_2019/plmrep/COMMITTEES/JURI/DV/2020/01-09/AI-report_EN.pdf
European Parliamentary Research Service. (2023). New Product Liability Directive. https://www.europarl.europa.eu/RegData/etudes/BRIE/2023/739341/EPRS_BRI(2023)739341_EN.pdf
European Parliamentary Research Service. (2024). Proposal for a directive on adapting non-contractual civil liability rules to artificial intelligence. https://www.europarl.europa.eu/RegData/etudes/STUD/2024/762861/EPRS_STU(2024)762861_EN.pdf
European Parliamentary Research Service. (2023). Artificial intelligence liability directive. https://www.europarl.europa.eu/RegData/etudes/BRIE/2023/739342/EPRS_BRI(2023)739342_EN.pdf
European Parliament. (2025). Artificial intelligence: threats and opportunities. https://www.europarl.europa.eu/topics/en/article/20200918STO87404/artificial-intelligence-threats-and-opportunities
Gates, B. (2024). AI is about to completely change how you use computers. Gates Notes. https://www.gatesnotes.com/AI-agents
Howarth, Chandler, & Behrendt. (2025). AI liability – who is accountable when artificial intelligence malfunctions? https://www.taylorwessing.com/en/insights-and-events/insights/2025/01/ai-liability-who-is-accountable-when-artificial-intelligence-malfunctions
Kraul, T., & Maamar, N. (2024). Study of the EU Parliament on AI liability. Noerr. https://www.noerr.com/en/insights/ai-liability-directive-study-of-the-eu-parliament-on-ai-liability
Torfs, W., & Jacobs, E. (2024). AI liability in the EU. Timelex. https://www.timelex.eu/en/blog/ai-liability-eu