Written by: Jùlia Castro, Working Group on Digital Policy

Edited by: Beatriz Raichande

Executive summary 

This paper offers an overview of how AI might automate aspects of dispute resolution – Alternative Dispute Resolution (ADR) and Online Dispute Resolution (ODR) – by examining, at a high level, both the potential benefits and the practical and ethical challenges that arise. By looking into the EU ADR and ODR regulations and the AI Act, this paper underscores the necessity for a thoughtful, cautious approach in implementing AI, emphasizing the importance of human oversight, transparency and confidentiality. While AI’s capabilities hold promise, this paper concludes that, in its current state, the automation of justice should be approached with caution and limited in its scope, ensuring that the integrity of the dispute resolution process remains intact.

  1. Introduction 

The increasing role of artificial intelligence (AI) in legal processes is reshaping the landscape of dispute resolution. AI’s ability to interrogate arguments and evidence, identify gaps or weaknesses and predict the outcomes of a dispute presents an exciting opportunity to enhance the efficiency and effectiveness of legal procedures. Specifically, in the world of Alternative Dispute Resolution (ADR), AI could significantly alter traditional mechanisms such as negotiation, mediation, expert determination and arbitration. By automating certain aspects of these processes, AI has the potential to accelerate dispute resolution, encourage earlier settlements and streamline the identification of key issues in contention.

    Arbitral tribunals, in particular, benefit from broad discretionary powers in managing proceedings, as set out in national laws and procedural rules like the UNCITRAL Arbitration Rules. Article 17(1) of the UNCITRAL rules, for example, emphasises the tribunal’s authority to structure the arbitration process in a manner that ensures equality, fairness, efficiency, and avoids unnecessary delay and expense. However, while this discretion facilitates innovation, it also raises several important considerations, particularly in relation to the use of AI in arbitral proceedings. The question remains whether tribunal discretion over procedural management can sufficiently address key concerns such as transparency, the disclosure of AI’s role, and ensuring clear expectations for how AI will be integrated into the process.

    Negotiation, mediation, and arbitration are the most common ADR methods, with principled-integrative negotiation being a widely used approach. This method promotes fairness and justice by allowing parties to stay in control, actively participate, and make informed decisions to reach win-win solutions.

    Ultimately, this exploration of AI in ADR is framed within the context of existing regulations, such as the General Data Protection Regulation (GDPR) (Regulation (EU) 2016/679) and emerging frameworks like the EU’s AI Act. While these regulatory structures provide some guidance, the fast-evolving legal landscape surrounding AI in dispute resolution calls for ongoing attention to how parties, counsel and tribunals interact with this technology, both now and in the future.

    As has been noted by experts in the field (Solarte-Vasquez & Hietanen-Kunwald, 2020), procedural fairness and efficiency are pivotal concerns in international arbitration. Many advancements in ODR have been based on the functions of traditional ADR entities, with the goal of replicating the dispute resolution methods of conventional ADR processes and offering online alternatives. 


    There are different views within the arbitration community: while the International Chamber of Commerce (2018), for example, has considered that AI will be a good support for dispute resolution, others are concerned about its challenges in terms of privacy and accuracy, like Yuval Noah Harari (2023) who has argued that AI has “hacked the operating system of human civilisation”. 

2. Background

    a. ADR/ODR framework in the EU 

        The EU has implemented instruments such as Directive 2013/11 (Consumer ADR Directive) and Regulation 524/2013 (Regulation on consumer ODR) to ensure the public’s access to convenient and effective online dispute resolution. EU Directive 2013/11 marked a turning point in consumer protection, as it began to guarantee consumers the opportunity to submit a dispute regarding a product or service against an EU supplier to an Alternative Dispute Resolution body (Directive 2013/11/EU on ADR).

        The new European legislation does not exhaustively regulate the development of ODR in Europe. The task of EU ODR development, and its implementation, has been technically entrusted to the Member States and to the certified ADR entities in each Member State. The Directive has established only minimal harmonisation criteria, which Member States may modify or supplement within their national legal frameworks. 

        The procedure of the EU ODR Platform involves several steps: consumers submit their complaints through an online form, triggering an automatic notification to the seller along with an invitation to propose an ADR authority.

As per Regulation (EU) 524/2013, the parties then have thirty days to agree on an appropriate ADR body. If the online vendor fails to propose a dispute resolution body or if the parties cannot reach an agreement within this time frame, the complaint is automatically closed. If the consumer agrees to the proposed ADR entity, the ODR Platform automatically transfers the complaint to the accepted body. Once the ADR body receives all relevant documents from both parties, it has ninety days to review the complaint and suggest a resolution. Nevertheless, because many filed complaints were failing, the platform incorporated a new “direct talks” functionality, which allows the parties to communicate without a certified third party and effectively transforms the procedure into a non-ADR process. According to the 2021 report on the functioning of the ODR Platform, in 2020 an average of 5% of EU consumers filed their complaints with an ADR body, and only 8% would contact an ADR body if they had a problem in the future (European Commission, 2021).
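The deadline structure described above can be sketched as a simple state machine. This is a hypothetical simplification of the Regulation’s thirty- and ninety-day windows, not the platform’s actual implementation; all function and field names are illustrative assumptions:

```python
from datetime import date, timedelta
from typing import Optional

AGREEMENT_WINDOW = timedelta(days=30)  # parties must agree on an ADR body
REVIEW_WINDOW = timedelta(days=90)     # ADR body must review and suggest a resolution

def complaint_status(filed: date, agreed_on_adr_body: bool,
                     docs_received: Optional[date], today: date) -> str:
    """Return the state of a complaint under the Regulation's deadlines (sketch)."""
    if not agreed_on_adr_body:
        if today - filed > AGREEMENT_WINDOW:
            return "closed"  # no agreement within thirty days: automatic closure
        return "awaiting ADR body agreement"
    if docs_received is None:
        return "transferred to ADR body"
    if today - docs_received <= REVIEW_WINDOW:
        return "under review"
    return "resolution overdue"

# Example: filed 1 January, no agreement on an ADR body by 15 February -> closed
print(complaint_status(date(2024, 1, 1), False, None, date(2024, 2, 15)))
```

The sketch only tracks deadlines; the real platform also handles notification of the seller, the invitation to propose an ADR authority and the transfer of documents.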

        Some experts have indicated that the endeavour to transfer traditional alternative dispute resolution methods to online platforms has encountered significant difficulties and that ODR has rapidly evolved to establish distinct processes that differ from the conventional functions of traditional dispute resolution mechanisms (Katsh & Rabinovich-Einy,  2017).

        In conclusion, the European ADR/ODR regulatory framework can be considered a key pillar in the development of ODR in Europe. It not only directly supports ODR growth through the EU platform but also requires Member States to fulfil obligations that extend beyond individual dispute resolution, fostering broader ODR development across the continent. However, for ODR to reach its full potential, the EU must take an active role in guiding and advancing this initiative.

        b. The EU AI Act 

The EU AI Act (Regulation (EU) 2024/1689), adopted by the European Parliament on 13 March 2024, is the landmark legislation laying down harmonised rules on AI. While the AI Act addresses how AI affects court proceedings, it offers little specific guidance for arbitration.

Under the AI Act’s risk-based approach, which classifies activities according to the potential harm caused by AI systems (Recital 26), arbitration falls within the ‘high-risk’ category. According to Annex III, point 8(a), AI systems used by or on behalf of judicial authorities to research, interpret facts and law, and apply the law (or those used similarly in alternative dispute resolution) are considered high-risk. Furthermore, Recital 61 specifies that AI systems employed by alternative dispute resolution bodies should also be classified as high-risk when their decisions produce legal effects for the parties involved. Therefore, AI used in alternative dispute resolution, including arbitration, may be classified as “high-risk” if it influences legal outcomes. This classification could impose obligations on arbitrators using AI, such as monitoring its operation and ensuring human oversight.

Looking into the exemptions under Article 6(3), AI systems may be exempt from the high-risk classification if they do not pose a significant risk to “health, safety, or fundamental rights”, including by not materially influencing decision-making. Exemptions apply when AI is used for procedural tasks (which in arbitration could include tracking deadlines or payments), enhancing human decisions (e.g. refining finalised orders), pattern detection without influencing outcomes (e.g. checking award consistency) or preparatory tasks (e.g. summarising case facts). These are, at a glance, administrative tasks, and may fall outside the scope of the high-risk classification. While this helps to clarify the regulation’s scope, the AI Act primarily focuses on protecting natural persons, leaving its application to legal entities uncertain. A broad interpretation may be necessary to determine whether individuals behind a legal entity could be affected by an arbitral decision.
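The exemption logic just described can be expressed as a small rule set. The following is a hypothetical illustration of how the Article 6(3) categories map onto arbitral tasks, not legal advice; the task labels and examples are assumptions drawn from the discussion above:

```python
# Hypothetical sketch of the Article 6(3) exemption logic for AI in arbitration.
# Task categories and the examples in comments are simplifications, not the Act's text.
EXEMPT_TASKS = {
    "procedural",         # e.g. tracking deadlines or payments
    "enhance_human",      # e.g. refining finalised orders
    "pattern_detection",  # e.g. checking award consistency, without influencing outcomes
    "preparatory",        # e.g. summarising case facts
}

def risk_class(task: str, influences_legal_outcome: bool) -> str:
    """Classify an arbitral AI use under the Act's risk-based approach (sketch)."""
    if task in EXEMPT_TASKS and not influences_legal_outcome:
        return "not high-risk"  # narrow administrative support
    return "high-risk"          # Annex III: interpreting facts/law, legal effects

print(risk_class("procedural", False))     # not high-risk
print(risk_class("legal_analysis", True))  # high-risk
```

In practice the assessment is far more nuanced: whether a given use “materially influences decision-making” is a legal judgment, not a boolean flag.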

The Act also has a broad territorial scope, applying not only within the EU but potentially beyond. If an AI-assisted arbitration decision impacts an EU-based party or is enforced in the EU, the Act’s provisions could apply even if the arbitration itself took place elsewhere. However, its practical application remains unclear, particularly regarding exceptions for AI used in limited procedural tasks or as a support tool rather than a decision-making system. Nevertheless, part of the international arbitration community views the Act as beneficial for improving procedural efficiency in arbitration (Rauch, 2024). The reasoning is that, since AI systems used by ADR bodies are classified as high-risk when their outcomes have legal implications for the parties (as in arbitral proceedings), AI will in practice be limited to administrative support, reinforcing that final decision-making remains in human hands.

          In conclusion, arbitral tribunals should carefully assess the applicability of the EU AI Act to their AI systems and consider the associated obligations and legal consequences. Given the Act’s risk-based classification, AI systems used solely for preparatory tasks that do not involve determining or interpreting facts and legal provisions are not considered high-risk. Similarly, AI systems assisting in these determinations may also avoid high-risk classification if they meet specific conditions. However, all other AI applications in arbitration will be categorised as high-risk. Therefore, tribunals must carefully evaluate their AI usage to ensure compliance and mitigate legal risks.

3. Problem description: AI in international dispute resolution

There are different ways in which AI can serve arbitration, as in other disciplines: assisting case research, analysing data or helping to streamline the arbitration process. However, the question is not only whether AI could eventually replace human arbitrators entirely, but also whether, in using AI to support arbitration, the dispute resolution process would lose its integrity and essence.

Key challenges include the potential loss of legal evolution, confidentiality, the lack of human discretion and empathy, and public trust. This section analyses some of these challenges, focusing in particular on confidentiality and decision-making and the ethical debates they generate.

            a. Confidentiality 

Disputes always involve large volumes of data, which can be difficult for people to manage. Generative AI tools, by contrast, are trained on vast data sets and have an exceptional capacity for natural language processing, which allows them to search and compare material rapidly. Nevertheless, international arbitration does not in fact involve “big data” sets, mainly because confidentiality requirements in arbitration limit access to arbitral awards and materials.

              To effectively use AI for ADR, models need to be trained on extensive datasets, including thousands of transcripts from real arbitration proceedings, as well as arbitration rules, awards, and decisions. However, a major challenge arises due to the confidential nature of arbitration. Since many arbitration cases are private, the amount of publicly available data is limited. Even though large language models (LLMs) do not necessarily require vast amounts of data to function, restricted access to arbitration-related information could affect their accuracy and reliability. With a limited dataset, AI models may struggle to generate precise and well-informed outputs, potentially reducing their effectiveness in arbitration-related tasks.

Other questions have also been raised about the transparency and control of arbitral data and algorithms. There are possible risks to confidentiality and personal data protection, such as massive numbers of disputes being handled by private entities with substantial market power, like a mega-platform, which escalates privacy concerns. This also raises concerns about the impact of AI decision-making and the need for increased transparency, which will be explored in the next section.

There are also open questions regarding the use of AI-powered technologies in ODR, particularly their potential access to private data and its use beyond originally intended purposes. Private entities that use data to investigate patterns of dispute and evaluate the effects of procedural choices in their dispute resolution processes can also potentially misuse that data for discriminatory or commercial purposes, resulting in violations of consumer privacy and other related rights.

To sum up, AI used in ODR/ADR can breach confidentiality because it requires access to sensitive arbitration data, which is traditionally private. If that data is misused beyond its intended purpose, AI systems may inadvertently reinforce biases or discriminatory practices based on the data they process, and parties involved in disputes may therefore face unfair treatment.

              b. Decision making bias 

                Zeleznikow (2002) describes decision-making as a process of generating and managing knowledge. In this context, the purpose of a decision support system is to assist users in handling knowledge effectively. Such a system enhances users’ ability to represent and process information while supplementing their knowledge management skills with computer-based tools. A decision support system functions by storing, processing, and presenting relevant knowledge to aid decision-making.

                In an earlier study, Lodder & Zeleznikow (2005) proposed a three-step approach for developing ODR systems:

                1. Assessing potential outcomes – The system should first provide feedback on what might happen if negotiations fail, commonly known as the “Best Alternative to a Negotiated Agreement” (BATNA).
                2. Facilitating resolution – The system should then help resolve disputes using argumentation and dialogue techniques.
                3. Decision analysis and trade-offs – For unresolved issues, the system should apply decision analysis techniques and suggest compensation or trade-off strategies to aid resolution.

                If step three does not lead to an acceptable outcome, the system allows the disputing parties to return to step two and repeat the process until either a resolution is reached or a stalemate occurs. A stalemate happens when no further progress can be made between steps two and three. However, even in such cases, ADR methods—such as blind bidding or arbitration—can be applied to a smaller set of issues. This approach helps reduce time and costs while encouraging parties to reconsider whether pursuing their initial demands is worthwhile (Lodder & Zeleznikow, 2005).
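The iterative structure of the three-step approach can be sketched as a control loop. This is a hypothetical illustration of the Lodder & Zeleznikow (2005) process flow, not an actual ODR system; the function names and the caller-supplied callables are assumptions:

```python
def run_odr(issues, assess_batna, facilitate, trade_off, max_rounds=5):
    """Sketch of the Lodder & Zeleznikow (2005) three-step ODR loop.

    assess_batna, facilitate and trade_off are caller-supplied callables:
    facilitate and trade_off each take the list of open issues and return
    the issues still unresolved afterwards.
    """
    assess_batna(issues)                  # step 1: feedback on the BATNA
    for _ in range(max_rounds):           # loop between steps 2 and 3
        remaining = facilitate(issues)    # step 2: argumentation and dialogue
        remaining = trade_off(remaining)  # step 3: decision analysis / trade-offs
        if not remaining:
            return "resolved", []
        if remaining == issues:           # no progress between rounds: stalemate
            return "stalemate", remaining # hand the rest to e.g. blind bidding or arbitration
        issues = remaining
    return "stalemate", issues
```

A stalemate return models the paper’s observation that unresolved issues can still be passed to ADR methods such as blind bidding or arbitration on a smaller set of questions.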

The bias challenge stems largely from the fact that AI decision-makers cannot provide reasons in the same manner as humans and thus cannot fulfil a fundamental prerequisite of justice. Conversely, if the reasoning behind arbitrators’ awards were revealed, bias could be demonstrated wherever consistent patterns were exposed.

4. Recommendations

                  As AI becomes increasingly integrated everywhere, it is no longer a question of whether to use AI in arbitration but rather how to do so responsibly. The following recommendations outline key principles to ensure AI’s integration into dispute resolution remains transparent and human-centred, drawing from the Silicon Valley Arbitration and Mediation Center (SVAMC) Guidelines on the Use of Artificial Intelligence in Arbitration (SVAMC, 2024) and aligning with the transparency, data protection, and accountability requirements of the EU AI Act (Regulation (EU) 2024/1689) and the GDPR (Regulation (EU) 2016/679).

                  Decision-Making

                  • AI should not replace the independent analysis of the facts, law, and evidence required by arbitrators.
                  • AI tools should only be used for procedural support, such as case management, legal research assistance, or administrative tasks—functions that, under Article 6(3) of the AI Act, may be exempt from “high-risk” classification.
• Arbitrators must retain full control over all fact-finding, legal interpretation and final decision-making.

                  Confidentiality and data protection

                  • AI tools used in ADR must comply with GDPR and the AI Act provisions regarding data security and privacy.
                  • Regulations should ensure that AI-powered platforms do not compromise arbitration confidentiality, particularly regarding the storage, use, and retrieval of sensitive arbitration data.
                  • Policies should regulate recording, storage, and use of AI prompt and output histories, ensuring that confidential case information is neither stored nor repurposed beyond its intended use.

                  Disclosure of AI use

                  • The AI Act transparency requirements (Articles 13–14) should be upheld in arbitration, ensuring that parties are informed when AI tools are used.
                  • Disclosure of AI use should be determined on a case-by-case basis, balancing due process rights, confidentiality, and privilege considerations.
                  • When AI tools assist in procedural or analytical tasks, arbitrators and parties must assess whether disclosing their use is necessary to maintain procedural fairness.

                  Duty of competence and diligence 

                  • Parties must verify AI-generated outputs used in submissions, ensuring they are factually and legally accurate.
                  • Party representatives shall bear responsibility for any uncorrected errors, misleading content, or inaccuracies produced by AI tools in arbitration proceedings.

                  Respect for due process 

                  • Arbitrators must not rely on AI-generated information outside the case record without disclosing it to the parties and allowing for comment.
                  • AI-generated materials must be independently verifiable—if an AI tool cannot cite legitimate, reviewable sources, arbitrators must not assume their accuracy.
                  • AI should never be used in a way that compromises procedural fairness or restricts parties’ rights to a fair hearing.

5. Conclusions

                  The integration of AI into ODR represents the next phase in the evolution of dispute resolution systems, offering opportunities to enhance efficiency. While Member States and supranational entities embrace AI to streamline their processes, they must also safeguard core principles such as procedural justice, fairness and transparency. But how can this be achieved? The EU, in particular, has prioritised embedding transparency and legitimacy into AI-driven systems to build public trust and maintain its leadership in consumer redress. For ODR to truly revolutionise dispute resolution beyond traditional court systems, it should be seen as an enhancement of ADR with both approaches working in synergy. 

Using AI for ADR/ODR can facilitate decision-making and improve user interactions through intelligent interfaces, yet its adoption raises concerns about bias, fairness and transparency. Since digital tools are not neutral, their design can influence outcomes, making oversight necessary to ensure ethical and unbiased implementation. While AI itself may not drastically alter the core challenges of the EU’s dispute resolution systems, integrating it effectively requires revisiting existing regulations and directives.

6. Bibliography

                  Abrahams, B., Bellucci, E., & Zeleznikow, J. (2012). Incorporating fairness into development of an integrated multi-agent online dispute resolution environment. Group Decision and Negotiation, 21(1), 3-28.

                  Directive 2013/11/EU of the European Parliament and of the Council of 21 May 2013 on alternative dispute resolution for consumer disputes and amending Regulation (EC) No 2006/2004 and Directive 2009/22/EC (Directive on consumer ADR)

                  Esteban de la Rosa, F. (2018). ADR-rooted ODR design in Europe: A bet for the future. International Journal of Online Dispute Resolution, 5(1-2), 154-162. https://doi.org/10.5553/IJODR/235250022018005102014

                  Falkiewicz, A. (2023). Artificial intelligence in arbitration. Arbitras The Hague Blog. https://www.arbitras.org/blog/2020/10/9/artificial-intelligence-in-arbitration

                  Fortese, F., & Hemmi, L. (2015). Procedural fairness and efficiency in international arbitration. Groningen Journal of International Law, 3(1), 1–15. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2611337 

                  Harari, Y. N. (2023). Yuval Noah Harari argues that AI has hacked the operating system of human civilisation. The Economist. https://www.economist.com/by-invitation/2023/04/28/yuval-noah-harari-argues-that-ai-has-hacked-the-operating-system-of-human-civilisation  

                  International Chamber of Commerce. (2018). ICC policy statement on artificial intelligence. https://www.icc-austria.org/downloads/ICC-policy-statement-on-Artificial-Intelligence.pdf

                  Katsh, E., & Rabinovich-Einy, O. (2017). Digital justice: Technology and the Internet of disputes. Oxford University Press.

                  Mills, T., &  Shanker, M. (2024). New frontiers: Regulating artificial intelligence in international arbitration. Norton Rose Fulbright Knowledge Publications. https://www.nortonrosefulbright.com/en/knowledge/publications/3cb82b55/new-frontiers-regulating-artificial-intelligence-in-international-arbitration

                  Mutnick v. Clearview AI, Inc., No. 1:20-cv-00512 (U.S. District Court for the Northern District of Illinois, Aug. 12, 2020). https://www.courtlistener.com/docket/17018607/86/mutnick-v-clearview-ai-inc/ 

                  ODR Info. (n.d.). Home. https://odr.info/

                  Rauch, T. (2024). AI in IA: To what extent and capacity can artificial intelligence assist in international arbitration procedures and proceedings? University of Cumbria. https://doi.org/10.2139/ssrn.4725339

                  Regulation (EU) No 524/2013 of the European Parliament and of the Council of 21 May 2013 on online dispute resolution for consumer disputes and amending Regulation (EC) No 2006/2004 and Directive 2009/22/EC (Regulation on consumer ODR)

                  Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation)

                  Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence and amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (Artificial Intelligence Act) 

                  Rhim, Y.-Y., & Park, K. (2019). The applicability of artificial intelligence in international law. Journal of East Asia & International Law, 12(1), 7-30.

                  Ruiz Garrido, C., & Uría, A. (2023). Artificial intelligence and international arbitration: Uses and challenges. Uría Menéndez. https://www.uria.com/en/publicaciones/8501-artificial-intelligence-and-international-arbitration-uses-and-challenge#a

                  Scherer, M. (2019). International arbitration 3.0 – How artificial intelligence will change dispute resolution. Austrian Yearbook of International Arbitration 2019. Available at SSRN: https://ssrn.com/abstract=3377234

                  Scherer, M. (2024). We need to talk about … the EU AI Act! Kluwer Arbitration Blog. https://arbitrationblog.kluwerarbitration.com/2024/05/27/we-need-to-talk-about-the-eu-ai-act/

                  Solarte-Vasquez, M. C., & Hietanen-Kunwald, P. (2020). Transaction design standards for the operationalization of fairness and empowerment in proactive contracting. International and Comparative Law Review, 20(1), 180–200. https://doi.org/10.2478/iclr-2020-0008

                  Sternlight, J. R. (2005). Creeping mandatory arbitration: Is it just? Stanford Law Review, 57, 1631-1650.

Silicon Valley Arbitration & Mediation Center, Inc. (2024). SVAMC guidelines on the use of artificial intelligence in arbitration.

United Nations Commission on International Trade Law. (2021). UNCITRAL Arbitration Rules.
