Article originally published by the Society for Computers and Law on 22 April 2026.
The Lewis Silkin Dispute Resolution team round up the latest guidance from arbitral institutions on the use of AI
Artificial intelligence has already transformed how the legal profession operates, including in research, data analysis and document preparation through platforms (such as Harvey, used by Lewis Silkin). However, as Dame Victoria Sharp observed in Ayinde v The London Borough of Haringey [2025] EWHC 1383 (Admin), AI "carries with it risks as well as opportunities" and "must take place therefore with an appropriate degree of oversight." This article surveys the AI guidance issued by arbitral institutions and considers emerging developments on AI governance in adversarial proceedings.
AI in Arbitration: Uses and Risks
The American Arbitration Association®-International Centre for Dispute Resolution® (AAA-ICDR) announced in September 2025 that it would deploy an AI Arbitrator to determine documents-only construction cases, a significant and courageous step in the use of AI in arbitration. Even so, human validation remains a core component of its AI-led process.
Other arbitral institutions have been more cautious with their deployment of the technology.
Current conservative AI applications in arbitration include legal research, document review, drafting submissions, and outcome prediction. Arbitrators may also seek to deploy AI to process information more efficiently, for example, summarising evidence or analysing large volumes of documentation. Crucially, however, all of these processes are human-led.
AI use is not without risks, chief among them the phenomenon of 'hallucinations', in which AI generates case citations or legal propositions that seem real but are in fact fabricated. Other concerns include confidentiality breaches (particularly where sensitive data is input into publicly available AI tools), potential algorithmic bias embedded in training data, and the 'black box' problem: the difficulty of understanding how an AI system reached a particular conclusion.
The key is to understand these risks so that they can be mitigated through human input and checking, while maximising the benefits of AI for all parties.
Guidance from Arbitral Institutions
A growing number of institutions have published guidance on AI use in arbitration. Whilst the specific provisions differ, consistent messaging emerges: AI use is permitted, but human responsibility, oversight and verification are essential and decision-making should not be delegated. This very much mirrors the approach taken by national courts including those of England and Wales and Singapore.
While other institutions such as the LCIA, SIAC, HKIAC and ICC are rightly taking their time to assess the situation, the published approaches (summarised below) can inform practitioners and arbitrators alike about how to take a safe and sensible approach to deploying AI in live matters under any rules.
Chartered Institute of Arbitrators (CIArb)
The CIArb published its Guideline on the Use of AI in Arbitration in March 2025, with an updated version following in September 2025, addressing issues that participants in arbitral proceedings should keep in mind when considering the use of AI. The guideline starts with a focus on the benefits and risks of the use of AI in arbitration, then makes general recommendations, including that parties and arbitrators make reasonable enquiries about any AI tool before use, weigh the benefits against the risks, consider applicable laws and regulations, and maintain accountability. It goes on to note that arbitrators are empowered to give directions on AI use and may require disclosure where the use of AI could impact the proceedings. Discussion of the use of AI is encouraged.
The guideline recognises that arbitrators have discretion as to whether to use AI tools to enhance the arbitral process, including its efficiency and the quality of decision-making. Crucially, the guideline warns against use of AI in ways that could compromise the integrity of the proceedings or the enforcement of the award, noting that, "Arbitrators should not relinquish their decision-making powers to AI but may use AI to support more accurate and efficient processing of submitted information, always ensuring independent judgement." Individual responsibility, including independent checking and verification, is also emphasised.
The CIArb provides template agreements and procedural orders on the use of AI in arbitration which practitioners can incorporate into their arbitration agreements or procedural frameworks.
Silicon Valley Arbitration & Mediation Center (SVAMC)
The SVAMC published what were the first AI-specific arbitration guidelines in April 2024. It is noted that, "Development of best practices around the use of AI in international arbitration is only beginning, and these Guidelines aim to contribute to that effort." The guidelines provide a principle-based framework for the use of AI tools in arbitration and are intended to assist participants in arbitrations with navigating the potential applications of AI. The guidelines apply to the extent that the parties have so agreed, as ordered by an arbitral tribunal or if an arbitral institution decides to adopt them.
There is emphasis, through guidelines 1, 2, 4 and 5, on the user of an AI tool ensuring that they safeguard confidentiality, understand the uses, limitations and risks of the AI tools they use (as well as techniques for mitigating those limitations and risks), maintain responsibility for the use and output of AI tools, and ensure respect for the integrity of the proceedings and evidence.
Guideline 3 recognises that there is no obligation in the guidelines for disclosure of use of AI to be made. It is acknowledged that, although disclosure may be appropriate in some circumstances, the widespread use and evolving nature of the technology means that setting out criteria for disclosure of AI use might be difficult and create more problems than it solves.
Critically, the guidelines make clear (guideline 6) that whilst AI tools can be used to assist arbitrators, their ultimate decision-making function should not be delegated.
SCC Arbitration Institute (SCCAI)
The SCCAI issued its guide to the use of AI in October 2024. The aim is to provide "flexible guidance ... without imposing specific obligations". The guidance is short and focusses on the importance of maintaining confidentiality and effective human oversight to prevent a decline in the quality of arbitral decisions. Arbitral tribunals are encouraged to disclose any use of AI in researching and interpreting facts and the law or applying the law to facts, and should not delegate their decision-making or reasoning.
Vienna International Arbitral Centre (VIAC)
In April 2025, the VIAC issued a note on the use of AI in arbitration proceedings, intended to facilitate discussions between parties. The relatively short guidance, set out under six headings, emphasises the importance of compliance with ethical rules and professional standards when using AI tools, the importance of non-delegation of decision-making authority by arbitrators and confidentiality. It notes that arbitrators have discretion to manage and promote transparency around AI use in the proceedings, including discussing AI at case management conferences, deciding whether to disclose their own use of AI, and reaching agreement regarding AI with the parties. Finally, arbitrators have discretion over whether to require disclosure of AI‑assisted evidence and how to assess its admissibility, relevance and weight.
As indicated, other arbitral institutions have not issued specific guidance on the use of AI in arbitral proceedings; however, that is not to say that the topic is not high on their agenda. For example, the ICC has established a Task Force on AI in International Dispute Resolution to "provide guidance and thought leadership on balancing the opportunity presented by AI with the need to protect the fundamental principles underlying international dispute resolution from the risks associated with its use".
The CJC Consultation on AI in Litigation
It is interesting to compare the approach taken in arbitration to the position developing in litigation. In England and Wales, the Civil Justice Council (CJC) is consulting on whether rules are needed to govern AI use when preparing court documents. The CJC's interim report, published in February 2026, takes the provisional view that, provided court documents bear the name of the legal representative who takes professional responsibility for them, there is no need for any formal rules regarding statements of case produced with the assistance of AI, subject to several proposed limited exceptions. The limited exceptions relate to (i) the generation of trial witness statements, where it is proposed that a rule be introduced requiring a declaration that AI has not been used to generate the content of such a statement; and (ii) expert evidence, where it is proposed that an expert should explain what substantive use of AI has been made and which tool has been used.
The consultation closes in April 2026, with a final report to be published thereafter. So far, the proposals represent a fairly 'hands off' approach but go further than the arbitral guidelines issued to date by suggesting specific rules regarding AI use in trial witness statements and expert evidence.
Comment
The emergence of AI-specific guidance from arbitral institutions in a relatively short period demonstrates the profession's recognition of both the transformative potential and the risks associated with AI in arbitration. Despite differences in form and detail, a clear consensus has emerged: AI may be deployed to enhance efficiency and support the arbitral process, but human responsibility, oversight and verification remain paramount, and decision-making must not be delegated to AI systems.
At present, institutional guidance affords practitioners and arbitrators flexibility in how they approach AI use. In litigation, the CJC's consultation on AI in court-based litigation in England and Wales suggests that more prescriptive rules might be considered, particularly in areas where the risks are most acute, such as the preparation of witness statements and expert evidence. Arbitration practitioners should pay attention to developments in national courts, as these may inform the approach taken by arbitrators.
The consensual and flexible nature of arbitration may favour continued reliance on guidance and party agreement rather than binding rules. Nevertheless, as AI capabilities and uses evolve, arbitral institutions and practitioners will need to remain vigilant, ensuring that the integrity of proceedings and the quality of awards are not compromised in the pursuit of efficiency. The balance between embracing innovation and maintaining appropriate safeguards will be key.
