Arbitration Institutions Deploy AI Guidelines Before Hard Law Steps In

Most arbitration users did not ask for AI tools, but those tools arrived anyway, built into research platforms, video-hearing software, case-management portals, and even institutional rulebooks. International and domestic alternative dispute resolution (ADR) now sits at the point where efficiency gains meet due-process anxiety, with tribunals under pressure to harness automation without delegating judgment. For lawyers, AI use in arbitration is no longer a thought experiment; it is a governance problem that has to be solved inside real clauses, protocols, and awards.

Why Arbitration Became an Early AI Testbed

Arbitration was already technology-heavy before generative tools arrived, with remote hearings, virtual data rooms, and online-pleading platforms making it easier to run complex, cross-border cases. That infrastructure made it a natural testbed for AI-assisted research, document review, translation, and scheduling. The 2025 International Arbitration Survey by Queen Mary University of London and White & Case reports that 91 percent of respondents expect to use AI for research and data analytics over the next five years, suggesting that tools once treated as experiments are quickly becoming standard infrastructure.

Companion commentary from institutions and policy bodies points in the same direction. The OECD Online Dispute Resolution Framework, the SCC Arbitration Institute’s Guide to the Use of Artificial Intelligence in Cases Administered Under the SCC Rules, and guidance from providers such as AAA-ICDR all describe AI as a core efficiency tool, so long as its use is constrained by safeguards for bias, security, and accountability. Together, these materials frame arbitration as a forum where AI is expected but not yet fully trusted, and where soft-law guidance now does much of the regulatory work.

Soft-Law Frameworks: Guidelines That Arrived Before Hard Law

The first wave of AI–arbitration guidance came from institutions rather than legislatures. In November 2023, the American Arbitration Association–International Centre for Dispute Resolution (AAA-ICDR) issued its Principles Supporting the Use of AI in Alternative Dispute Resolution, anchoring AI use in familiar duties of competence, confidentiality, impartiality, independence, advocacy, and process improvement. In March 2025, AAA-ICDR released Guidance on Arbitrators’ Use of AI Tools, encouraging arbitrators to embrace AI technology while adhering to their professional obligations under the Code of Ethics for Arbitrators in Commercial Disputes. In May 2025, it followed up with the more detailed AAAi Standards for AI in Alternative Dispute Resolution, which supersede the November 2023 Principles and are organized around six values: ethical and human-centric design, privacy and security, accuracy and reliability, explainability and transparency, accountability, and adaptability.

On the international-arbitration side, the Silicon Valley Arbitration & Mediation Center (SVAMC) published the first edition of its Guidelines on the Use of Artificial Intelligence in Arbitration in April 2024, following public consultation. The guidelines push parties and tribunals to understand tool limitations, prohibit using AI to falsify evidence or mislead the tribunal, and emphasize that arbitrators may not delegate their personal mandate to a model-generated analysis of facts or law.

In October 2024, the Stockholm Chamber of Commerce (SCC) Arbitration Institute released its Guide to the Use of Artificial Intelligence in Cases Administered Under the SCC Rules, focusing on confidentiality, the quality and integrity of proceedings, effective human oversight, and the non-delegation of decision-making to AI tools.

The most comprehensive soft-law instrument so far is the Chartered Institute of Arbitrators’ (CIArb) Guideline on the Use of AI in Arbitration, published March 19, 2025. It is structured in four parts covering the benefits and risks of AI, general recommendations, arbitrators’ powers to regulate AI use by parties, and the tribunal’s own use of AI, with appendices providing template AI-use agreements and procedural orders that tribunals can adapt.

JAMS has approached the issue from the perspective of AI-related disputes themselves. Its Artificial Intelligence Disputes Clause, Rules and Protective Order, effective June 14, 2024, establishes a specialized framework for disputes involving AI systems, addressing party self-determination, emergency relief, algorithmic transparency, and confidentiality for sensitive system information.

How AI Is Actually Used Inside Arbitration and ADR

Most AI use in arbitration today looks like enhanced law-office automation rather than robo-arbitrators. The QMUL–White & Case survey and related commentary indicate that current applications cluster around legal-and-factual research, document review, translation, and analytics on arbitrator history or case timelines, with adoption expected to grow significantly over the next five years. The core findings are summarized in the 2025 International Arbitration Survey and in White & Case’s companion overview on AI risks the legal sector must consider in dispute resolution.

Practitioners report using platforms such as Lexis+ AI, CoCounsel, and Jus Mundi for legal research, and document-review systems leveraging natural-language processing for e-discovery. These tools promise time savings on labor-intensive tasks such as producing chronologies, summarizing witness statements, and analyzing arbitrator-selection patterns, though concerns about accuracy and transparency persist.

Tribunals and institutions also encounter AI in subtler forms. Case-management platforms may use machine learning to triage filings or schedule hearings, transcription tools may generate draft transcripts from remote-hearing audio, and online-dispute-resolution systems may embed AI to guide parties through structured negotiation. The OECD Online Dispute Resolution Framework treats AI-enhanced ODR as a core access-to-justice tool and emphasizes governance, fairness, accountability, and transparency as baseline principles for digital dispute-resolution mechanisms.

At the same time, there is persistent resistance to using AI for core adjudicative functions. Analysis of the 2025 International Arbitration Survey and follow-on commentary, such as the summary on Conflict of Laws, shows that while a clear majority of respondents expect to use AI in research and analytics, far fewer are comfortable with models evaluating legal arguments or drafting substantive reasoning, and many expect any such use to remain firmly under human control for the foreseeable future.


Cost and Access Implications of AI Tools

The QMUL–White & Case survey found that 44 percent of respondents identified cost reduction as a principal driver of AI adoption in arbitration, while 26 percent cited AI as a way for participants with unequal resources to compete on a more even footing. The promise of efficiency and reduced legal fees must be weighed against the cost of the AI tools themselves and the risk that premium platforms create new disparities between well-resourced and under-resourced parties.

Some AI research platforms and document-review systems carry subscription fees that smaller firms and individual litigants may struggle to afford; at the same time, some commentators argue that AI could level the playing field by making sophisticated analytical capabilities more broadly accessible. The tension between these possibilities raises the question of whether AI will democratize arbitration or simply shift cost barriers from hourly billing to technology licensing.

Due Process, Confidentiality, and the Deepfake Problem

Once AI tools enter the record, due-process and evidentiary questions follow. Commentators have already noted that international arbitration may be particularly vulnerable to AI-generated deepfakes because of lighter discovery, limited coercive powers, and the relative lack of public scrutiny compared with domestic courts. Young ICCA’s 2025 essay on AI, deepfakes, and the right to be heard highlights the risk that manipulated audio or video evidence could undermine party participation and the integrity of remote testimony if tribunals lack robust authentication protocols.
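
Authentication protocols need not be elaborate to add value. As one hedged illustration, a procedural order could require parties to lodge a cryptographic fingerprint of each audio or video exhibit at submission, so that later copies can be checked for tampering. The Python sketch below shows the idea; the file names are hypothetical, and hashing only detects alteration after submission, not whether the original itself was synthetically generated, so it complements rather than replaces forensic deepfake analysis.

```python
# Minimal sketch: lodging and verifying SHA-256 fingerprints of exhibits.
# File names are hypothetical; this detects post-submission tampering only.
import hashlib
from pathlib import Path

def fingerprint(path: Path) -> str:
    """Return the SHA-256 digest of an exhibit file."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        # Read in 1 MiB chunks so large video files do not exhaust memory.
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def verify(path: Path, lodged_digest: str) -> bool:
    """Check a working copy against the digest lodged with the tribunal."""
    return fingerprint(path) == lodged_digest

# Example: digest lodged when a hypothetical hearing video is submitted...
lodged = fingerprint(Path("exhibit_R7.mp4"))
# ...later, any party or the tribunal can confirm the copy is unchanged.
assert verify(Path("exhibit_R7.mp4"), lodged)
```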

Confidentiality and data-security risks also look different once AI is added to the workflow. The Canadian Bar Association’s Ethics of Artificial Intelligence for the Legal Practitioner warns that lawyers who upload client documents to public or poorly governed systems may breach duties of confidentiality, competence, and supervision, especially when outsourcing legal analysis to opaque models. Arbitration parties often operate across borders, so a single AI misstep can raise overlapping regulatory issues under privacy laws, trade-sanctions rules, and emerging AI legislation.

Tribunals and counsel must also grapple with the conflict-of-laws implications of AI tools that process data across multiple jurisdictions. Training data, client trade secrets, and e-discovery documents uploaded to AI platforms may fall under different data-sovereignty and privacy regimes (such as the GDPR or various state and provincial laws), raising the question of whether the AI processing itself breaches confidentiality duties under the governing law of the arbitration or the law of the data’s origin. Arbitrators may also need to consider whether an award based on AI-processed evidence could face enforcement risk if a reviewing court finds a breach of public policy arising from unauthorized data processing or inadequate security.

Soft-law instruments respond by reasserting human accountability. The SVAMC Guidelines, AAA-ICDR Principles, and CIArb Guideline all insist that no nominal AI “agent” or “assistant” can displace human responsibility for submissions, procedural conduct, or awards, and they encourage tribunals to address AI use explicitly in procedural orders and, where contentious, in the award itself.

Drafting AI-Aware Arbitration Clauses and Protocols

For transactional lawyers, the most immediate task is drafting AI-aware arbitration clauses and case-specific protocols. One practical approach is to treat AI as a procedural topic for the first case-management conference, using the CIArb Guideline’s appendices on AI agreements and procedural orders as a checklist for issues such as disclosure of AI use, acceptable tools, data-handling, and remedies for misuse. A concise overview of this approach is set out in Dentons’ commentary, Generating efficiencies: the new CIArb guidelines on the use of AI in arbitration.

Parties can also build institutional guidance directly into their arbitration agreements. A clause may, for example, confirm that any arbitration will be conducted under the SVAMC AI Guidelines, the SCC AI Guide, or CIArb’s Guideline on the Use of AI in Arbitration, and may incorporate JAMS’ AI Disputes Rules for disputes that turn on model performance or system failure. The JAMS materials include a model AI disputes clause and rules text as well as a tailored protective order for sensitive system information.

Well-structured AI provisions typically address at least five topics (a sketch of how the first might be operationalized follows this list):

  • Disclosure and recordkeeping duties covering when and how AI tools are used in drafting, analysis, or evidence processing.
  • Confidentiality and security requirements, including whether data may be processed outside specified jurisdictions or on third-party platforms.
  • Limits on AI-generated evidence, with authentication standards for audio, video, or synthetic documents.
  • A right to human review and final decision-making, including an explicit statement that arbitrators remain personally responsible for the award.
  • Remedial powers where AI misuse jeopardizes procedural integrity, including costs consequences or adverse inferences, both of which are already contemplated in the CIArb framework.
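
On the recordkeeping point, some counsel keep a running log of AI use that can be produced if the tribunal orders disclosure. Below is a minimal Python sketch of such a log; the field names, categories, and file format are illustrative assumptions, not requirements drawn from any institutional rule text.

```python
# Minimal sketch of a case-file AI-use disclosure log. All fields and
# categories are hypothetical illustrations of the CIArb/SVAMC-style
# disclosure duty, not terms taken from any guideline.
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json

@dataclass
class AIUseRecord:
    tool: str                  # e.g., a research or document-review platform
    purpose: str               # drafting, analysis, translation, review, etc.
    data_categories: list[str] = field(default_factory=list)  # what was uploaded
    processed_outside_seat: bool = False  # relevant to data-localization terms
    human_reviewer: str = ""   # who verified the output before filing
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def log_use(record: AIUseRecord, path: str = "ai_use_log.jsonl") -> None:
    """Append one disclosure entry to an append-only log (JSON Lines)."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(record)) + "\n")

# Example entry: counsel records a translation task at the time it happens.
log_use(AIUseRecord(
    tool="machine-translation platform",
    purpose="first-pass translation of exhibit C-14",
    data_categories=["party correspondence"],
    human_reviewer="associate responsible for the exhibit",
))
```

An append-only, timestamped format of this kind makes it easier to answer a tribunal’s disclosure questions contemporaneously rather than reconstructing events after the fact.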

Lessons from Court and Regulatory Guidance

Although arbitration is contractual and private, courts and regulators have done much of the heavy lifting on AI-governance concepts. The Canadian Judicial Council’s 2024 Guidelines for the Use of Artificial Intelligence in Canadian Courts emphasize that AI cannot supplant judges’ exclusive responsibility for decision-making and urge courts to maintain transparency about AI use in administration and adjudication, while guarding against over-reliance on opaque tools.

Professional regulators have likewise issued AI-practice guidance that applies as much in arbitration as in court. The Nova Scotia Barristers’ Society released a 2025 AI Guide for Legal Practices in Nova Scotia that highlights competence, confidentiality, supervision, and client-consent obligations, and explicitly warns that both over-reliance on models and failure to use available tools competently can create professional risk.

UNCITRAL’s 2016 Technical Notes on Online Dispute Resolution, while drafted before the current AI wave, still matter because they articulate core ODR principles of impartiality, independence, efficiency, effectiveness, due process, fairness, accountability, and transparency. Those principles underpin the OECD ODR Framework and many court-and-ADR digitization projects, and they give arbitration practitioners a ready-made vocabulary for explaining why certain AI uses are compatible with party autonomy and enforceability, while others are not.

International Variations: The Asia-Pacific Approach

While North American and European arbitration institutions have led the soft-law response to AI, the Asia-Pacific region shows a range of approaches reflecting different regulatory philosophies. In July 2025, the China International Economic and Trade Arbitration Commission (CIETAC) became the first major arbitral institution in the Asia-Pacific region to publish AI guidelines, focusing on procedural efficiency benefits while identifying safeguards to manage data-security risks and unauthorized disclosure of sensitive information.

Singapore, Hong Kong, and other regional arbitration hubs have taken a principles-based approach, emphasizing risk-based frameworks that align with their existing technology-forward dispute-resolution infrastructure. Singapore’s judiciary issued guidance on generative AI use in court proceedings in September 2024, requiring all court participants using AI to comply with disclosure and verification requirements. These developments reflect the region’s broader AI governance landscape, where China pursues targeted regulation, Japan adopts voluntary standards, and Singapore develops testing frameworks and accountability tools.

The varying approaches across APAC jurisdictions suggest that arbitration clauses involving Asian parties or seats may need to account for different regulatory expectations around AI transparency, data localization, and algorithmic accountability, particularly as China, South Korea, and other jurisdictions move toward binding AI legislation.

The Extraterritorial Reach of the EU AI Act

While soft-law guidelines have rapidly emerged from arbitral institutions, the European Union’s AI Act represents the most comprehensive hard-law approach to date; it entered into force in August 2024 and phases in its obligations through 2026 and beyond. The Act adopts a risk-based framework, imposing strict compliance requirements on providers and deployers of high-risk AI systems. Its high-risk categories include systems used in the administration of justice and democratic processes (alongside systems intended to influence elections), which could potentially encompass certain automated online dispute resolution (ODR) mechanisms or sophisticated e-discovery tools used to analyze evidence and facts in an arbitration.

The Act primarily regulates the supply of AI, but its extraterritorial reach means that arbitrations seated in the EU, or those using tools placed on the EU market or whose output is used in the EU, will need to account for its mandatory technical and transparency standards. These standards cover requirements for data governance, quality, human oversight, and robust accountability.

For counsel and arbitrators, this necessitates a due-diligence layer in every case. They must ensure that the specific AI tools deployed, especially those that cross borders or process data related to EU citizens, comply with the Act’s criteria, even in the context of private, contractual dispute resolution.

Practical Checklist for Counsel and Arbitrators

For lawyers advising on AI use in arbitration and ADR, the emerging materials support a concrete, governance-focused checklist:

  • Map the AI tools currently in use across the firm, client, and institution, including embedded AI inside research platforms, videoconference tools, and e-discovery systems (a sketch of a simple tool register follows this list).
  • Align internal policies with soft-law guidance from AAA-ICDR, SVAMC, SCC, and CIArb, treating those instruments as a consolidated best-practice baseline for competence, disclosure, and human oversight.
  • For each case, raise AI explicitly at the first case-management conference and record the parties’ agreement or the tribunal’s determinations in a procedural order that addresses disclosure, acceptable tools, and consequences for misuse.
  • Build confidentiality-and-security terms for AI tools into engagement letters and, where necessary, into procedural orders or protective orders, especially when models process trade secrets or regulated data.
  • Develop evidence-specific protocols for deepfake detection, authentication of digital evidence, and handling of AI-generated expert assistance, drawing on emerging commentary from arbitral-practice guides and Young ICCA’s work on AI and evidentiary risk.
  • Track developments in national-court AI guidance, bar-association ethics materials, and sector-specific regulation so that arbitration practice remains aligned with broader expectations of fairness, transparency, and accountability.
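
For the first two items, a simple machine-readable register can make the mapping exercise auditable. The Python sketch below assumes a hypothetical in-house register; the tool names, attributes, and vetting labels are illustrative, not drawn from any institution’s requirements.

```python
# Minimal sketch of a hypothetical in-house AI-tool register, tracking
# where each tool processes data and which soft-law instruments it has
# been vetted against. All entries and attribute names are illustrative.
from typing import TypedDict

class ToolEntry(TypedDict):
    embedded_in: str           # where the AI surfaces (research, hearings, review)
    processes_client_data: bool
    data_locations: list[str]  # jurisdictions where processing may occur
    vetted_against: list[str]  # e.g., SVAMC, CIArb, AAA-ICDR instruments

REGISTER: dict[str, ToolEntry] = {
    "research-platform-A": {
        "embedded_in": "legal research",
        "processes_client_data": False,
        "data_locations": ["US"],
        "vetted_against": ["SVAMC Guidelines", "CIArb Guideline"],
    },
    "transcription-tool-B": {
        "embedded_in": "remote hearings",
        "processes_client_data": True,
        "data_locations": ["EU", "US"],
        "vetted_against": [],
    },
}

def needs_review(register: dict[str, ToolEntry]) -> list[str]:
    """Flag tools that touch client data but have not been vetted."""
    return [
        name for name, entry in register.items()
        if entry["processes_client_data"] and not entry["vetted_against"]
    ]

print(needs_review(REGISTER))  # -> ['transcription-tool-B']
```

A flag of this kind gives a compliance team a concrete trigger: a tool that touches client data must be aligned with the soft-law baseline before it is used in a live matter.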

Arbitration and ADR are unlikely to become fully automated, but they are already being reshaped by AI-aware rules, standards, and expectations. For practitioners, the task now is less about answering whether AI belongs in arbitration and more about deciding, in each case, who will control it, how its use will be documented, and how to preserve a human, legally accountable core to the process.


This article was prepared for educational and informational purposes only. It does not constitute legal advice and should not be relied upon as such. All guidelines, surveys, and commentary cited are publicly available from the issuing institutions and reputable outlets. Readers should consult professional counsel for specific legal or compliance questions related to AI use.

