April 2024 Cover Story - How AI Rules Will Become ADR Rules

by Ryan Abbott and Brinson S. Elliott

Guidelines, rules, and standards are emerging for the use of AI in Alternative Dispute Resolution (ADR). Late last year, the United Kingdom issued guidance for courts, cautioning about the risks and responsible use of AI for legal research and analysis,1 and the California2 and Florida3 state bars released professional responsibility and conduct guidance for practitioners’ use of Generative AI. These guidelines came after judges in over a dozen United States federal courts issued standing orders on AI.4 Some courts now prohibit litigants’ use of AI for filing preparation, while others require attorneys to attest that a human reviewed any Generative AI outputs used for document drafting. Other orders require disclosure and certification of AI-assisted research and verification that citations created using AI (generative or otherwise) are accurate. Some question the breadth of these latter orders, noting they could require practitioners to disclose the use of any AI-assisted search engines and chatbots or “seemingly innocuous programs like Grammarly.”5 This variation is representative of broader debates around the use of emerging technologies in legal and dispute resolution contexts.

Background on AIDR
Despite all the attention paid to artificial intelligence (AI) and approaches to its governance and regulation, it continues to lack a generally accepted definition. We adopt a definition of AI that focuses on its functionality rather than how it was programmed, believing the law should focus on regulating AI behavior: an algorithm or machine capable of completing tasks that would otherwise require cognition.6

From the 1970s until recently, AI models used in ADR were primarily rules-based, requiring programmers to manually code all foreseeable inputs and outputs for any given dispute. These inputs and outputs resembled human if-then logic, linking facts, legal rules, and conclusions. The AI model documented its reasoning in a decision tree, making it explainable and traceable to human actors. One early AIDR system (ADR system utilizing AI) developed by the RAND Corporation required several thousand if-then rules, exemplifying the level of technical skill required to build a system capable of handling even relatively straightforward disputes in narrowly defined areas with known parameters.7
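To make this concrete, below is a minimal Python sketch of the if-then, rules-based pattern described above. The rule names and facts are hypothetical and invented for illustration; production systems of the era, such as RAND's, chained thousands of such rules rather than two, but the traceable forward-chaining structure is the same.

```python
# A hypothetical sketch of a rules-based (expert system) approach to a dispute,
# assuming invented facts and rule names. Each rule is explicit if-then logic,
# and the engine records a human-readable trace of its reasoning.

from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class Rule:
    name: str
    condition: Callable[[Dict[str, bool]], bool]  # facts -> bool
    conclusion: str                               # fact asserted when condition holds

@dataclass
class RulesEngine:
    rules: List[Rule]
    trace: List[str] = field(default_factory=list)  # the traceable decision path

    def evaluate(self, facts: Dict[str, bool]) -> Dict[str, bool]:
        changed = True
        while changed:  # forward-chain until no rule asserts a new fact
            changed = False
            for rule in self.rules:
                if rule.conclusion not in facts and rule.condition(facts):
                    facts[rule.conclusion] = True
                    self.trace.append(f"{rule.name} fired: asserted '{rule.conclusion}'")
                    changed = True
        return facts

# Hypothetical rules for a toy liability question.
rules = [
    Rule("R1", lambda f: f.get("breach", False) and f.get("damages", False),
         "prima_facie_claim"),
    Rule("R2", lambda f: f.get("prima_facie_claim", False)
         and not f.get("valid_defense", False), "liable"),
]

engine = RulesEngine(rules)
facts = engine.evaluate({"breach": True, "damages": True, "valid_defense": False})
print(facts["liable"])           # True
print("\n".join(engine.trace))   # the explainable reasoning path described above
```

Because every conclusion maps back to a named rule, the decision path is fully explainable, but every foreseeable input must be hand-coded in advance.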

Modern AI systems, including the foundation models underpinning ChatGPT, Google Bard, and Microsoft Copilot, rely on machine learning, which uses statistical methods to make classifications or predictions. The capabilities of such systems have improved dramatically in recent years, largely due to the availability of Big Data (voluminous and complex datasets) used to train machine learning-based systems, coupled with advances in software design and greater availability of computing power.
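By contrast with hand-coded rules, a machine-learning system infers its decision rules statistically from training data. Below is a minimal sketch of this pattern using scikit-learn; the features, data, and case framing are entirely synthetic and invented for illustration.

```python
# A minimal sketch of the machine-learning approach: the model learns
# correlative patterns from (synthetic, invented) data rather than from
# manually coded if-then rules. No real case data is used here.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic "case" features, e.g., claim size and documentation strength.
X = rng.normal(size=(500, 2))
# Synthetic outcomes loosely correlated with the features.
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)

model = LogisticRegression().fit(X, y)      # statistical pattern-fitting
print(model.predict_proba([[1.0, -0.2]]))   # predicted outcome probabilities
```

Unlike the rules engine above, the fitted model carries no explicit reasoning trace; its "rules" are learned coefficients, which is one source of the explainability concerns discussed later in this article.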

Alongside the rapid development of emerging technologies with applications in legal and dispute resolution contexts, circumstances such as the Covid-19 pandemic have accelerated the uptake of online dispute resolution (ODR) systems, including those that leverage AI for document sharing, video conferencing, and case intake.8

Approaches to Regulating ADR and AIDR
ADR lacks agreed-upon and enforceable qualification and licensing requirements, standards for neutral behavior and responsibilities, the procedural safeguards of adjudication, and judicial review outside instances of neutral misconduct. Some have therefore concluded that ADR is subject to little to no regulation, authority, standards, or monitoring, making it an “informal system”9 and a “largely unregulated industry” operating behind closed doors.10 Some commentators also argue that the “breadth, reach and enforcement mechanisms” for existing private and court ADR rules of practice make “an ethics of ADR become highly pluralistic, substantively conflictual and procedurally cumbersome.”11 For these reasons, some question the quality of ADR in the absence of procedural and institutional safeguards and enforcement mechanisms.12

Although ADR is not formally regulated in the same manner or to the same extent as legal practice or traditional litigation, existing laws apply despite not being ADR-specific. Examples include professional standards for licensed attorneys working in ADR and laws protecting the use of information in ADR proceedings. These rules and standards also apply to the development and use of AI systems in ADR, as do some existing and emerging standards and regulations that specifically govern the design, development, and deployment of AIDR systems.

Spectrum of AIDR Systems
How AI impacts ADR processes and the role of the neutral (e.g., third-party negotiator, mediator, or arbitrator) depends, among other things, on the type of technology, the functions and purposes for which it is used, and the opportunities for human oversight and intervention. It is helpful to think of AIDR as existing on a loose spectrum:

Assistive technologies, which can support, inform, or make recommendations to neutrals, occupy one end, and automative technologies, which can partially or fully automate discrete tasks and, in some cases, even replace neutrals, occupy the other.13

Assistive technologies can help reduce the burden of high-volume repetitive tasks, including time-consuming administrative and procedural requirements (e.g., case-intake and document management), and provide neutrals with informational resources that support informed, accurate, and efficient decision-making. Because these technologies neither fundamentally alter ADR processes nor determine case outcomes, their development and use are generally supported in the ADR literature.14

Beyond benefiting neutrals’ workflows, assistive technologies can make ADR processes more accessible to disputants, who sometimes pursue ADR instead of traditional litigation because of its relative efficiency, affordability, and reliability.15 Assistive AIDR is therefore well positioned to meet ADR’s core objectives of providing disputants with a fair, efficient and economical resolution process.16

Automative technologies17 can help facilitate or independently perform legal research, document preparation and analysis, case negotiation, settlement, award and resolution plan drafting, and decision-making functions.18 AI systems are increasingly able to do this with a speed and scale that outpace human ability, although objective evidence of the systems’ accuracy is generally difficult to find.19 Automative technologies are sometimes used to autonomously resolve minor, relatively straightforward disputes or to provide outputs that support and inform human decision making.20 These systems can also empower self-represented litigants by providing informational resources (e.g., accurately forecasted case outcomes) that can help them decide whether to pursue ADR at all.21

As an example of an AIDR system, the British Columbia Civil Resolution Tribunal (CRT) is an AI expert system that provides disputants with a negotiation forum and independently performs case intake, management, and communications.22 If parties cannot reach an agreement in the automated environment, a human tribunal member oversees the remainder of the resolution process, placing the CRT somewhere in the middle of the AIDR spectrum. The CRT’s Solution Explorer, which offers free legal information and dispute assessment tools, was used over 30,000 times between April 2022 and March 2023.23 Only 24% of explorations led to a claim, suggesting that the platform may have helped users resolve their disputes at an earlier stage than they otherwise might have. These services can help alleviate concerns that ADR favors more powerful and well-resourced disputants.

AIDR Risks and Challenges
What gives AI systems transformational potential may present a weakness in the ADR context. Machine-learning-based AI systems derive rules from correlative patterns in data and then apply those rules to new data. However, laws and rules do not provide “the kind of structure that can easily help an algorithm learn and identify patterns and rules.”24 Conflicts can involve multiple areas of law (e.g., tort, property, insurance, family) and disputants from different jurisdictions, which can complicate or prevent the “specialization into specific case types” necessary for training and instructing AI.

Further, most existing AI systems cannot independently execute significant tasks without any human oversight.25 The analysis and interpretation necessary to apply rules to new facts often require the ability to navigate subtle contextual differences, such as whether a behavior was ‘reasonable’ or an outcome ‘foreseeable’ in a particular situation. Human neutrals also often rely on experiences, knowledge, and normative judgments to assess disputants’ reliability and deal with social and emotional issues.26 Because complex and disputed fact sets are a feature of many cases, and AI is not currently capable of accurately measuring human credibility,27 it may not be well equipped to automate these interpretive, human aspects of ADR.

Parties in legal and dispute resolution contexts possess rights to a reasoned decision and due process. Some AI systems operate and produce predictions, recommendations, or decisions in a way that is not explainable or understandable to system users. The opacity of these “black box” AI systems makes it difficult to verify whether their outputs are valid and reliable or if there are underlying biases or errors. Not being able to access or understand the basis of a decision undermines disputants’ rights to a reasoned decision and their right to challenge and appeal a decision.

While some support AI automation in limited instances, such as high-volume, low-value disputes or low-complexity cases involving well-developed bodies of law (e.g., traffic violations), others conclude that automative technologies should never replace humans in dispute resolution and legal processes because they lack human reasoning and common sense, and therefore cannot achieve true fairness and justice.28 United States Chief Justice John Roberts recently expressed a similar view,29 that human adjudications, while flawed, are presently fairer than machine outputs, concluding that “machines cannot fully replace key actors in court.”

Existing Rules and Standards for AIDR
The United Nations Commission on International Trade Law (UNCITRAL) has been publishing conventions, model laws, and rules for international commercial trade law since 1966. Though only one set of standards, the UNCITRAL rules are a respected global benchmark used by professional associations, chambers of commerce, and arbitral institutions.

In 2016, UNCITRAL affirmed that all ADR rules and standards, including confidentiality, due process, independence, neutrality, and impartiality, apply equally to ODR and that fairness, transparency, due process, and accountability should underlie all ODR processes. Its Expedited Arbitration Rules likewise confirm that technology users must abide by fair proceedings rules and that neutrals should give disputants “an opportunity to express their views on the use of such technological means and consider the overall circumstances of the case, including whether such technological means are at the disposal of the parties.”30

The Regulatory Landscape for AI and AIDR
The regulatory landscape for AI is dynamic and uneven. Here, we focus primarily on the European Union (EU) since it recently became the first major Western jurisdiction to develop omnibus legislation to regulate AI.

In early December 2023, the European Commission for the Efficiency of Justice (CEPEJ) adopted a set of guidelines for ODR31 that reflect existing standards and practices for ADR, including those articulated by UNCITRAL.32 Among other things, they state that ODR and AIDR systems and their deployers should provide clear and transparent rules and easy, efficient, effective and reliable processes; not infringe on data protection rights; adopt technical measures that comply with the latest standards for safety, fairness, and efficiency; have sufficient knowledge of the technology being used, including its potential risks and negative impacts; and ensure the effective participation of parties, such as by helping them understand all steps in the procedure, the outcome and the effect of the agreement.33

In December, the CEPEJ also adopted an Evaluation Tool34 that assesses compliance with the five principles of the 2018 European Ethical Charter on the Use of Artificial Intelligence in Judicial Systems and Their Environment, which covers ODR: respect for fundamental rights in the design and use of AI tools; non-discrimination; data quality and security; transparency, impartiality, and fairness; and “under user control.” Technology applications must also not undermine rights guaranteed in all civil, commercial, and administrative hearings: access to a court; the adversarial principle; equality of arms; impartiality and independence of judges; and the right to counsel.

The CEPEJ’s perspective35 on assistive versus automative technologies is consistent with the broader ADR literature, viewing tools that do not “affect the actual administration of justice” as typically low-risk and stating that AI systems used to assist with research, interpreting facts and law, and applying law to concrete sets of facts “should not affect the independence of judges in their decision-making process,” which “should remain a human-driven activity and decision.” To this end, in the 2018 Charter, the CEPEJ referenced Article 22 of Europe’s General Data Protection Regulation (GDPR), which allows persons “to refuse to be the subject of a decision based exclusively on automated processing,” when the automated decision is not required by law, and entitles them to decisions made by human decision-makers.36 Both the EU GDPR and the United Kingdom (UK) Data Protection Act 2018 afford data subjects rights to be informed about and object to the use of automated decision systems, and to access meaningful information about how the system works and its potential consequences.

Interestingly, the CEPEJ postponed the release of its 2021 Roadmap to accommodate the introduction of the European Union Artificial Intelligence Act (EU AI Act) that year. After many months of negotiation, negotiators for the European Parliament and the Council reached a political deal on the AI Act on December 8, 2023.37 They agreed to release the provisional text38 in February 2024,39 and all rules should become fully applicable 24 months after the Act enters into force.40

Consistent with the 2018 Charter, the EU AI Act treats the use of AI technologies in the administration of justice as a high-risk application. Before such systems can be released on the market, they must undergo a conformity assessment to demonstrate compliance with mandatory requirements for trustworthy AI (data quality, documentation and traceability, transparency, human oversight, accuracy, cybersecurity, and robustness), supported by quality and risk management systems.41

Proposed amendments to EU product liability laws42 accord with the AI Act43 by making providers and manufacturers liable to compensate injured parties when defective AI or AI-enabled hardware or software products cause personal injury, property damage, data loss, or privacy breaches. Critically, defectiveness is defined broadly to include “the effect on the product of any ability to acquire new features or knowledge after it is placed on the market or put into service,” seemingly referencing machine learning systems that can acquire new behaviors by ingesting and learning from new data.

The United Kingdom and the United States are also seeking to promote the responsible development, deployment, and use of AI.

According to the UK Information Commissioner’s Office,44 those who explicitly consent to automated decision system processing have a right to an explanation of the system’s decision as well as several other underlying features: rationale, responsibility, data, fairness, safety and performance, and impact. By encouraging rights to a reasoned decision and due process, explainability statements can help overcome concerns around black-box AI systems in legal and judicial contexts.

Some U.S. state privacy laws45 afford residents rights to receive meaningful explanations about AI system logic and to opt-out of automated decision-making in certain contexts. Federally, the United States has indicated46 it also views the use of AI in judicial and ADR processes as high-risk, requiring stringent protections such as, “(a) the ability to opt out of ADR processes involving automated technologies; (b) access to an explanation of how the system operates and why it arrived at its resolution, so parties can challenge or appeal the decision; and (c) comprehensive privacy-preserving security measures for systems that use, process or extract sensitive data about individuals.”

How AI Rules Will Become ADR Rules
Both AI and ADR are regulated through rules that apply to more general areas, including privacy and advertising practices.47 Rules that apply to ADR, such as conflict disclosures, also apply to AI used in ADR. The emerging body of rules for AI will likewise apply to ADR.

AI is already part of many ADR processes. As AI capabilities improve, AIDR adoption will grow, and traditional ADR systems will face pressure to incorporate AI. The CEPEJ, for instance, is already directing EU member states to identify areas and sectors that could be made more effective and efficient by online ADR and supports their “use of technologies in ADR through the adoption of soft law instruments,” such as guidelines and recommendations.48

In 2020, the European Parliament’s Committee on Legal Affairs suggested that deployers control AI system risks and should therefore bear liability for AI-generated harms.49 This reasoning may make human neutrals liable for harms caused by AI systems in ways they would not have been had they caused similar harm directly. For example, a neutral may be liable for using an AI system that operates with a systemic racial bias. This enhanced liability may encourage greater attention to AIDR system design, procurement, and deployment.

Human decision-making cannot be interrogated in the same way as an AI system. Though practitioners can be held liable for racially motivated behavior, a human neutral will rarely admit to racial bias. Instead, they are likely to justify an award in a reasoned decision based on permissible criteria. Even where conscious or unconscious bias can be detected, such a finding is unlikely to provide adequate grounds for challenging a particular award’s validity. And even where very clear patterns emerge, such as a neutral ruling against disputants of a particular race at a statistically significant rate, it will be very difficult to prove causation. Human neutrals are rarely held accountable or disciplined for errors or biases in judgment. AI systems, by contrast, can be evaluated for statistical error or bias, and reprogrammed or decommissioned if revealed to be producing inaccurate or invalid outputs. Emerging technologies can therefore drive unique ADR accountability mechanisms; a hypothetical audit of this kind is sketched below.
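As a concrete illustration of such an evaluation, the sketch below tests whether a system's historical awards differ across two groups at a statistically significant rate. The counts are invented for illustration; a real audit would use the system's actual decision logs and a more careful causal analysis before drawing conclusions.

```python
# A hypothetical bias audit: unlike a human neutral's private reasoning, an
# AI system's full output history can be tested directly for group disparities.
# All counts below are invented for illustration.

from scipy.stats import chi2_contingency

# Hypothetical counts of awards for/against disputants in two groups.
#                 ruled for   ruled against
contingency = [[420,          180],    # group A
               [310,          290]]    # group B

chi2, p_value, dof, expected = chi2_contingency(contingency)
print(f"chi2={chi2:.2f}, p={p_value:.4f}")
if p_value < 0.05:
    print("Outcome rates differ significantly; audit the model before redeploying.")
```

A disparity flagged this way does not by itself prove discrimination, but it gives a reviewable, quantitative trigger for retraining or decommissioning a system, an accountability lever that has no ready equivalent for human neutrals.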

If emerging rules hold AI systems to higher standards than human neutrals, such as enhanced transparency and explainability, then these rules may help overcome some of the long-felt needs in ADR governance.

ENDNOTES

  1. Courts and Tribunals Judiciary, “Artificial Intelligence (AI) Guidance for Judicial Office Holders,” (2023), https://www.judiciary.uk/wp-content/uploads/2023/12/AI-Judicial-Guidance.pdf.
  2. The State Bar of California, Practical Guidance for the Use of Generative Artificial Intelligence in the Practice of Law (2023), https://www.calbar.ca.gov/Portals/0/documents/ethics/Generative-AI-Practical-Guidance.pdf.
  3. The Florida Bar, “Proposed Advisory Opinion 24-1 Regarding Lawyers’ Use of Generative Artificial Intelligence – Official Notice” (2023), https://www.floridabar.org/the-florida-bar-news/proposed-advisory-opinion-24-1-regarding-lawyers-use-of-generative-artificial-intelligence-official-notice/.
  4. Jessiah Hulle, AI Standing Orders Proliferate as Federal Courts Forge Own Paths, Bloomberg Law (2023), https://news.bloomberglaw.com/us-law-week/ai-standing-orders-proliferate-as-federal-courts-forge-own-paths.
  5. Hon. Bernice Bouie Donald, Hon. James C. Francis IV, et al., Generative AI and Courts: How Are They Getting Along? PLIChronicle (2023), https://www.jamsadr.com/files/uploads/documents/articles/francis-james-pli-generative-ai-1023.pdf.
  6. Ryan Abbott, The Reasonable Robot: Artificial Intelligence and the Law, Cambridge University Press, 22 (2020).
  7. D. A. Waterman & Mark A. Peterson, Models of Legal Decisionmaking, The Institute for Civil Justice, RAND Corporation, 1-74 (1981).
  8. Dave Orr & Colin Rule, Artificial Intelligence and the Future of Online Dispute Resolution, Artificial Intelligence and Its Impact on the Future of ADR Conference, Albany, NY State Bar Association (2019).
  9. Carrie Menkel-Meadow, Ethics in Alternative Dispute Resolution: New Issues, No Answers from the Adversary Conception of Lawyers’ Responsibilities, Symposium – The Lawyer’s Duties and Responsibilities in Dispute Resolution, South Texas Law Review 38, 407-455 (1997).
  10. Laurie Kratky Dore, Public Courts Versus Private Justice: It’s Time to Let Some Sun Shine in on Alternative Dispute Resolution, Chicago-Kent Law Review 81, 463-520 (2006).
  11. Deborah R. Hensler, Our Courts, Ourselves: How the Alternative Dispute Resolution Movement Is Re-Shaping Our Legal System, Dickinson Law Review 122, 349-382 (2017).
  12. Elizabeth Rolph, Erik Muller, et al., Escaping the Courthouse: Private Alternative Dispute Resolution in Los Angeles, Journal of Dispute Resolution 2, 277-324 (1996).
  13. For this reason, AI is sometimes referred to as the “fourth party” in an ADR process, with AI developers being the “fifth party” due to their control over AI’s underlying rules, logic, and training data.
  14. John Zeleznikow, Using Artificial Intelligence to Provide Intelligent Dispute Resolution Support, Group Decision and Negotiation 30, 789-812 (2021).
  15. Davide Carneiro, Paulo Novais, et al., Online Dispute Resolution: An Artificial Intelligence Perspective, Artificial Intelligence Review 41, 211-240 (2014).
  16. United Nations Commission on International Trade Law, “Status: UNCITRAL Model Law on International Commercial Arbitration (1985), with Amendments as Adopted in 2006” (2006).
  17. Commentators are generally more skeptical of automative AIDR than assistive systems because their outputs can indirectly or directly shape case outcomes, sometimes with little to no human oversight or intervention, as in the case of automated decision making.
  18. Casetext, “Casetext Unveils CoCounsel, the Groundbreaking AI Legal Assistant Powered by OpenAI Technology,” PR Newswire (Mar. 1, 2023), https://www.prnewswire.com/news-releases/casetext-unveils-cocounsel-the-groundbreaking-ai-legal-assistant-powered-by-openai-technology-301759255.html.
  19. For example, an AI system developed by Cambridge University researchers predicted the outcomes of 775 financial ombudsman cases with greater accuracy (87%) than a group of 100 experienced lawyers (62%). Jason Tashea, Artificial Intelligence Software Outperforms Lawyers (Without Subject Matter Expertise) in Matchup, ABA Journal (Nov. 3, 2017).
  20. Dovilė Barysė & Roee Sarel, Algorithms in the Court: Does It Matter Which Part of the Judicial Decision-Making Is Automated?, Artificial Intelligence and Law 8, 1-30 (2023).
  21. Sterling Miller, The Problems and Benefits of Using Alternative Dispute Resolution, Thomson Reuters (Apr. 29, 2022), https://legal.thomsonreuters.com/en/insights/articles/problems-and-benefits-using-alternative-dispute-resolution.
  22. British Columbia Civil Resolution Tribunal 2023, https://civilresolutionbc.ca/.
  23. Civil Resolution Tribunal, 2022/2023 Annual Report, https://civilresolutionbc.ca/wp-content/uploads/CRT-Annual-Report-2022-2023.pdf.
  24. Orr & Rule, supra note 8.
  25. Joe McKendrick & Andy Thurai, AI Isn’t Ready to Make Unsupervised Decisions, Harvard Business Review, 15 (Sept. 2022), https://hbr.org/2022/09/ai-isnt-ready-to-make-unsupervised-decisions.
  26. Mareike Schoop, Aida Jertila, et al., Negoisst: A Negotiation Support System for Electronic Business-to-Business Negotiations in E-Commerce, Data & Knowledge Engineering 47:3, 371-401 (2003).
  27. Anastasia Shuster, Lilah Inzelberg, et al., Lie to My Face: An Electromyography Approach to the Study of Deceptive Behavior, Brain and Behavior 1-12 (2021).
  28. Robert J. Condlin, Online Dispute Resolution: Stinky, Repugnant, or Drab?, University of Maryland School of Law, Faculty Scholarship, 717-758 (2017).
  29. John G. Roberts, Jr., 2023 Year-End Report on the Federal Judiciary, https://www.supremecourt.gov/publicinfo/year-end/2023year-endreport.pdf.
  30. United Nations Commission on International Trade Law, UNCITRAL Expedited Arbitration Rules (2021).
  31. European Commission for the Efficiency of Justice (CEPEJ), Guidelines on Online Alternative Dispute Resolution, 1-14 (2023), https://rm.coe.int/cepej-2023-19final-en-guidelines-online-alternative-dispute-resolution/1680adce33.
  32. For example, neutrals must disclose conflicts of interest or biases impacting their ability to be independent and impartial; give parties reasonable opportunities to present their cases and treat them equally; conduct fair and efficient hearings, avoiding unnecessary delays and expenses; make decisions about the relevance, admissibility, and weight of disputants’ evidence; and provide reasoning for their award decisions (Expedited Arbitration Rules 2021).
  33. CEPEJ, supra note 31.
  34. CEPEJ, Assessment Tool for the Operationalisation of the European Ethical Charter on the Use of Artificial Intelligence in Judicial Systems and Their Environment, 1-17 (2023), https://rm.coe.int/cepej-2023-16final-operationalisation-ai-ethical-charter-en/1680adcc9c.
  35. Id.
  36. CEPEJ, European Ethical Charter on the Use of Artificial Intelligence in Judicial Systems and Their Environment, Council of Europe 1-79 (2018), https://rm.coe.int/ethical-charter-en-for-publication-4-december-2018/16808f699c.
  37. Council of the European Union, Artificial intelligence act: Council and Parliament strike a deal on the first rules for AI in the world (2023), https://www.consilium.europa.eu/en/press/press-releases/2023/12/09/artificial-intelligence-act-council-and-parliament-strike-a-deal-on-the-first-worldwide-rules-for-ai/.
  38. On January 22, 2024, Luca Bertuzzi, Technology Editor at Euractiv, shared unofficial versions of the consolidated text online: https://drive.google.com/file/d/1xfN5T8VChK8fSh3wUiYtRVOKIi9oIcAF/view.
  39. Supantha Mukherjee, Martin Coulter, and Foo Yun Chee, Explainer: What’s next for the EU AI Act?, Thomson Reuters (Dec. 14, 2023), https://www.reuters.com/technology/whats-next-eu-ai-act-2023-12-14/.
  40. European Commission, Artificial Intelligence – Questions and Answers (updated Dec. 14, 2023), https://ec.europa.eu/commission/presscorner/detail/en/QANDA_21_1683.
  41. Id.
  42. On October 30, 2023, the European Parliament’s Internal Market and Consumer Protection Committee (IMCO) and Committee on Legal Affairs (JURI) voted to approve the amendments, https://ec.europa.eu/commission/presscorner/detail/en/ip_22_5807.
  43. According to the European Commission, “The AI Act and the AI Liability Directive are two sides of the same coin: they apply at different moments and reinforce each other.” https://ec.europa.eu/commission/presscorner/detail/en/QANDA_22_5793.
  44. Information Commissioner’s Office, Explaining Decisions Made with Artificial Intelligence (2020), https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/explaining-decisions-made-with-artificial-intelligence/.
  45. International Association of Privacy Professionals, U.S. State Privacy Legislation Tracker, https://iapp.org/resources/article/us-state-privacy-legislation-tracker/.
  46. White House Office of Science and Technology Policy, Blueprint for an AI Bill of Rights—Making Automated Systems Work for the American People (2022).
  47. Michael Atleson, Keep Your AI Claims in Check, Federal Trade Commission (2023), https://www.ftc.gov/business-guidance/blog/2023/02/keep-your-ai-claims-check.
  48. CEPEJ, supra note 31.
  49. Committee on Legal Affairs. “Draft Report with Recommendations to the Commission on a Civil Liability Regime for Artificial Intelligence,” European Parliament (2020), https://www.europarl.europa.eu/RegData/etudes/STUD/2020/621926/IPOL_STU(2020)621926_EN.pdf.

Ryan Abbott MD, Esq., FCIArb, is a Mediator and Arbitrator at JAMS, New York. He can be reached by emailing rabbott@jamsadr.com. Brinson S. Elliott is with The Cantellus Group in San Francisco, advising clients on the strategy, oversight, and governance of AI and other frontier technologies. She can be reached at brinson.elliott@cantellusgroup.com.

This article was adapted by the authors from a previous, longer piece: Ryan Abbott and Brinson S. Elliott, “Putting the Artificial Intelligence in Alternative Dispute Resolution – How AI Rules Will Become ADR Rules,” Amicus Curiae, The University of London School of Advanced Study (2023), https://journals.sas.ac.uk/amicus/article/view/5627.