AI-Fueled Alternative Dispute Resolution Is Law’s Next Frontier

Nov. 1, 2024, 8:30 AM UTC

Artificial intelligence and alternative dispute resolution are both expanding rapidly in the legal field. AI aims to enhance efficiency through technological advancements, while ADR achieves it through procedural innovation.

As AI’s capabilities improve, the convergence of AI and ADR is inevitable, and it could fundamentally change our judicial system.

In the US, AI in ADR generally has been limited to document analysis, legal research, evaluating case arguments, generating offers, marketing, and billing. However, the full realization of AI’s potential in the legal field is the creation of an autonomous AI court system that functions as an alternative to our traditional court system. No human judges, juries, or lawyers.

This new form of dispute resolution would feature AI agents representing litigants and AI decision-makers adjudicating cases. Disputes could be resolved in minutes, decision-makers could be coded with comprehensive knowledge of the law and objectivity, and litigants would receive equally competent representation. In this system, the long-sought ideals of blind justice and equal protection under the law could be more achievable than ever.

The convergence of AI and ADR reflects a natural progression driven by clients seeking to reduce litigation time and costs. While autonomous AI court systems are years away, the path toward this endpoint soon would raise fundamental questions about the need for, and efficacy of, traditional legal institutions.

Compared with backlogged traditional courts, an autonomous AI court system could offer a highly efficient method of resolving disputes. Parties first would submit their claims, facts, defenses, and desired relief to an AI-powered online platform. The plaintiff would be supported by an AI agent, which would review the case details, consult with the plaintiff, gather additional information, research legal precedents, and present the strongest argument possible.

The defendant’s AI agent would undertake the same steps in preparing the defense. Once both parties have prepared their cases, the AI decision-maker would assess the arguments and render a decision—potentially in minutes, or even seconds. Although the idea of a legal system without human lawyers may be unsettling, this new form of ADR could level the playing field. AI decision-makers and AI-powered legal representation could create a justice system that is truly impartial.

Free from political pressures, coded for maximum objectivity, and equipped with comprehensive legal knowledge, AI decision-makers might deliver more equitable and consistent rulings than humans. Litigants could receive equally competent representation, helping to improve fairness. By reducing delays, an AI court system could achieve the swift justice that our current judicial system often struggles to provide.

However, three key issues hinder the development of an AI court system.

First, the technology required to create an autonomous AI court system doesn't yet exist. AI-powered tools still fall short of the capabilities of human lawyers. Large language models remain prone to hallucinations and biases, are unable to handle complex legal inquiries from start to finish, and lack the ability to empathize or understand the human aspects of legal disputes.

Second, the process of refining the large language model that forms the foundation of the AI decision-maker would spark widespread debate and controversy. The core issue lies in how the AI decision-maker is created and what data is used to train it, which together would determine whether it can deliver fair outcomes.

Bias in training data could influence decisions unintentionally, and the choice of whether to train LLMs solely on legal precedents or to include cultural, social, and local norms introduces further questions about fairness and justice. Additionally, AI court systems may struggle to gain legitimacy and public trust. Without community buy-in and a robust system for appeals and oversight, AI-driven decisions could face resistance and be viewed as illegitimate.

And third, the potential for malfeasance in an AI court system is undeniable. Bad actors with access to the algorithms, data, and codebase powering the AI platform could alter the internal workings of the underlying models, influence case outcomes, and sow distrust.

Assuming technological advancements address these challenges, the first adopters of an AI court system would likely be local governments, developing countries, or large corporations.

Local governments could use AI court systems for simple, non-controversial disputes, such as parking tickets or minor civil violations. In developing countries, AI court systems could offer more reliable, transparent justice and counteract corruption.

By providing consistent and fair dispute resolution, these systems could foster stronger institutional values and encourage foreign investment. But this would come with many challenges because developing countries typically lack the resources, technological capabilities, and capacity to implement such novel technology.

In both the local government and developing-country contexts, AI court systems could be introduced in a hybrid model, with AI platforms assisting human decision-makers. Long testing phases would refine the AI system, allowing for gradual improvements and assessments before a complete transition.

More likely, the very first adopters of AI dispute resolution platforms would be large companies—and, by extension, unwitting consumers—due to strong financial incentives. It’s already common for companies to include ADR clauses in consumer agreements, compelling consumers to resolve disputes through arbitration instead of traditional courts.

Two recent high-profile examples involving the Walt Disney Co. and Uber Technologies Inc., in which the companies sought to compel arbitration of claims over a customer's death and consumers' injuries, highlight how far companies will go to keep litigation out of traditional courts.

Simultaneously, large companies are rapidly integrating AI into their operations to cut costs and increase efficiency. Given these incentives and aggressive approaches to mitigating legal risks, companies soon may adopt AI-powered dispute resolution platforms as a cost-effective, risk-reducing alternative.

Is the promise of faster, lower-cost access to justice and equal representation worth the tradeoff of removing human decision-makers from the legal process? As we approach the possibility of autonomous AI court systems, such ethical questions are also inevitable.

This article does not necessarily reflect the opinion of Bloomberg Industry Group, Inc., the publisher of Bloomberg Law and Bloomberg Tax, or its owners.

Author Information

Oliver Roberts is co-head of Holtzman Vogel's AI practice group and CEO and co-founder of Wikard, a legal AI technology firm.


To contact the editors responsible for this story: Rebecca Baker at rbaker@bloombergindustry.com; Daniel Xu at dxu@bloombergindustry.com
