Judges Who Benefit From AI Technology Must Avoid Its Hazards

July 30, 2025, 8:30 AM UTC; updated July 30, 2025, 2:59 PM UTC

Judges have a hard but critical job that is at the core of both justice and democracy. That job is about to become even more onerous due to artificial intelligence, as shown by a court opinion withdrawn last week that contained errors possibly caused by AI. How judges handle this challenge will be crucial for maintaining and building public confidence in our courts.

Like professionals in every field, judges can benefit greatly from AI. At a time of rising caseloads, shrinking court budgets, and demands for ever-faster justice, judges and courts can’t afford to disregard the efficiencies and conveniences that AI offers. Judges and courts across the country have stepped up to this challenge, finding ways to improve their performance using AI.

Some courts are utilizing AI to automate filing of cases, saving money and time. Others have developed chatbots to better communicate with prospective jurors and litigants. And the Arizona Supreme Court now uses an AI avatar to explain its decisions in plain English using a script reviewed and approved by the justice who wrote the original opinion, helping citizens to better understand important court rulings. These are all valid and useful applications of AI.

But AI also presents risks to courts and judges. The most notorious danger is that generative AI systems can sometimes hallucinate—that is, create fake facts or citations. Many attorneys have already fallen victim to this: a database tracking AI fabrications that attorneys have filed with courts now lists more than 230 examples.

For several years, I have been giving judges seminars about AI, and I always conclude by warning that it is probably only a matter of time before a judge includes a fake AI-generated citation in an opinion, and that no judge wants to be the one who makes that mistake.

That warning about being the first is now moot: the first instance of a judge likely including AI fabrications in an opinion has been reported. A federal district court judge in New Jersey withdrew a decision on July 23 that contained misstated quotes and fake citations.

Although the judge hasn’t admitted that the errors resulted from using AI, the case has all the hallmarks of an AI hallucination. The incident occurred just a few weeks after a Georgia appeals court issued a decision overturning a lower court opinion that contained fabricated citations also likely generated by AI. And over in Mississippi, lawyers in the state attorney general’s office are asking a federal judge why a withdrawn order included errors such as made-up parties and quotes.

Such cases can undermine public confidence in courts at a time when the role of trustworthy courts has never been more important. To prevent this problem from recurring, it’s important to understand how these fake citations happen.

The large language models underlying generative AI tools such as ChatGPT, Gemini, and Claude are easy and convenient to use, but they don’t truly understand the text they produce. Rather, their outputs are probabilistic guesses about which words should come next, and those guesses are often right, but not always. The companies behind these AI tools have made some progress in reducing the frequency of hallucinations, but they haven’t eliminated them.

So how might these hallucinations end up in judicial decisions? There are several possible pathways. One is that judges themselves may use AI in writing their opinions. Influential guidelines approve the use of AI by judges to help research cases; review lengthy pleadings, expert reports, deposition and trial transcripts, and court precedents that may be relevant to a case; and edit a draft opinion.

But judges should use AI to draft language in their opinions only with extreme caution, and they should never let AI decide a case. Justice requires that a duly appointed human judge make the actual decisions. And if a judge uses AI in any way to craft a decision, they must carefully and independently check the accuracy of the facts, quotes, and citations in the draft opinion.

Another potential pathway for hallucinations in judicial opinions runs through judicial clerks, whom federal judges and higher-level state judges employ. These clerks, usually recent law school graduates, assist judges in researching and preparing opinions. Most law students today use AI, so they are likely to use such tools in their research and drafting for judges.

If a clerk uses AI to draft text and doesn’t correct the fake citations it generates, the judge may incorporate those fake citations into the opinion. The solution isn’t to prohibit clerks from using AI, as such tools increase their efficiency. Moreover, when organizations ban AI, many employees still use it surreptitiously, a practice known as shadow AI, where it is less likely to be supervised and checked. Instead, judges must either check the citations in their clerks’ text themselves or confirm that the clerks have validated every citation and quote.

Finally, fake citations can sneak into judicial opinions through briefs filed by litigants. In the past, judges could generally rely on citations in briefs filed by experienced lawyers, but that is no longer the case. The judge or their clerks must now check the validity of every citation in the parties’ briefs, especially if the court will rely on or cite to those precedents.

All of these steps will increase the workload of already overburdened judges. But the consequence of not conducting such checks is that more judicial opinions will contain fake citations and quotes, undermining credibility and public confidence in our courts. Neither justice nor democracy can afford further AI-related embarrassments.

This article does not necessarily reflect the opinion of Bloomberg Industry Group, Inc., the publisher of Bloomberg Law, Bloomberg Tax, and Bloomberg Government, or its owners.

Author Information

Gary Marchant is a law professor who teaches, researches and speaks about AI and the law at Arizona State University’s Sandra Day O’Connor College of Law.

To contact the editors responsible for this story: Daniel Xu at dxu@bloombergindustry.com; Jada Chin at jchin@bloombergindustry.com
