How AI Hallucinations Are Tripping Up Lawyers and Novices Alike

Oct. 1, 2025, 8:30 AM UTC

The Bottom Line

  • Even as courts hand out punishments for failing to catch generative artificial intelligence hallucinations, litigants continue to rely too heavily on the technology.
  • Courts are growing less lenient with pro se litigants and licensed attorneys, with punishments ranging from warnings to major fines.
  • Firms must improve training and internal policies for GenAI use to avoid financial and reputational harm.

Nearly two years have passed since the first federal court sanctioned attorneys for citing nonexistent cases created by generative artificial intelligence. Despite that case’s lessons—the importance of candor to the court, the dangers of blind reliance on novel and ill-understood technology, and the need for litigants to understand how GenAI works—the rate at which litigants have been cited for misuse of GenAI in the courtroom has steadily increased.

Indeed, far beyond the handful of headline-grabbing instances in which a party cited a hallucinated (incorrect or nonexistent, AI-generated) case, we count a whopping 66 opinions in which a court has reprimanded or sanctioned litigants for the misuse of GenAI.

At this rate, hallucinated cases are a noteworthy contaminant in the courts, one that demands more aggressive training and attention. In recent months, at least two federal judges have published (and promptly withdrawn) written opinions bearing the hallmarks of GenAI hallucinations, referencing “improper parties and factual allegations” and allegedly containing “numerous” citation inaccuracies.

Although a substantial number of litigants whom courts have called out for the improper use of GenAI have been pro se, licensed attorneys continue to face such charges. Courts have issued new standing orders, local rules, case-specific guidance, and written decisions addressing parties’ reliance on GenAI tools. And while many courts have imposed specific new requirements governing courtroom GenAI usage, others have relied solely on existing procedural and ethical rules to hold litigants accountable for citing fictitious cases.

As of Sept. 30, we have identified and are monitoring a combined 232 local rules, standing orders, and court opinions concerning GenAI use and misuse in the courtroom.

Pro Se Misuse

To those unacquainted with the complexities of litigation, GenAI offers a tantalizing alternative to traditional legal research. Many of the decisions levying sanctions for GenAI abuse arise in the context of self-represented parties. As one court has observed, misuse of GenAI “is becoming an increasing problem among pro se parties.”

Some pro se plaintiffs have learned the hard way that overreliance on these tools can result not just in reprimands from the court, but in monetary sanctions and even dismissal of their claims.

In Aponte v. Portfolio Recovery Associates and Al-Hamim v. Star Hearthstone LLC, pro se plaintiffs included AI-generated citations in their briefing. Although in both cases the courts declined to impose sanctions, they cautioned the litigants regarding improper use of GenAI.

In Aponte, the court accepted the pro se plaintiff’s explanation and apology for the false citations but recommended that he “cultivate a healthy skepticism” regarding “materials that are (or could have been) generated by artificial intelligence.”

Some pro se litigants are spared sanctions for using GenAI to assist with briefing, likely given their perceived lack of familiarity with the mechanics of preparing filings. Nevertheless, courts have made clear that the liberal construction typically afforded pro se filings is no excuse for hallucinated citations, likening the use of GenAI for drafting to “ghostwriting,” which “‘evades the requirements’ of Rule 11.”

Not all pro se litigants have enjoyed leniency. In Morgan v. Community Against Violence, the court dismissed a pro se plaintiff’s claims with prejudice when her filings included multiple citations to “several fake or nonexistent opinions.”

In a subsequent ruling, after the party failed to change her behavior, the court declined to impose monetary sanctions but required her to show cause why she shouldn’t be barred from proceeding without representation, given her “citations to hallucinated cases,” missed filing deadlines, and violations of local rules.

Likewise, in Thomas v. Pangburn, the Southern District of Georgia dismissed a pro se plaintiff’s amended complaint based on its citations to various cases “that simply did not exist.” The plaintiff could neither identify the sources relied on while drafting the complaint nor determine where the “sham cases” came from.

Some courts have explained that self-represented litigants may not be fully aware of the risks of using GenAI tools when preparing court filings. In Dukuray v. Experian Information Solutions, the court declined to impose sanctions on a pro se plaintiff after acknowledging pro se litigants typically lack access to legal research databases and may be unaware of GenAI’s risk of “generating fake case citations.”

But other pro se plaintiffs have been taken to task for their misuse of GenAI despite these disadvantages. In Kruse v. Karlen, the pro se appellant indicated he was unaware that his consultant, who held himself out as an attorney and assisted with briefing, would use “artificial intelligence hallucinations.”

While the Kruse court acknowledged the unique challenges pro se parties face, it concluded the AI-associated errors were more than minor. The appellant was ordered to pay $10,000 of the opposing party’s attorneys’ fees associated with identifying the AI-generated work.

Decisions such as these—as well as rules specifically cautioning pro se litigants on the use of GenAI—demonstrate that courts are becoming more willing to hold lawyers and non-lawyers to a uniform standard.

Lawyer Misuse

Courts have increasingly sanctioned or penalized licensed attorneys, and sometimes entire law firms, when counsel misuses or improperly relies on GenAI tools for legal work. Despite lawyers’ obligations of candor to the court, there have been at least 19 cases to date in 2025 in which lawyers were found to have likely abused GenAI in their filings and were rebuked.

Courts are increasingly concerned about potentially having to expend judicial resources to ensure that parties’ citations and quotes are accurate and to address misuse. And although most US courts haven’t imposed specific rules or a standing order governing GenAI use, judges haven’t shied away from addressing misuse.

Penalty amounts vary, but as abuse mounts and courts grow more frustrated, harsher penalties appear to be on the rise.

In Michael Evans v. Execushield Inc., the court reprimanded plaintiffs’ lawyers for using a tool, which the court said it understood to be a program likely employing some form of AI, to help prepare a brief. A law clerk informed the responsible attorney that two nonexistent cases “did not support the assertions,” yet neither that attorney nor any of the other six attorneys named in the brief took any “corrective action to fix the misrepresentations.”

After recounting a long list of other cases in which attorneys presented “fake opinions,” the court concluded that none of the lawyers in the case were “adequate to represent the putative class and decline[d] to appoint them” as class counsel. The court further stated that the knowing inclusion of false cases constitutes “a wholesale fabrication of quotations and a holding on a material issue” which “violates, among other things, the Rules of Professional Responsibility, the Business & Professions Code, and the [California] Code of Civil Procedure.”

Although the court in Michael Evans didn’t impose sanctions, it did order the attorneys to file a copy of the court’s order in “all other actions” in the county court where any of the counsel was an attorney of record.

In Lacey v. State Farm General Insurance Co., a court-appointed special master issued a scathing opinion declaring that “Plaintiff’s use of [Gen]AI affirmatively misled me,” where “nine of the 27 legal citations in the ten-page brief were incorrect in some way,” two cited authorities “do not exist at all,” and several quotations were “phony and did not accurately represent” the cited materials.

Plaintiff’s counsel admitted to using a GenAI tool to draft the brief, and the court found that “[n]o attorney or staff member at either [of the two] firms apparently cite-checked or otherwise reviewed that research.” After chastising the attorneys for reckless, bad-faith conduct, the special master imposed a range of sanctions on both the law firms and the plaintiff.

Most recently, in Coomer v. Lindell (a defamation case involving the CEO of My Pillow Inc.), several attorneys admitted they had run a brief through AI but didn’t cite-check it. The federal district court found multiple errors, including “citation of cases that do not exist.” After holding that “Rule 11 applies to the use of artificial intelligence,” the court imposed sanctions for the Rule 11 violation against an attorney and his law firm ($3,000 jointly and severally) and against another attorney ($3,000 individually).

Attorneys’ improper use of GenAI can result in reputational and financial consequences for the individual attorney who misused the tool and for entire law firms. When abuses are egregious enough, clients may also face substantial consequences, such as losing valuable time finding new counsel while statutes of limitations continue to run, or worse, being denied their requested relief altogether.

In Kohls v. Ellison, the court excluded an expert’s testimony after finding the expert had cited fake, AI-generated sources in his declaration submitted under penalty of perjury. The court explained that even if the inclusion of fake citations was unintentional, this error “shatters [the expert’s] credibility” with the court and that “trust was broken.”

The court further emphasized that expert declarations under penalty of perjury aren’t just a formality but serve to alert “‘declarants to the gravity of their undertaking and thereby have a meaningful effect on truth-telling and reliability.’” It also noted that fake citations waste money and court resources, and they harm the legal system, so courts shouldn’t allow any wiggle room for parties that use fake citations.

Attorneys must be mindful of the consequences their actions may cause their clients to suffer. Not all attorney misuse of GenAI has resulted in sanctions or more severe consequences. Still, the rise in GenAI misuse and courts’ growing expectation that lawyers “should know better by now” signal a decline in consequence-free reprimands.

Key Takeaways

Despite these risks, GenAI remains a powerful and promising vehicle. Especially as models are refined and designed specifically for legal research and related functions, their capacity to empower litigants will continue to grow.

As the federal district court for the District of Oregon recently emphasized, “a basic internet search seeking guidance on whether it is advisable to use AI tools to conduct legal research or draft legal briefs will explain that any legal authorities or legal analysis generated by AI needs to be verified.”

Before submitting any filing, practitioners using GenAI should check every citation against a verified legal research database to ensure the cited cases exist and are quoted accurately. An hour spent reviewing a final submission can save dozens of additional hours (likely uncompensated) spent defending a motion for sanctions, along with the attendant reputational (and potentially financial) fallout.
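For firms that want to automate a first pass at that review, the sketch below shows one possible approach. It is illustrative only: the citation pattern is deliberately simplified, and the lookup_citation helper and KNOWN_CITATIONS set are hypothetical stand-ins for whatever verified research database a firm actually uses, not any real service’s API.

```python
import re

# Hypothetical stand-in for a verified legal research database.
# A real workflow would query Westlaw, Lexis, or a similar service;
# no real API is assumed here.
KNOWN_CITATIONS = {
    "123 F.4th 456",  # placeholder entry, not a real opinion
}

def lookup_citation(cite: str) -> bool:
    """Return True if the citation resolves in the verification source."""
    return cite in KNOWN_CITATIONS

# Simplified volume-reporter-page pattern (e.g., "123 F.4th 456").
# Real citation grammar is far more varied; a production tool should
# use a dedicated citation parser instead of a single regex.
CITATION_RE = re.compile(r"\b(\d{1,4})\s+([A-Z][A-Za-z0-9.]{0,14}?)\s+(\d{1,5})\b")

def audit_brief(text: str) -> list[str]:
    """Return every citation in a draft that could not be verified."""
    flagged = []
    for match in CITATION_RE.finditer(text):
        cite = " ".join(match.groups())  # "volume reporter page"
        if not lookup_citation(cite):
            flagged.append(cite)
    return flagged

if __name__ == "__main__":
    draft = "Compare 123 F.4th 456 with Smith v. Jones, 999 F.9th 111 (2025)."
    for cite in audit_brief(draft):
        print(f"UNVERIFIED: {cite} -- confirm before filing")
```

Even with tooling like this, the final check belongs to a human reader: automated extraction will miss unconventional citation formats, and only a verified database can confirm that a quoted passage actually appears in the cited opinion.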

This article does not necessarily reflect the opinion of Bloomberg Industry Group, Inc., the publisher of Bloomberg Law, Bloomberg Tax, and Bloomberg Government, or its owners.

Author Information

Amy Jane Longo is partner in Ropes & Gray’s litigation and enforcement practice group, where she focuses on SEC enforcement matters and the defense of securities and other class action cases.

Shannon Capone Kirk is the managing principal and global head of advanced e-discovery and AI strategy at Ropes & Gray.

Isaac Sommers, Lauren Brady, and Jake Barr contributed to this article.

To contact the editors responsible for this story: Max Thornberry at jthornberry@bloombergindustry.com; Melanie Cohen at mcohen@bloombergindustry.com
