ANALYSIS: Sanctions for Fake Generative AI Cites Harm Clients

April 3, 2024, 9:00 AM UTC

Courts are grappling with what sanctions to impose when a party submits pleadings containing fake, AI-generated cases. Should they strike the offending pleadings—the option with the greatest consequence for the client—or sanction only the submitting attorney?

If the offense is the hallucination and fabrication of case law, rather than the use of AI itself, a recent Ninth Circuit case offers a cautionary example of what courts may do.

Fabrication in the Ninth

The Ninth Circuit in March struck an opening appellate brief that referenced fabricated case law and misrepresented facts and case holdings. Although it’s not clear from the docket or oral argument whether generative AI was used to gather the fictitious cases or prepare the brief, the case resembles others where attorneys cited cases fabricated by ChatGPT.

Before oral argument, the Ninth Circuit ordered counsel for the appellants to be prepared to discuss two cases cited in her brief. At argument, the court questioned the lawyer about the cases, saying they appeared fabricated. She said one case was cited incorrectly and didn’t apply, so she wouldn’t rely on it. When asked about the other, she said only that it “would have to be distinguished.”

Noting that counsel didn’t acknowledge the fabrication or provide any “meaningful support” for her clients’ claims during oral argument, the court struck the opening brief and dismissed the appeal. The court left in place the summary judgment granted by the lower court in the other side’s favor.

Notably, the court didn’t specifically ask counsel during oral argument if generative AI was used in preparing the brief.

As similar accounts of fabricated case law accumulate with the ever-increasing use of generative AI, the legal community may be wondering how best to handle these submissions and the attorneys who haphazardly prepare them. Should clients suffer the consequences of their counsel’s failures by losing their right to litigate—whether it is to pursue a case, assert a defense, file a dispositive motion, or appeal an adverse decision?

Who should be held responsible for the carelessly prepared court filings?

Clients in the Dark

While some clients may review pleadings and briefs before they’re filed, they rarely, if ever, check the citations to make sure that the cases actually exist and stand for the proposition asserted. That’s counsel’s job—at least that’s what they’re paid to do. They’re the ones who went to law school, where they hopefully learned how to research and cite (real) cases to support their arguments.

Not to mention, clients may not even know if their attorneys are using generative AI or have the wherewithal to tell their attorneys not to use it. And they may not necessarily want to issue such an instruction, since in some cases AI has the potential to reduce litigation costs—when used wisely.

It seems unfair to hold clients responsible for their attorneys’ misconduct. By contrast, if a client fails to produce relevant discovery despite counsel’s best efforts, sanctions against that client may be more appropriate. Those sanctions could include an order precluding litigation of certain issues, an order striking part or all of a pleading, default judgment, or dismissal of an action or affirmative defense.

How to Proceed

There are other ways to reprimand attorneys for their inappropriate use of generative AI that don’t jeopardize the rights of their unsuspecting clients. The court could admonish the attorney, impose monetary sanctions, or refer their conduct to the appropriate bar or disciplinary committee.

For example, in the infamous Mata v. Avianca, Inc. case, the court imposed Rule 11 sanctions on the two New York plaintiffs’ attorneys who cited AI-fabricated case law, but it didn’t strike the associated pleading. The plaintiff wasn’t afforded a do-over on the opposition, however, and the court ultimately decided the motion to dismiss in the defendant’s favor. It’s not clear whether the outcome would have been different had the plaintiff filed a more robust opposition, but at least the plaintiff didn’t lose the opportunity to be heard.

Some courts have taken measures to prevent the use of generative AI, and the appearance of fabricated case citations, in court filings. Others have issued standing orders directing attorneys to certify their use of generative AI and to confirm the accuracy of their research.

Unless courts decide that parties shouldn’t bear the brunt of the sanctions for their counsel’s use of fabricated cases, attorneys should be very careful with their use of generative AI. One misstep may mean a call to their malpractice insurer.

Bloomberg Law subscribers can find related content on our Litigation Intelligence Center and In Focus: Artificial Intelligence pages.


To contact the reporter on this story: Golriz Chrostowski in Arlington, VA at gchrostowski@bloombergindustry.com

To contact the editor responsible for this story: Melissa Heelan at mstanzione@bloomberglaw.com
