Lawyer Sanctioned $6,000 for AI-Generated Fake Legal Citations

May 29, 2025, 3:02 PM UTC

A federal judge imposed a $6,000 sanction on an attorney who filed briefs containing AI-generated citations to nonexistent cases.

The lawyer, who’s representing an Indiana excavation company in a dispute with a multiemployer benefit fund, admitted to using generative AI to draft briefs that included “hallucination cites” to fictitious cases. Judge James Patrick Hanlon imposed a $6,000 sanction on him for this conduct, explaining that the amount balances the need to deter reckless attorney conduct against mitigating factors like the attorney’s efforts to educate himself on responsible AI use and the harm he’s already suffered to his professional reputation.

Fake legal citations generated by AI have become an increasingly common problem for judges to sift through; one data analyst has compiled a public database of 120 such incidents. These missteps have sometimes led to attorney sanctions: a Wyoming federal judge recently ordered attorneys involved in hallucinated filings to pay penalties ranging from $1,000 to $3,000, and a Texas federal judge last year assessed a $2,000 penalty and imposed continuing legal education requirements on an attorney in similar circumstances.

Hanlon’s order, issued Wednesday in the US District Court for the Southern District of Indiana, partially adopts a federal magistrate judge’s recommendation on the matter. In that report, Magistrate Judge Mark J. Dinsmore said a higher sanction of $15,000 would be appropriate because prior sanctions assessed against attorneys who’ve made AI-related errors have “evidently failed to act as a deterrent” here.

Dinsmore had strong words for the attorney, Rafael Ramirez of Rio Hondo, Tex.

“It is one thing to use AI to assist with initial research, and even non-legal AI programs may provide a helpful 30,000-foot view,” Dinsmore said in the report. “It is an entirely different thing, however, to rely on the output of a generative AI program without verifying the current treatment or validity—or, indeed, the very existence—of the case presented. Confirming a case is good law is a basic, routine matter and something to be expected from a practicing attorney.”

The case is Mid Cent. Op. Eng’rs Health & Welfare Fund v. HoosierVac LLC, S.D. Ind., No. 2:24-cv-00326, 5/28/25.
To contact the reporter on this story: Jacklyn Wille in Washington at jwille@bloombergindustry.com

To contact the editor responsible for this story: Brian Flood at bflood@bloombergindustry.com