AI Fake Citations Expose Lawyer Sloppiness and Training Gaps

June 23, 2025, 9:00 AM UTC

An onslaught of AI-generated hallucinations in court filings shows lawyers haven’t yet learned how to finesse their use of the rapidly changing technology, even as the financial and reputational risks of citing fake cases are expected to climb.

Lawyers and ethics experts say solving the problem will require more training on generative AI and renewed attention to detail—i.e., checking their work. But the more hallucinations appear in court filings, the angrier judges will get and the more damage will be inflicted on the legal industry, FisherBroyles partner Anthony Davis said.

“Lawyers are going to be harmed, and clients are going to be harmed,” said Davis, who advises lawyers on professional responsibility. “When lawyers do this, they totally lose credibility with the court, which can only serve to the detriment of their clients.”


In a different world, AI-generated hallucinations in court filings would have stopped two years ago, when New York attorneys generated a month of headlines across the national media by citing fake cases. Since then, the American Bar Association has released guidance on AI use and the technology has been a featured topic at prominent legal conferences.

But the fake case citations aren’t stopping. The Court of Appeals for the Fifth District of Texas this month imposed a $2,500 sanction against an attorney who filed a brief citing multiple non-existent cases. Damien Charlotin, a Paris-based researcher on AI and the law, has compiled a list of at least 155 decisions in cases around the world involving hallucinated content.

‘Sloppy’ Lawyering

Davis said the best way for lawyers to avoid hallucinated citations is for them to be better lawyers. Fake cases pop up because lawyers don’t do a fundamental part of their jobs: reading the cases they cite.

“You must read the case, and these stories keep happening because lawyers aren’t doing the basic requirements,” he said.

When they cite fake cases in briefs, lawyers violate the ABA’s Model Rules of Professional Conduct, which impose duties of competence and diligence, Davis said. The ABA’s rules also impose a duty to supervise, so attorneys overseeing the lawyers who draft these briefs are also falling short, he said.

The ABA last year reminded attorneys in a formal opinion that using generative AI tools could affect how they abide by its rules regarding competence, confidentiality, client communication, supervision, and candor toward the court.

“With the ever-evolving use of technology by lawyers and courts, lawyers must be vigilant in complying with the Rules of Professional Conduct to ensure that lawyers are adhering to their ethical responsibilities and that clients are protected,” the ABA said.

The judge in the Texas AI hallucination case found that the lawyer who submitted fake cases “violated basic duties of competence and candor as contemplated by the rules governing professional conduct.”

Worst of all, sloppy lawyers are harming their clients. In multiple cases, filings tainted by AI hallucinations have been withdrawn by the lawyers or tossed out by a judge.

Hallucinations are a technological flaw of gen AI, but their continued appearance in legal documents is a failure of lawyering, said Steven A. Delchin, a senior attorney at Squire Patton Boggs.

“It’s really exposing the sloppy lawyers who aren’t doing the job, or the supervising attorneys who aren’t making sure that the work product that they’re getting from junior lawyers has been fully cite-checked,” said Delchin, who has served on the ABA’s AI Ethics Working Group.

That sloppiness gets worse when lawyers are under time pressure or stressed out and looking for shortcuts, he said. But it’s not only lawyers and clients who have an incentive to beat back hallucinations.

“The judiciary has an institutional reason why they need to stop this AI hallucination trend, because they’re going to get inundated with these AI hallucination cases, and it’s going to take up too many of the court’s resources,” Delchin said.

Texas Supreme Court Chief Justice Jimmy Blacklock said his court should take a close look at the impact of AI and “whether we need to regulate or prohibit this in some way to protect our justice system.”

Training and Education

Generative AI is a trendy topic in legal circles. But almost three years since the launch of ChatGPT, there’s still a lack of understanding around AI and how large language models produce outputs.

“The need for AI education in this country far surpasses what we have currently,” said Sarah Hammer, an adjunct professor at the University of Pennsylvania Carey Law School.

Training is only one piece of the puzzle. There’s a difference between hearing someone talk about AI while you’re answering emails during a conference panel, experts said, and actually working with the technology. The pitfalls of AI use aren’t readily apparent until a lawyer uses the tool firsthand.

Big firms have a better chance of keeping their training on pace with the rapid advance of AI, said Mark G. McCreary, chief artificial intelligence and information security officer at Fox Rothschild. Even then, those firms need their attorneys to pay attention to the training they offer, he said.

To prevent hallucinations, courts have begun reminding lawyers to check their AI-generated briefs for accuracy. About 40 U.S. courts have issued standing orders addressing AI use. Requiring lawyers to certify that they’ve checked their briefs for AI-caused inaccuracies, or mandating disclosure of AI use, could ward off more hallucinated citations, some attorneys said.

Penalties so far have typically ranged from $1,000 to $5,000, though in a California case last month a judge issued a penalty of $31,000. Fines might climb as judges lose their patience, Davis said.

There’s one final habit that lawyers may need to unlearn if they’re going to use AI properly, McCreary said in an email.

“Some attorneys may just be too trusting,” he said. “After all, the chatbot sounded so confident, and lawyers are used to listening to confident voices that bill by the hour.”

To contact the reporter on this story: Evan Ochsner in Washington at eochsner@bloombergindustry.com

To contact the editors responsible for this story: Catalina Camia at ccamia@bloombergindustry.com; David Jolly at djolly@bloombergindustry.com
