The growing prevalence of artificial intelligence hallucinations in lawyers' filings is driving a nationwide crisis of denied justice that underscores the need for mandatory reporting of AI-related sanctions.
What initially appeared to be isolated or unwitting errors have, in fact, imposed substantial and systemic costs on the judiciary. Filings containing fabricated authorities consume scarce judicial resources, delay the resolution of meritorious cases, and undermine confidence in the integrity of the adjudicative process.
Publicly reported cases involving AI-generated hallucinations now exceed 550 nationwide, according to informal public tracking. But the true number is unknown and likely much higher.
These incidents span district courts and courts of appeals and cut across an expanding range of practice areas. Despite the scope and persistence of the problem, there remains no centralized, judiciary-wide mechanism for tracking sanctions or remedial measures imposed in response to AI misuse.
Amending the law to require mandatory reporting of AI-related sanctions would provide clarity, support judicial administration, and promote accountability, without intruding on judicial independence.
Dearth of Data
The absence of coordinated data collection has left courts without the information necessary to respond in a consistent or effective manner. The result is a fragmented and uneven judicial response that has thus far failed to meaningfully deter or arrest the trend.
That gap matters for judicial administration, court resources, and public confidence in the integrity of the legal profession and federal judiciary.
Congress should address it directly by requiring the Administrative Office of the US Courts to report AI-related sanctions and fee awards.
Early AI hallucination cases were treated as curiosities and anomalies. A lawyer submits a brief citing non-existent authority, the court issues a sharply worded order and sanctions, and the episode circulates briefly on legal social media before fading from public view.
That approach no longer works.
With instances now numbering in the hundreds, courts are spending meaningful time and resources identifying fabricated authorities, issuing show-cause orders, conducting hearings, and drafting sanctions opinions. Clerks are fielding follow-up motions, and offending lawyers are appealing their sanctions, further burdening court resources.
Judges are responding with standing orders, disclosure requirements, and individualized enforcement mechanisms. All of this consumes judicial resources that are already under strain.
The federal judiciary currently lacks any standardized data showing how often AI misuse leads to sanctions, the nature of the misuse, and any trends across courts or practice areas. The information exists only in scattered opinions, local orders, and anecdotal reporting.
This lack of transparency is not inevitable. Congress has already recognized, in another context, that sanctions data is important enough to warrant mandatory reporting.
Bankruptcy Model
The director of the Administrative Office of the US Courts is required to collect and annually publish statistics on sanctions imposed and damages awarded against debtors' counsel under Federal Rule of Bankruptcy Procedure 9011. That reporting requirement has existed for years. It has not interfered with judicial independence, expanded sanctioning authority, or chilled legitimate advocacy. It has simply provided visibility into how sanctions are being used.
The logic applies with equal, if not greater, force to AI-related misconduct across the federal judiciary. The problem has been tracked anecdotally and in informal databases, but the true numbers are not clear. Absent centralized and reliable data collection, we cannot assess how broadly this issue is straining our courts.
Given that AI-hallucinated filings could span all civil practice areas, the proper statutory vehicle for reform is 28 U.S.C. § 476, which governs the enhancement of judicial information collection and dissemination. That section already requires the director of the Administrative Office to publish standardized, public reports on judicial activity, using uniform categorization standards established under 28 U.S.C. § 481, which addresses public access to case information.
An amendment to include AI-related sanctions reporting would be structurally coherent and institutionally familiar. It would not regulate AI use, mandate sanctions, or alter substantive law. It would simply require the judiciary to report, in aggregate form, on sanctions that judges have already imposed. The absence of centralized and reliable reporting obscures trends that matter.
Are AI hallucination cases concentrated in certain jurisdictions or practice areas? Are judicial standing orders mandating AI disclosures actually deterring hallucinated filings? Do larger sanctions deter offenders, or should bar referrals and disbarment become the operative deterrent? Should courts experiment with new judicial rules on AI, such as the Hyperlink Rule, to compel accurate cite-checking by attorneys?
Without data, neither Congress nor the judiciary can answer these questions in a meaningful way.
Better Rulemaking
Mandatory reporting would also support better rulemaking. Judicial councils and committees cannot calibrate standing orders, disclosure requirements, or training initiatives without knowing the scope of the problem they are trying to address.
A reporting requirement is the narrowest possible intervention. It does not expand judicial power, mandate enforcement, or stigmatize proper AI use. It simply brings visibility to misconduct that courts have already determined warrants sanction.
AI is now a permanent feature of legal practice. And so are the risks associated with its misuse. Congress has already acknowledged, in the bankruptcy context, that sanctions data is worth tracking. Extending that logic to AI-related sanctions across the federal judiciary is both modest and overdue.
This article does not necessarily reflect the opinion of Bloomberg Industry Group, Inc., the publisher of Bloomberg Law, Bloomberg Tax, and Bloomberg Government, or its owners.
Author Information
Oliver Roberts is an adjunct professor of law at Washington University in St. Louis School of Law, co-head of Holtzman Vogel’s AI practice group, and founder and CEO of Wickard.ai.