The most interesting artificial intelligence-related development in litigation right now has nothing to do with hallucinated case law or obviously flawed legal citations. It’s the growing number of filings that look polished on the surface but signal that no lawyer has engaged with the record or legal research.
The citations are real. The cases exist. The language is confident. Yet the analytical reasoning—or lack of it—reveals that the arguments were assembled by an AI tool rather than developed through a trained advocate’s independent judgment.
This shift matters to judges who rely on counsel to surface real analysis grounded in the record. It matters to clients who hire lawyers for judgment and legal insight, not formatting. And it should matter to members of a profession built on duties of competence, candor, and accountability.
An AI-drafted brief isn’t merely lazy. When filed without meaningful attorney involvement, it undermines the very premise of qualified legal representation.
AI Misuse
The problem with AI misuse isn’t hypothetical.
Our office received an opposition brief that referred to an attorney declaration that didn’t exist and cited arguments drawn from a privileged, client-facing memo. The case law was real, and the citations checked out.
But the arguments were based on assumptions and unproven facts and bore no connection to the actual record. This wasn’t written by a lawyer who understood the case. It was drafted by an AI tool that was given instructions and little human oversight.
In another matter, opposing counsel’s filings became more polished and, for the first time, arrived on time. At first glance, that looked like progress for the opposing party.
On closer inspection, however, every authority appeared by name only. There were no proper citations, no pin cites, and no page or line references to support the key arguments. It was clear that the authorities all came from our own prior briefing, mirrored back to us with the opposite conclusion asserted. The tool had copied the form of our advocacy without its substance.
More egregiously, a seemingly well-drafted lawsuit was filed against one of our clients alleging multiple contractual breaches. None of the quoted provisions appeared anywhere in the contract at issue. The alleged obligations were an amalgam stitched together from other lawsuits the AI tool had ingested and repurposed to fit the prompt.
We recently received a proposed stipulation that was generated by AI. It recited general statements about the case, likely pulled from the complaint, but lacked the specific details required by the judge’s procedures. This was particularly egregious because the proposed stipulation arrived after direct conversations with opposing counsel discussing our concerns about the need to address these exact court requirements. The stipulation ignored all of that.
In each of these examples, a licensed lawyer signed and filed or, at a minimum, served the document on opposing counsel. On paper, the formalities were satisfied. The deeper problem was authenticity. These documents didn’t reflect a lawyer’s own analysis, judgment, or understanding of the case. They were the work of a bot concealed by a bar number.
The Problem
Courts can address legal hallucinations by checking whether a cited case exists. The impersonation problem is different. It's much harder to prove that a licensed lawyer had no meaningful input into the work product they submitted.
That difficulty places professional responsibility on a collision course with new technology. Litigation requires strategic choices: which facts matter, which risks are worth taking, what to concede, when to fight, how your opponent may respond and how you will reply.
This isn’t a new ethical problem. It’s an old one wearing modern clothes. AI-generated work without adequate review implicates the duty of competence, the duty of candor, the duty of supervision, and the duty to exercise independent legal judgment on behalf of the client.
A Useful Tool
None of this means lawyers should avoid AI. When used responsibly, AI can be an exceedingly useful tool. As the technology evolves, AI can be integrated into a litigator's toolkit, much as online research databases were two decades ago.
AI excels at catching typos, summarizing voluminous documents, and helping locate analogous authority within a defined universe of real case law.
But the available AI tools all currently have limits. AI doesn’t truly understand a record. It doesn’t weigh credibility. And it shouldn’t be permitted to substitute its output for the professional judgment of a licensed attorney.
AI can assist. It can’t be the lawyer.
Responding to AI
There are practical steps that keep the focus where it belongs: on good lawyering.
- Treat authenticity as a core issue. When a filing feels generic, disconnected from the actual record, or strangely divorced from prior conversations, don’t assume the lawyer was just rushed.
- Test the work product at the seams. AI-generated documents often fail at the points where real cases and facts meet.
- Use discovery and disclosures to surface misuse. When a filing cites to AI-drafted or non-existent material, serve targeted discovery requests that can expose where a bot was asked to fill in for a human.
- Reserve sanctions for true abuse. Requests for sanctions should be reserved for clear, provable patterns of carelessness or deception. Judges don’t want to be full-time AI monitors, but they will respond when it’s clear that a lawyer has abdicated their responsibilities.
Clients hire experienced litigators because they want someone who knows the rules and the record, and who can stand up in a courtroom and respond in real time.
No tool can capture the judgment required to pivot when a witness says something unexpected, when a judge poses a hard question, or when a case takes a sudden turn.
The path for lawyers who care about their craft is straightforward. Use AI to make your work sharper, cleaner, and more efficient when appropriate, with client consent.
If a lawyer chooses to let a bot do their lawyering, that may be a problem for their clients and cases. An authentic, engaged advocate who understands the facts and the law will always be the better choice than a chatbot dressed up as counsel.
This article does not necessarily reflect the opinion of Bloomberg Industry Group, Inc., the publisher of Bloomberg Law, Bloomberg Tax, and Bloomberg Government, or its owners.
Author Information
Jodi K. Swick is founding partner of McDowell Hetherington’s California office and a highly regarded trial lawyer.
Jarrett E. Ganer is a partner at McDowell Hetherington who focuses on complex commercial litigation.