AI Use in Law Practice Needs Common Sense, Not More Court Rules

Feb. 28, 2024, 9:30 AM UTC

In his 2023 year-end report on the federal judiciary, Chief Justice John Roberts highlighted the promise and perils of artificial intelligence for the legal system. “AI obviously has great potential to dramatically increase access to key information for lawyers and non-lawyers alike,” he wrote. “But just as obviously it risks invading privacy interests and dehumanizing the law.”

At this early stage, lawyers and judges have only a limited sense of how AI will affect the practice of law or the enterprise of judging. It would therefore be wise to pause and gather more data before taking action. But some judges are falling all over themselves to create AI-specific orders, rules, and disclosure requirements.

At least 21 federal trial judges have already issued standing orders regarding AI, according to Bloomberg Law. The Fifth Circuit is considering a proposal that would require attorneys to confirm that they checked the accuracy of AI-generated material. The Ninth Circuit has created an AI committee that could end up proposing AI-related rules, and so has the Third Circuit. State courts are convening AI committees as well.

I have a simple message for judges who are thinking about adopting AI-specific orders and rules.

Just. Say. No.

One can understand why judges have felt the need to take action. AI is dominating everything from newspaper headlines to cocktail-party chatter. There’s a reason why Roberts made it a focus of his annual report.

And stories about “AI fails” by lawyers have gone viral, such as the tale of two Manhattan lawyers who filed a brief in federal court that cited nonexistent cases generated—or “hallucinated,” to use the technical term—by ChatGPT. But do a few highly unusual fiascos, involving a small number of attorneys, justify imposing additional rules and requirements on all lawyers?

“ChatGPT, and AI in general, can be misused,” acknowledged Ross Guberman, CEO of BriefCatch, which produces legal-editing software that includes AI features. “But I’m concerned that some courts are issuing sweeping anti-AI rules based on anecdotes.”

I see several problems with judges saddling lawyers with AI-specific rules and requirements. First and foremost, they’re simply not necessary.

“Any such rules are redundant given lawyers’ existing responsibilities to ensure the accuracy of their court filings,” said lawyer and legal journalist Bob Ambrogi, publisher of the legal-technology blog LawSites. “Rule 11 in the federal courts and similar state rules require lawyers to certify the factual and legal accuracy of their pleadings. Professional responsibility rules impose similar requirements. To create a new ‘accuracy’ rule specifically related to the use of AI is unnecessary.”

AI is just another tool in a lawyer’s toolkit. If lawyers build something defective with their tools, that’s the fault of the lawyers—which is why existing rules target lawyers, not the specific tools they use.

“A lawyer’s use of AI to assist in drafting is no different than the use of an associate or of legal editing software,” Ambrogi told me. “No matter how the draft was prepared, the lawyer is ultimately responsible for its contents. When a lawyer submits a filing with fictitious or erroneous citations, the fault is the lawyer’s, not the technology’s.”

Or as lawyer and legal commentator Carolyn Elefant put it, “GenAI is nothing more than the canary in the coal mine. The toxic uses are all completely human.” Like the canary, an AI disaster merely signals a deeper problem: incompetent or unethical lawyers. There’s nothing wrong with the canary.

“In every case so far of filings containing hallucinated cases, the fault was either in a lawyer who did not bother to even check the cases or a self-represented litigant who did not know better,” Ambrogi added. “This is simply bad lawyering—and bad lawyering that existing rules are more than sufficient to address, as sanction awards have already demonstrated.”

Exhibit A: the lawyers from the ChatGPT debacle. Judge Kevin Castel sanctioned them using good old-fashioned Rule 11—no fancy new AI rule needed.

Second, AI-targeting rules and requirements carry significant costs. In addition to forcing lawyers to spend (or waste) time and money on compliance, they send a negative message that could discourage attorneys from exploring the technology’s many positive uses.

As Elefant wrote of one judge’s standing order on AI, it “isn’t just duplicative, but dangerous.” Such orders have the potential to “stymie innovation and scare lawyers from using a powerful tool.”

New AI rules “might seem harmless, but lawyers take court pronouncements seriously,” Guberman of BriefCatch said. “The risk is that the entire profession will miss out on AI’s vast potential for enhancing both lawyering and access to justice.”

Third, rules aimed at AI create tension between clients and their outside law firms, according to Alex Su of Ironclad, an AI-powered contracts software company.

How so? Corporate legal departments are under pressure from CEOs and other top executives to leverage AI—which is why in-house lawyers are embracing AI much more quickly than law firms.

But when they encourage their outside counsel to use AI solutions for efficiency and cost savings, they often get pushback, partly because lawyers at firms, who deal more directly with courts and judges than in-house counsel do, are getting negative messaging about AI from the judiciary.

“Law firms are already conservative about AI, and understandably so,” Su told me. “But when courts impose all these new rules and disclosures, they push things a bit too far.”

Finally, if we must have AI-specific rules, it would be nice to have uniformity.

“While I do not believe any new rules are needed, I am also concerned about the Babel-like approach we’re seeing so far, of individual courts adopting their own rules,” Ambrogi said. “If there is to be rulemaking around this, it should be done in a uniform and deliberative manner.”

The prospect of complying with a welter of competing, inconsistent rules counsels in favor of individual judges holding off on AI-specific requirements for now. Instead, jurists concerned about AI should advocate within the judiciary for a broader, coordinated response.

As Roberts’ report made clear, AI isn’t going anywhere. There will be plenty of time to develop new rules, if necessary, once lawyers and judges have a better sense of the actual problems and pitfalls.

In Roberts’ words, using AI “requires caution and humility.” And judicial efforts to regulate the use of AI by lawyers require caution and humility as well.

David Lat, a lawyer turned writer, publishes Original Jurisdiction. He founded Above the Law and Underneath Their Robes, and is author of the novel “Supreme Ambitions.”
