NY Federal Judge Questions if Avoiding AI Could Be Malpractice

Jan. 13, 2026, 10:00 PM UTC

A Manhattan district court judge who chairs a committee on evidence rules questioned whether attorneys' failure to use AI could amount to legal malpractice, underscoring the fast-moving debate around use of the technology in courtrooms.

“I heard somebody say employers are risking malpractice by relying too much on AI,” said Judge Jesse Furman on Tuesday at a New York State Bar Association panel discussion on artificial intelligence. “I think there may come a point where it’s the opposite—where you’re committing malpractice if you don’t incorporate AI into your practice.”

Furman, of the US District Court for the Southern District of New York, said he could “imagine and foresee a day” where an attorney avoiding AI leads to a fee dispute, for example.

Fees for “thousands of hours” could be contested by someone arguing they are “unreasonable, because all these tasks that were done by a lawyer could have been done in 30 seconds by an AI tool,” Furman said.

The comments come as federal and state courts, amid ongoing debate over the issue, implement a patchwork of AI rules with varying degrees of permissiveness.

As chair of the judiciary’s Advisory Committee on Evidence Rules, Furman helped craft a proposal addressing the reliability of machine-generated evidence. That rule is now in a public comment period.

Furman on Tuesday also expressed concerns about fellow judges being too cautious on AI use.

“We’re going to talk about the risks of using it, from hallucinations to bias to confidentiality, and those are real,” he said. “And I think for that reason, a lot of people, certainly a lot of judges, are taking a wait-and-see approach, or even sticking their heads in the sand. But I think there will come a point where these tools are so powerful that to not use them is itself a problem.”

Furman’s fellow panelist, criminal defender Barry Scheck, expressed concern that AI isn’t being more carefully monitored as it infiltrates the profession. He pointed to California prosecutors whose filed arguments for jailing a defendant were later alleged to contain AI-generated errors.

“This is serious business,” he said. “People in large practices are being hauled in front of disciplinary committees” for AI-error-ridden filings, and more should be disclosed about which tools are causing errors and how they happened, he said. “In science, we call this remediation.”

To contact the reporter on this story: Mike Vilensky at mvilensky@bloombergindustry.com

To contact the editor responsible for this story: Sei Chong at schong@bloombergindustry.com
