Proposed AI Evidentiary Rules Punted Due to Lack of Consensus

May 7, 2026, 5:04 PM UTC

The committee responsible for revising and updating federal evidentiary rules can’t agree on whether to proceed with proposals to address machine-generated evidence and deepfakes, let alone on the best approach.

After a couple of hours of debate Thursday, the Advisory Committee on Evidence Rules decided to table the proposed amendments until its meeting next fall, when the panel can bring in tech experts and litigators to do a deep dive on challenges presented by artificial intelligence.

The move means delaying the process for another year, but that’s not a high price to ensure the committee is doing the right thing, said Southern District of New York Judge Jesse M. Furman, who took over as chair of the committee in 2024.

The Justice Department’s representative on the committee, Elizabeth Shapiro, supported the idea of gathering more input from stakeholders, particularly on the criminal side. DOJ’s discomfort is with how the proposals might play out in criminal cases, she said.

The committee has been considering how to deal with the challenges posed by AI, including how to address authenticating hard-to-detect audio and visual fakes, since 2023.

Under one of the proposed rules, machine-generated content would be subject to a four-part reliability test similar to the one employed for screening expert witnesses in federal cases. That rule would come into play when AI performs the analytical work usually reserved for human experts.

The other potential amendment would require someone putting forward evidence challenged as possibly AI-fabricated to prove the material is more likely than not authentic. The person challenging the evidence would need to show that an inquiry is justified, at which point the burden would shift to the proponent of the evidence to prove its authenticity.

The Innocence Project and other outside stakeholders have urged the committee to proceed. And Daniel Capra, a Fordham University School of Law professor and the committee’s reporter, cited examples of cases where AI evidence has presented problems for courts.

But attitudes about how quickly the committee needs to act—and whether there’s an immediate problem to solve—varied significantly.

“To not be vigilant is to make a mistake,” committee member James P. Cooney III said, citing “extreme urgency.”

Other committee members weren’t so convinced the issue requires immediate action, saying existing rules are working well enough to address AI.

Laws on obstruction and contempt do a pretty good job of keeping fabricated evidence out of courtrooms, and courts have been dealing with forgeries forever, Second Circuit Judge Richard J. Sullivan said.

“I’m not sure that deepfakes are that unique,” he said.

To contact the reporter on this story: Holly Barker in Washington at hbarker@bloombergindustry.com

To contact the editor responsible for this story: Ellen M. Gilmer at egilmer@bloomberglaw.com
