- DLA Piper attorneys say federal rules must address AI issues
- Evidence authentication should account for deepfakes
A patient walks into a clinic with an unspecified malady. The patient provides a tiny sample of blood, answers some questions, then artificial intelligence generates a medication regimen tailored to the patient’s unique metabolic profile. Or a patient’s incipient tumor, otherwise undetectable by human pathologists, is diagnosed by AI image processing tools.
This isn’t far-fetched science fiction. Artificial intelligence is increasingly pervasive across various sectors, aiding in information gathering, analysis, prediction, and content generation. Consequently, AI-generated outputs will increasingly become substantive evidence in litigation. Their admissibility, however, may raise novel issues under the Federal Rules of Evidence, particularly whether AI-generated evidence can be authenticated and whether such evidence, if offered for the truth of the matter asserted, is hearsay under the FRE.
Authenticity
Introducing AI-generated evidence presents significant authentication challenges. Under Rule 901, authentication requires the proponent to proffer evidence sufficient to support a finding that the item is what the proponent claims it to be. Rule 901(b) provides examples of authenticating evidence that meet this standard, including testimony of a witness with knowledge, or evidence describing a process or system and showing that it produces “an accurate result.”
Outputs from generative AI may not prove easy to authenticate precisely because the system generates content independently, and it is often unclear, even to an expert, how the content was produced. That is what led at least one New York state court to hold that, “due to the nature of the rapid evolution of artificial intelligence and its inherent reliability issues,” a hearing should be held to test the reliability of any outputs before AI-generated evidence is admitted. (Generative AI is a type of artificial intelligence capable of generating new content in response to a submitted prompt by learning from a large reference database of examples.)
The importance of establishing the reliability of AI-generated evidence has also caught the attention of the US Courts Advisory Committee on the FRE. Recognizing the unique nature of AI and the new challenges it poses for the evidentiary system, the committee proposed amendments to the FRE to address authenticity and the distinct problems stemming from deepfakes. The committee proposed expanding Rule 901(b)(9) to require proponents of AI-generated outputs to produce evidence that the outputs are “reliable,” in contrast to the current term, “accurate.” The proponent would additionally have to produce evidence that “describes” the training data and the software or program, and to show that the AI system produced reliable results “in this instance.”
The proposed amendments also address the growing concern over deepfakes. The committee formulated a two-step burden-shifting test: given the pervasiveness of deepfake claims, the objecting party must first show that a jury could reasonably find the evidence has been manipulated. If that showing is made, the evidence is admissible only if the proponent demonstrates that it is, more likely than not, authentic.
While proposing these rules on authenticity, the committee recognized some tension in injecting concepts of reliability into questions of authenticity: there may be times when a proponent intends to offer unreliable evidence. Authenticity is meant only to determine whether the evidence is what the proponent says it is, a question separate from reliability. Some on the committee therefore favored addressing the unique admissibility issues of AI-generated evidence under the rules governing expert evidence rather than those governing authenticity.
The committee thus proposed a new rule, Rule 707, which would subject AI-generated outputs to the same admissibility requirements as expert testimony under Rule 702. To ensure Rule 702’s requirements are met, the committee contemplates that courts would examine the inputs used by the AI system, ensure that the objecting party has adequate access to the AI system to assess its functionality, and determine whether the process has been validated under sufficiently similar circumstances.
Hearsay
Although AI-generated outputs may face reliability challenges because they are machine-generated, that same characteristic helps machine results overcome hearsay objections. The hearsay rule and its exceptions presuppose a human declarant, whereas an AI-generated statement has none. For example, in United States v. Washington, statements made by diagnostic machines were not considered “out-of-court statements made by declarants” and thus not “subject to the Confrontation Clause” in the criminal context. The Fourth Circuit further found that “[o]nly a person may be a declarant and make a statement,” and that outputs from machines are not hearsay.
Similarly, in United States v. Channon, machine-generated transaction records were deemed outside the parameters of Rule 801. A New Mexico court also held that because the “programs make the relevant assertions, without any intervention or modification by a person using the software,” hearsay rules don’t apply. Thus, so long as the output is AI-generated and lacks human intervention, litigants seeking to offer the output in evidence, even for the truth of the matter asserted, are likely to overcome hearsay objections.
As AI technology continues to evolve, so too will the legal frameworks governing its admissibility in court. Understanding the nuances of authentication, reliability, and hearsay will be essential when dealing with AI-generated evidence. Given AI’s complexity, its outputs may increasingly be subject to standards that mirror those for expert testimony while remaining beyond the reach of hearsay objections. Litigants should stay current on developments in AI technology, the case law, and the evolving rules of evidence to competently litigate the admissibility of AI-derived outputs.
This article does not necessarily reflect the opinion of Bloomberg Industry Group, Inc., the publisher of Bloomberg Law and Bloomberg Tax, or its owners.
Author Information
Allen Waxman, of counsel at DLA Piper, has been a trial lawyer, national counsel in mass tort cases, former general counsel and head of litigation at Pfizer, and CEO of a dispute resolution organization.
Jason Kort, litigation associate at DLA Piper, is a trial lawyer who defends companies in complex litigation including mass torts and commercial disputes.
Marcelo Barros, an associate at DLA Piper, is a patent litigator who represents technology, medical device, and life sciences clients.