Videotaped depositions have long been a routine feature of modern litigation—efficient, reliable, and strategically useful for impeachment and settlement leverage. Video testimony has become especially common for high-profile executives and public officials.
What was once considered a routine evidentiary record, however, has the potential to become something else: high-quality source material for increasingly accessible generative AI systems.
Artificial intelligence has made deepfakes—convincingly realistic fabricated videos—faster, cheaper, and more accessible than ever.
Only a few years ago, creating a credible deepfake required substantial audio and video footage. That threshold has dropped steadily. Researchers and developers have demonstrated that increasingly minimal samples—sometimes just seconds of audio and a single image—can be sufficient to generate persuasive results. Widely available and low-cost tools have made this capability available to virtually anyone.
Why depositions are valuable. Deepfake systems are trained on authentic audio and video. The more controlled and high-quality the source material, the more convincing the output. Videotaped depositions offer nearly ideal conditions: clear, uninterrupted footage of a subject’s face; professional-grade resolution; controlled lighting; minimal background noise; and extended, articulate speech. Witnesses are instructed to speak clearly and answer fully, precisely the conditions that make AI training data most effective.
The setting itself compounds the risk. A CEO answering questions at a conference table, visibly under scrutiny, already carries an implicit narrative of accountability. A manipulated version of that footage, edited to suggest an admission of wrongdoing or a damaging disclosure, would arrive pre-loaded with contextual credibility.
The reputational and financial stakes are substantial, and they are not merely hypothetical. In 2024, a Hong Kong finance worker was duped into transferring approximately $25 million during a video call in which the company’s chief financial officer and several other company employees—all deepfakes—appeared to authorize the transaction.
A fabricated video appearing to show an executive acknowledging fraud or a public official making inflammatory remarks can spread globally in minutes. Automated trading systems can trigger irreversible effects before any correction reaches the same audience. The threat extends to extortion: Bad actors may demand payment to suppress fabricated footage, and the existence of authentic deposition recordings lowers both the technical barrier and the plausibility threshold for such schemes.
The Liar’s Dividend. There is a secondary threat that deserves equal attention. Beyond fabricating false statements, bad actors—including state-level adversaries—can flood the information environment with enough synthetic content that authentic recordings become suspect. Scholars call this the “Liar’s Dividend”: real evidence gets dismissed as fake, encouraging disengagement and eroding institutional trust. Courts will need to continue refining evidentiary authentication standards to preserve confidence in recorded testimony.
None of this suggests that deposition testimony is inherently unreliable. Courts have long navigated evolving technologies, from digital photographs to body-camera footage, through authentication rules and expert testimony. The challenge isn’t abandoning video evidence but modernizing its stewardship.
Traditional protective orders may be insufficient. Existing legal protections were designed for a pre-AI era. Protective orders restrict disclosure and reproduction, but they don’t account for the possibility that a recording could be algorithmically transformed into fabricated content. They do nothing once a file is leaked, stolen, or improperly accessed—and deposition recordings may be stored on third-party platforms, transmitted via unsecured file-sharing links, and downloaded to personal devices with no retention policy.
Counsel should treat video deposition files as sensitive digital assets: stored on encrypted platforms, distributed only to essential personnel, with access logs and clear destruction timelines once litigation concludes. Modern protective orders should explicitly prohibit AI-generated manipulation and restrict use of footage as training data.
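To make that custody discipline concrete, the sketch below shows, in Python, one minimal piece of it: fingerprinting a recording with a cryptographic hash and logging each access against that fingerprint, so any copy that later surfaces can be checked byte-for-byte against the original. The file names and log format here are hypothetical, and a real implementation would live inside a firm’s document-management or e-discovery platform rather than a standalone script.

```python
import hashlib
import json
import time
from pathlib import Path


def sha256_fingerprint(path: Path, chunk_size: int = 1 << 20) -> str:
    """Compute a SHA-256 fingerprint of the file, reading in 1 MB chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()


def log_access(log_path: Path, recording: Path, user: str, action: str) -> None:
    """Append an access entry (JSON Lines) pairing the event with the file hash."""
    entry = {
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "file": recording.name,
        "sha256": sha256_fingerprint(recording),
        "user": user,
        "action": action,
    }
    with log_path.open("a", encoding="utf-8") as log:
        log.write(json.dumps(entry) + "\n")


if __name__ == "__main__":
    # Hypothetical file names, for illustration only.
    recording = Path("deposition_smith_2024-03-12.mp4")
    if recording.exists():
        log_access(Path("access_log.jsonl"), recording, user="jdoe", action="download")
```

Because any alteration to the file changes its SHA-256 hash, a log of this kind doubles as a tamper-evidence record: if a suspect copy circulates later, its hash either matches the original or it doesn’t.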
For some high-profile witnesses, the calculus around video depositions may need revisiting entirely. In rare, high-risk contexts, counsel may weigh whether the marginal strategic value of video testimony justifies the additional digital exposure—a judgment that will depend heavily on the specific facts, forum, and the nature of the witness.
Technical countermeasures. Some vendors are experimenting with imperceptible watermarking or signal-layer interventions designed to preserve authenticity and deter misuse, though these tools must be balanced against evidentiary clarity and admissibility concerns.
On the authentication side, the Coalition for Content Provenance and Authenticity, whose members include Adobe Inc., BBC, Google, Meta Platforms Inc., Microsoft Corp., and OpenAI, is developing content credentials that allow good-faith actors to cryptographically verify the authenticity of video. Platforms including YouTube and Meta are in the early stages of deploying flags that identify and surface authentic content.
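The cryptographic idea underneath such credentials can be illustrated in a few lines. The sketch below is not the C2PA specification or any platform’s actual API; it simply shows how a detached digital signature, created when a recording is produced, lets anyone holding the signer’s public key confirm that a file is unaltered. It assumes Python’s third-party cryptography package.

```python
import hashlib

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# At creation time, the recording vendor hashes the video and signs the digest.
# (The byte string below stands in for video bytes streamed from disk.)
video_bytes = b"...deposition video bytes..."
digest = hashlib.sha256(video_bytes).digest()

signer_key = Ed25519PrivateKey.generate()  # vendor's private signing key
signature = signer_key.sign(digest)

# Later, anyone with the vendor's public key can check a copy of the file.
public_key = signer_key.public_key()
received_digest = hashlib.sha256(video_bytes).digest()
try:
    public_key.verify(signature, received_digest)
    print("File matches the signed original.")
except InvalidSignature:
    print("File has been altered or is not the signed original.")
```

Real content-credential schemes embed signed provenance manifests in the file itself and chain them through edits, but the trust model is the same: authenticity reduces to a verifiable signature, not to how convincing the pixels look.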
Pindrop, which has received significant investment from former Cisco CEO John Chambers through his fund JC2 Ventures, focuses specifically on preventing deepfakes in voice communications. Major financial institutions have reportedly been cautious about voice authentication given these vulnerabilities—a prescient posture that litigation practice has yet to fully absorb.
Securing the record. The legal profession’s duty of technological competence extends to this evolving threat. Courts, bar associations, and organizations such as the Sedona Conference are actively wrestling with authenticity issues posed by AI, including deepfakes, but to date, those efforts have focused largely on preventing the injection of deepfakes into court proceedings.
Yet the converse scenario also deserves attention: judicial proceedings that generate raw material for deepfakes exploited outside the courtroom. Reasonable protective steps for high-profile video testimony deserve the same professional scrutiny.
Practically, this may mean: limiting video depositions to cases where they are genuinely necessary; hardening digital custody of recordings; updating protective order language to address synthetic media; and preparing crisis response protocols on the assumption that a fabricated video could surface.
The record was once the safeguard. In the age of generative AI, it must also be secured.
This article does not necessarily reflect the opinion of Bloomberg Industry Group, Inc., the publisher of Bloomberg Law, Bloomberg Tax, and Bloomberg Government, or its owners.
Author Information
Sabrina Rose-Smith is a partner in Goodwin’s financial services litigation and consumer financial services litigation practices.
Elizabeth Tucci is counsel at Goodwin, where she focuses her practice on government investigations, enforcement matters, and regulatory strategy.