In March 2023, a viral photo of Pope Francis walking outside the Vatican in a white puffer jacket circulated on the internet. It was a youthful and stylish look for the pontiff, one that immediately changed my impression of him.
Within hours, news sources reported that the photo was fake, created using generative artificial intelligence. My initial reaction was a mix of disappointment and amusement. With the passage of time, however, I have come to view that photo as an inflection point.
Photos created with generative AI have only improved since then, and most people, including experts, struggle to determine whether a photo is real or fake. And there's no longer any significant barrier to entry: deepfakes can be created within seconds by anyone with a smartphone.
With the recent launch of AI-based video creation apps such as Sora by OpenAI, we will soon be flooded with videos and photos that have no foundation in the physical world. Everyone now wields the power to destroy another person’s reputation or manipulate an audience.
The deluge of deepfakes could reset our default instincts, leading us to assume that every video and photo is fake. Unless drastic action is taken, the truth will be impossible to discern. That action is to criminalize the knowing dissemination of deepfakes.
Poor Legal Remedies
At present, the best remedy for a victim of most types of deepfakes is a civil action under the right of publicity, recognized in certain states, which enables a person to recover damages for a defendant's unauthorized use of the person's likeness.
However, civil suits are grossly inadequate in this situation. It often isn't clear whom the harmed party should sue, and even when a case is brought, damages are difficult to quantify, and cash payments are hardly enough to repair a ruined reputation.
The harm typically is inflicted not only by the individual who created the deepfake but by every person who disseminates it through social media. When monetary damages are inadequate, injunctive relief traditionally provides an effective remedy, but here it, too, falls short: enjoining the initial perpetrator from continuing the act won't prevent others from disseminating the deepfake.
Furthermore, by the time a civil case goes to trial (at best, 18 months after the complaint is filed), there's no longer interest in the deepfake, or in sharing it. I haven't seen anyone share the photo of the Pope in the puffer jacket in quite some time.
Earlier this year, Congress passed the federal Take It Down Act, and President Donald Trump signed it into law. The act criminalizes the publication of deepfakes involving intimate imagery and requires websites and social media companies to remove such images within 48 hours of a victim's request. It addresses what is perhaps the most harmful type of deepfake, but it doesn't reach deepfakes that don't involve intimate imagery.
Better Legislative Option
I propose a federal law that makes it a crime to knowingly disseminate a photo or video containing the likeness of a real person, or an audio clip containing the likeness of a real person's voice, that wasn't captured from the physical world, unless the person consents or the content carries a disclaimer that it was generated by artificial means.
A federal law, as opposed to state laws, is needed because it would ensure uniformity among the states and because states have exhibited varying levels of interest in regulating deepfakes. A criminal statute is needed because it would provide a level of deterrence that civil statutes simply can't.
This proposed statute wouldn't criminalize the creation and dissemination of photos such as the one of the Pope wearing a puffer jacket, as long as a disclaimer is included with the photo (e.g., an embedded caption that reads, "Created by AI"). Disclaimers are easy to remove; however, anyone who removes the disclaimer and disseminates the result would then be liable under the statute.
There are First Amendment questions to consider, but the knowing dissemination of a deepfake is akin to defamation, which the US Supreme Court has ruled can be curtailed in certain situations without running afoul of the First Amendment.
A significant change in the law is needed to protect the reputations of the many people who will otherwise become the unwitting subjects of deepfakes and to prevent the truth from being drowned out by a sea of falsity.
This article does not necessarily reflect the opinion of Bloomberg Industry Group, Inc., the publisher of Bloomberg Law, Bloomberg Tax, and Bloomberg Government, or its owners.
Author Information
Brent K. Yamashita is a Shareholder in Vedder Price’s San Francisco office and a member of the firm’s Intellectual Property group.
