Bloomberg Law
July 9, 2021, 8:00 AM

Reputation Management and the Growing Threat of Deepfakes

Carolyn Pepper
Reed Smith
Peter Raymond
Reed Smith
Jonathan Andrews
Reed Smith
Talia Fiano
Reed Smith

Deepfakes and “shallowfakes” are a growing and direct threat to the accuracy of information in the digital environment and to individual reputations, and that threat will only increase as the technology advances and creates more opportunities for misuse.

Deepfakes can take the form of face reenactment (where software manipulates facial features), face generation (where a new face is created that does not correspond to any specific individual), face swapping (where one person’s face is swapped with another’s), and speech synthesis (where voices are re-created). “Shallowfakes” are similar but involve more basic editing techniques.

While court decisions concerning deepfakes are limited, previous artificial intelligence litigation and legal commentary suggest that court systems (particularly the U.S. court system, which is often where these issues are first tested) may soon be handling a flood of litigation involving deepfakes.

Although some deepfakes are obvious parodies (such as a 2020 deepfake of Richard Nixon announcing the failure of the 1969 Moon landing, or the use of deepfakes of Queen Elizabeth II by a U.K. television channel in a 2020 “Alternative Christmas Message”), their increasingly convincing nature means that this technology can be used for more troubling purposes—including damaging the reputation of public and private figures, and more widely spreading disinformation.

Deepfakes are commonly used to manipulate pornographic material (for example, in revenge porn) and for political purposes. Their existence can also be relied upon by people who wish to deny that unaltered images or audio are genuine. Deepfakes and similar technology also have significant implications for the courts: in a U.K. child custody dispute last year, it was alleged that readily available technology had been used to edit an audio recording presented as evidence.

Potential Causes of Action to Protect Against Deepfakes

The larger question is whether the law can provide a solution to the reputational and other harms caused by deepfakes and similar technologies.

Deepfakes appear to have made their first appearance in U.S. case law in 2019 (see In re S.K., discussing deepfakes in the context of child pornography). However, to date, there are no published court decisions in the U.S. or the U.K. that directly discuss whether, for instance, intellectual property rights can be used as a weapon against deepfakes.

The potential causes of action that could be brought in connection with deepfakes are numerous, including claims for invasion of privacy, defamation, violation of the right of publicity, and copyright infringement. In the U.K., passing off claims could also provide a solution, and the criminal law could be used where, for example, the material constitutes revenge porn or amounts to harassment.

Privacy

Privacy claims appear to be a popular route in situations involving deepfakes, making up almost all currently pending U.S. cases. Most of these claims are brought under state privacy statutes, particularly those of California and Illinois. This should come as no surprise, as certain state privacy statutes provide broad protections for individuals and entities.

Much of the currently pending U.S. deepfakes litigation concerns claims relating to the use and storage of biometric personal data that, it is suggested, could be used to create deepfakes.

Defamation

A 2014 case from the Western District of Virginia provides some insight into how defamation claims (the traditional legal remedy for reputational damage) could be brought in connection with deepfakes. The plaintiff brought a claim for defamation after the defendant distributed photographs identifying the plaintiff as a “porn star” and depicting the plaintiff (in an altered photo) in a sexually explicit manner.

The court denied the defendant’s motion for summary judgment on unrelated grounds, but discussed the difficulty of proving that an altered image was intended to be a statement of fact. This concern has been echoed by legal commentators, who have noted that while defamation seems like a natural cause of action for a deepfakes claim, such claims are unlikely to succeed in the U.S. given the requirement that the plaintiff show the image or video was intended as a statement of fact.

Despite this hurdle, claimants have begun bringing defamation claims in connection with deepfakes (or related artificial intelligence) in the U.S. For instance, in 2020, Donald J. Trump for President Inc. brought a defamation claim over a statement that it argued had been synthetically altered and included in a television advertisement broadcast by the defendant. And this year, plaintiffs brought a defamation claim against Ryan Seacrest Productions Inc., claiming that the defendants had created “cheap fakes,” using technology to make digitally created human faces, bodies, or voices for use in production scenes.

It seems likely that the U.K., which is often seen as a claimant-friendly jurisdiction for libel claims, will also start to see defamation claims arising from deepfakes.

Right of Publicity

A 2020 U.S. Court of Appeals for the Third Circuit case involved artificial intelligence that seems almost indistinguishable from deepfakes. The plaintiff sued a videogame producer for using his likeness as one of the game’s characters, alleging in his complaint that the defendant had done so in violation of his right of publicity.

The defendant argued that its work enjoyed First Amendment protection. The court found in favor of the defendant, reasoning that the plaintiff’s likeness had been “so transformed that it has become primarily the defendant’s own expression.”

This outcome suggests that plaintiffs will face an uphill battle in bringing right of publicity claims over works involving deepfakes, as they will bear the burden of showing that the deepfake was not so transformed as to create a new work.

In the U.K., passing off is sometimes used in claims which would fall under the right of publicity in the U.S. There does seem to be a reasonable prospect of success for a claimant in the U.K. in cases where the deepfake is convincing enough to cause confusion.

Copyright

Commentators have opined that copyright claims involving deepfakes would be difficult to prosecute successfully in the U.S.: because a deepfake is inherently an alteration of existing material, the fair use defense could readily be applied.

In the U.K., the more tightly defined fair dealing defenses may prove less helpful to defendants. As things stand, there is a lack of case law testing whether copyright claims will succeed in the deepfake space.

Much is still unknown about how successful these various causes of action will be in the deepfake space, and we will have to wait and see how courts deal with this growing problem. In the meantime, a great deal of uncertainty exists as to how those affected by deepfakes can restore damaged reputations.

This column does not necessarily reflect the opinion of The Bureau of National Affairs, Inc. or its owners.

Author Information

Carolyn Pepper is a partner in Reed Smith’s London office. She focuses on commercial litigation and disputes, media and intellectual property matters, including social media, trade mark, copyright, breach of confidence, libel, privacy, and advertising.

Peter Raymond is a partner with Reed Smith in New York. He focuses on intellectual property and commercial litigation and represents companies on disputes related to false advertising, unfair competition, copyright and trademark infringement, trademark dilution, and invasion of rights of privacy and publicity.

Jonathan Andrews is an associate with Reed Smith in London. He advises clients on a range of transactional and litigation matters, from due diligence to copyright and contractual disputes, and advises on diversity and inclusion strategy.

Talia Fiano is an associate with Reed Smith in New York. She has worked on a wide range of complex matters in state and federal courts, including contract, labor and employment, trade secret, trademark and copyright disputes.