YouTube is trying to get ahead of deepfakes amid persistent uncertainty over the extent of online platforms’ liability for maintaining damaging and potentially illegal content.
The initiative dovetails with congressional efforts such as the NO FAKES Act—a bipartisan, bicameral bill backed by YouTube—to combat deepfakes’ capacity to amplify deception and exploitation without also censoring constitutionally protected parody and satire.
The legal landscape those efforts create will affect how vulnerable potential deepfake targets are and will implicate free speech principles, just as Cox Communications Inc.'s March US Supreme Court win did for internet service providers in the copyright liability context.
Social media platforms often end up as de facto arbiters of various rights in massive public digital spaces when legislation lacks definition or litigation is impractical. So far there’s no law directly addressing non-pornographic deepfakes, unlike copyright—another area YouTube tackled with an automated flagging tool, Mark S. Lee of Rimon PC said.
“YouTube was heavily encouraged to create its copyright tool by the Digital Millennium Copyright Act” safe harbor for platforms that take certain steps to fight infringement, he said. “There’s no such legal encouragement for the detection tool” for deepfakes, he said, so YouTube “will have to do some work and call balls and strikes when claims are made.”
The lack of clarity puts online platforms in a bind as they struggle to contain harmful AI videos. Gray areas and obsolete legal frameworks leave gaps for deepfakers to exploit, as lawsuits can be impractical if damages or deepfakers’ assets are too low or otherwise out of reach.
Meanwhile, any takedown system risks weaponization against criticism or parody. Electronic Frontier Foundation legal director Corynne McSherry said YouTube’s tool has already led to improper takedowns and demonetization for legitimate content.
Anna Naydonov of White & Case LLP warns against leaving in place a legal scheme that delegates constitutional questions to tech giants.
“I don’t think we should be in a position where these tech platforms are trying to figure out if something is criticism or First Amendment-protected parody,” she said.
Legal Patchwork
Despite the lack of an overarching federal ban, deepfakes can violate several distinct areas of the law.
The right of publicity—protected by a patchwork of state laws and common law—addresses unlicensed commercial use of a persona to sell a product or service. Other deepfakes that are designed to deceive align more with defamation and privacy law.
Social media platforms’ legal obligations may be inconsistent even across the different types of deepfakes.
Section 230 of the federal Communications Decency Act, for example, exempts them from liability for user content, including defamation. But the shield explicitly excludes IP, and courts disagree as to whether that includes publicity rights.
The US Court of Appeals for the Third Circuit said the law didn’t protect Facebook, but the Ninth Circuit held “intellectual property” in Section 230’s exception only encompasses federal IP law.
The platforms’ entertainment industry partnerships likely incentivize them to do more than the bare minimum against deepfakes, attorneys said.
“It’s the safe move, and there’s been enough case law for YouTube not to go there,” Geoffrey Lottenberg of Berger Singerman LLP said. “But beyond that, who’s buttering their bread more? Is it the actual celebrities or the deepfakers?”
The decision-making process would in theory have to balance First Amendment principles surrounding mimicry against broadly objectionable deepfakes, he said.
“The risk to YouTube is the legitimate content creators who may be in the parody, satire, or fair use space: You have to balance discourse, creativity against name, image and likeness rights,” Lottenberg said.
Protection Versus Censorship
YouTube keeps its exact criteria for takedowns confidential to avoid providing deepfakers a blueprint to toe—but not cross—the line. But those incentives could change if Congress enacts clarifying legislation.
The NO FAKES Act, S.1367/H.R.2794, introduced last year by a bipartisan coalition, currently has 13 cosponsors in the Senate and 10 in the House.
The bill, which mimics aspects of state right of publicity laws, is geared directly at deepfakes, defined as “digital replicas” of the “voice or visual likeness of an individual.” It incorporates exceptions for news, commentary, scholarship, satire, parody, and fleeting use.
But EFF and other groups have raised alarms over the bill, which shields platforms that remove material upon receiving a complaint. Without a pre-litigation path to restore the material or protect platform judgment calls, the bill would “inevitably lead to heckler’s vetoes and other forms of over-censorship” by people who want content removed for illegitimate reasons, the group said in a June 2025 article.
A Senate aide said an updated version of the bill that addresses those and other concerns will be introduced next week. The new bill introduces a counternotice system for contesting takedowns as targeting non-deepfakes or free speech.
The new version has backing from a broad array of stakeholders, the aide said.
The updated version “is improved, but still rife with problems,” McSherry said in an email. The penalty for false notices “lacks teeth,” as the sender must only operate “in good faith,” and the bill remains “a recipe for over-broad enforcement that will sweep up all kinds of lawful content,” she said.
In any case, as Congress, platforms, and users continue to play catch-up on cultivating and applying a framework to address the powerful new tool, the technology continues to march forward.
“What’s happening now is the technology is moving so fast and getting so good, and the law is not,” Christian E. Mammen of Womble Bond Dickinson said.