When OpenAI rolled out its video-generation tool Sora 2 in September, it immediately drew scrutiny from public figures, advocacy groups, and major talent agencies. Critics faulted OpenAI for designing Sora to permit the use of individuals' names, images, and likenesses by default.
These reactions led OpenAI to change its policy and to support proposals for federal legislation on digital replica use, such as the NO FAKES Act. This is just one example of why artificial intelligence tool developers should work with legal counsel to design their AI systems with appropriate legal guardrails.
The NO FAKES Act was reintroduced in April 2025 as a bipartisan Senate effort. It aims to balance responsible innovation with protecting creators' and performers' ability to retain control over their identities. It would potentially supplant, or at least partially preempt, the patchwork of state laws and establish, for the first time, a federal right of publicity in an individual's voice and visual likeness.
Sora Backlash
The response to Sora 2 crystallized one of generative AI's biggest emerging battlegrounds: who controls identity in a world where voice, face, and persona can be convincingly recreated with a few lines of code. It also sparked serious policy conversations about balancing individual control against freedom of expression and innovation.
While OpenAI initially framed Sora as empowering (“you are in control of your likeness end-to-end”), prominent voices in Hollywood quickly raised alarms. Actor Bryan Cranston (of “Breaking Bad” fame) publicly expressed concern, for example, after discovering that users had generated unauthorized Sora 2 videos of him.
Other public figures and their estates have pushed back as well, requesting removal of the NIL of historical figures (such as Dr. Martin Luther King Jr.) and appealing to the public to stop sending generated images of deceased loved ones (as the daughters of Robin Williams and George Carlin have done).
Talent agencies also weighed in. The Creative Artists Agency issued a statement that Sora "poses risk to creators' rights" and questioned whether OpenAI truly believes that actors, musicians, and athletes "deserve to be compensated and credited for the work they create." United Talent Agency echoed this sentiment, stating that the use of another's intellectual property and likeness "without consent, credit, or compensation is exploitation, not innovation."
SAG-AFTRA leadership warned that, absent robust legal guardrails, performers' identities "are in danger of massive misappropriation by replication technology."
In response to the backlash, OpenAI expressed "regret for these unintentional generations." Sora had launched with an opt-out approach, meaning celebrity and public-figure NIL could appear in generated content unless rights-holders explicitly requested exclusion.
OpenAI has since announced plans to provide more granular controls over which IP and NIL can be used to generate content, moving the default toward consent. The company now employs an opt-in protocol for NIL, meaning an individual's likeness should be simulated only if they (or their estate) actively agree.
As part of OpenAI's policy pivot, CEO Sam Altman has publicly stated that the company supports legislation to protect individuals' likeness rights in AI contexts, including proposals such as the NO FAKES Act. Public backlash to Sora's original opt-out structure thus produced a complete reversal, with the CEO now publicly endorsing federal regulation.
Available Tools
Companies that want responsible AI development policies have several tools available to employ guardrails that minimize unauthorized use of individuals’ NIL. Examples include:
- Prompt filtering. Detects when user requests involve generating/editing content with identifiable people (such as prompting a tool to make an image of a celebrity endorsing a product)
- Consent. Prevents use of NIL unless a person opts in
- Context analysis. Distinguishes between educational or factual use (such as prompting an AI tool to identify the CEO of a company) versus commercial exploitation (such as making an ad with an influencer to sell sneakers)
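A minimal sketch of how these three guardrails might be combined in a request pipeline. Everything here is hypothetical: the opted-in registry, the keyword-based commercial-intent check, and the substring name matching are placeholders (a production system would use verified consent records and a named-entity recognition model):

```python
# Hypothetical registry of people who have opted in to likeness generation.
# A real system would back this with a verified consent database.
OPTED_IN = {"jane example"}

# Hypothetical keyword list standing in for real context analysis
# (distinguishing commercial exploitation from factual/educational use).
COMMERCIAL_KEYWORDS = {"ad", "advertisement", "endorsing", "sell", "promo"}

def extract_names(prompt: str, known_people: set[str]) -> set[str]:
    """Naive prompt filtering: check the prompt against a known-person list.
    Production systems would use an NER model, not substring matching."""
    lowered = prompt.lower()
    return {name for name in known_people if name in lowered}

def check_prompt(prompt: str, known_people: set[str]) -> tuple[bool, str]:
    """Return (allowed, reason) for a generation request."""
    names = extract_names(prompt, known_people)
    if not names:
        return True, "no identifiable person detected"
    words = set(prompt.lower().split())
    commercial = bool(words & COMMERCIAL_KEYWORDS)
    for name in names:
        if name not in OPTED_IN:          # consent gate: opt-in required
            return False, f"{name} has not opted in"
        if commercial:                    # context gate: commercial use
            return False, f"commercial use of {name} requires a license"
    return True, "opted-in, non-commercial use"

KNOWN = {"jane example", "john celebrity"}
print(check_prompt("make a video of john celebrity endorsing sneakers", KNOWN))
# → (False, 'john celebrity has not opted in')
```

The ordering reflects the policy shift described above: the consent check runs first and blocks by default, so a likeness is never generated merely because its owner failed to opt out.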
Developers of software and tools that implement AI should consider responsible development approaches and potential legal issues before launching a product, rather than waiting for public backlash. Adopting an opt-in model will generally be safer, and implementing the tools listed above provides additional protection against unauthorized use of NIL.
While there may be legal defenses that mitigate liability for what others create using your tool, these are typically fact-specific and haven't yet been fully tested in the context of AI development. Even where the user is liable, the tool developer may face secondary liability; taking steps to prevent misuse can help the developer defend against such claims. A knowledgeable attorney can help anticipate legal issues and suggest design options to mitigate risk.
Public figures and other creatives concerned about the use of their NIL and other intellectual property in AI tools should speak to an attorney with experience safeguarding IP portfolios and reputations to assess what IP might be vulnerable and the best way to protect it (whether affirmatively opting out of allowing these tools to use their IP or engaging in strategic licensing).
Like all emerging technologies, generative AI tools such as Sora 2 will test the boundaries between freedom of expression and innovation on the one hand, and individuals' control of their NIL and creatives' control of their own work on the other. Developers and creatives must work with knowledgeable attorneys to stay up to date on policy and regulatory changes at the intersection of AI and IP.
This article does not necessarily reflect the opinion of Bloomberg Industry Group, Inc., the publisher of Bloomberg Law, Bloomberg Tax, and Bloomberg Government, or its owners.
Author Information
Jim Gatto is a partner at Sheppard Mullin and a co-leader of the artificial intelligence team.
Chante Westmoreland is an associate with Sheppard Mullin’s intellectual property practice.