ANALYSIS: When AI Prompts Are Evidence, Should Old Rules Apply?

July 17, 2025, 2:56 PM UTC

In a recently filed defamation suit in Delaware Superior Court, filmmaker and political commentator Robby Starbuck alleges that Meta is responsible for harmful statements generated by its AI system. Unlike voice assistants such as Amazon’s Alexa, which retrieve fixed answers, Meta AI—a generative tool built on Meta’s Llama series of large language models—produces original text in response to user prompts, shaping its output based on language patterns and perceived intent. The result is content that feels authored, although no human wrote it.

So far, the case itself follows familiar defamation contours. But what makes this case legally provocative goes beyond ...
