Artificial intelligence is becoming one of the most prolific “speakers” on the planet, with large language models generating billions of statements daily.
But what happens when those statements turn out to be false and damaging? Can AI defame someone and, if so, who bears the blame? What First Amendment implications are at play? These questions are central to emerging litigation in the US and abroad.
Traditional search engines such as Google still hold more than 90% of the consumer search market share. But AI’s “one-right-answer” queries—to which it provides a direct, synthesized answer—is growing in popularity over a list of links.
Courts are likely to continue adapting traditional defamation law, built for human speech, to apply to stochastic text generators capable of hallucinating falsehoods at scale.
Defamation Law Basics
Under long-standing US defamation doctrine, a plaintiff must show a false and defamatory statement of fact, “publication” to a third party, fault, and damages. The framework presumes an identifiable human speaker. AI programs pose challenges for these elements.
Who is the “speaker”? When an AI program generates a provably false statement, courts must decide whether to treat the system’s owner, developer, or deployer as the legal publisher. If the AI is considered the company’s own product, Section 230 of the Communications Decency Act, which protects platforms from liability for user-generated content, may not apply. The AI company, in effect, becomes the “speaker.”
What is “fault”? Classic defamation law measures fault by the defendant’s mental state, namely, knowledge of falsity, reckless disregard for the truth, or negligence. Suffice it to say that algorithms aren’t capable of forming the requisite intent for liability.
Courts may ask whether the company acted recklessly in designing, training, maintaining, or deploying a model likely to generate false statements, or in failing to correct them after notice.
Proof of falsity and causation. AI hallucinations blur lines between fact and synthesis. Proving in court that an output/statement is as a matter of fact provably false and caused reputational harm requires technical transparency into how the model generated the statement. This presents a challenge for plaintiffs and courts, given current opacity in training data and prompt histories.
Enforcement Trends
Litigation in several states now tests these questions. Plaintiffs have filed suits alleging that AI chatbots fabricated criminal histories, invented defamatory quotes, or attributed extremist beliefs to them. Most of these cases have been dismissed or settled quietly, often before producing substantive rulings.
Still, several trends have emerged:
Narrowing of Section 230 immunity. Courts distinguish between hosting third-party content (which Section 230 immunizes) and generating (“contributed materially” to the creation of the content) it.
Negligence and product-liability analogies. Future plaintiffs may seek to frame false information provided by AI as a product-defect claim. They may assert that the system was defectively designed because it predictably generates harmful falsehoods.
Evolution of AI Discovery. Courts may soon require AI discovery in the form of logs showing training data provenance, moderation filters, and how complaints were handled to assess fault. One federal court has already required an AI company to preserve user logs.
Regulating AI
In 2024’s Moody v. NetChoice, LLC, ruling, the US Supreme Court reaffirmed that the First Amendment protects editorial decisions of online platforms, including decisions about how to display, promote, demote, or exclude third-party content. Whether those judgments are made by humans or algorithms was deemed irrelevant.
The court held that the act of “arranging and presenting others’ speech” carries the same constitutional protection newspapers and broadcasters enjoy.
By grounding the decision in classic editorial-freedom doctrine, the court created an early roadmap for defending AI-mediated publication decision and indicated an inclination to apply strict scrutiny to laws seeking to force “viewpoint neutrality” in curation or ranking.
Across the Atlantic, the emerging picture is also fluid. The UK’s Defamation Act 2013 and the EU’s forthcoming AI Act leave room for plaintiffs’ counsel to assert that automated systems cause reputational harm.
European regulators, however, emphasize duties of care and explainability rather than strict speaker fault. Future cases could evolve into a hybrid of tort and compliance enforcement.
Practical Implications
These developments point to a clear litigation lesson: The defense that “the model made a mistake” faces material challenges. Developers and deployers should be prepared to be treated as the responsible publisher of their systems’ statements.
Here are key steps companies can use to reduce real-world legal risk.
Pre-deployment red-teaming and model audits. Identify high-risk outputs, particularly those that generate personal profiles, summarize individuals, or answer questions about people or organizations.
Avoid potentially actionable content. AI platforms must be prepared to establish that they didn’t step beyond editorial discretion into actionable conduct.
Notice-and-takedown protocols. Establish rapid-response processes to investigate and correct false outputs once notified. Courts likely will view failure to act after notice as evidence of reckless disregard.
Provenance tracking. Maintain logs of prompts, outputs, and moderation actions. A detailed audit trail can be used to establish good-faith efforts and may mitigate punitive exposure.
Human oversight and disclaimers. Disclaimers alone are insufficient. But human-in-the-loop review for sensitive topics such as crime, health, or politics can be used to demonstrate diligence.
Contractual allocation of risk. Enterprise customers demand indemnities for defamation or misinformation claims. AI companies should evaluate their fact-specific risk-to-insurance-coverage and ensure insurance alignment accordingly. Media/content, tech E&O/professional, IP infringement, cyber, employment, and D&O insurance policies deserve a close look.
Looking Ahead
As AI systems increasingly intermediate public discourse, courts and regulators are converging on the time-tested core principles concerning duties of care.
Companies deploying generative models owe a duty of care to prevent foreseeable harm, including reputational injury. The doctrinal vehicle is likely to matter less than the underlying policy goal of ensuring that entities profiting from algorithmic “speech” also bear responsibility for its accuracy.
That means stronger internal governance, better model documentation, and more transparent correction mechanisms. Companies that invest early in these controls won’t only reduce litigation risk but also earn public trust.
This article does not necessarily reflect the opinion of Bloomberg Industry Group, Inc., the publisher of Bloomberg Law, Bloomberg Tax, and Bloomberg Government, or its owners.
Author Information
T. Markus Funk, a former federal prosecutor and conflict-deployed State Department lawyer, is a partner in White & Case’s litigation and white collar practice.
Hope Anderson is a California-based partner at White & Case’s privacy and cybersecurity practice.
Write for Us: Author Guidelines
To contact the editors responsible for this story:
Learn more about Bloomberg Law or Log In to keep reading:
See Breaking News in Context
Bloomberg Law provides trusted coverage of current events enhanced with legal analysis.
Already a subscriber?
Log in to keep reading or access research tools and resources.
