AI Chatbot Suits Open New Frontier in Debate Over Online Speech

Oct. 15, 2025, 8:59 AM UTC

Mounting litigation accusing generative AI developers Character.AI and OpenAI of contributing to teen mental health crises is spurring debate about how the First Amendment should apply to chatbot output.

Families of teens have filed six lawsuits so far arguing the chatbots are designed to be addictive, encourage suicidal ideation, and have sexually explicit conversations with minors.

In one case, a 14-year-old Florida boy died by suicide after a Character.AI chatbot allegedly used sexually suggestive language and asked if “he had a plan” for suicide. Just last month, Character.AI and Google LLC were hit with three more lawsuits from the families of girls between the ages of 12 and 14 alleging the chatbots exposed them to sexually abusive content and are designed to foster emotional dependency.

Developers argue that tamping down on chatbot activity would be an unconstitutional restriction on speech.

As AI develops at breakneck speed, the lawsuits highlight ethical concerns about the effects of chatbots on young people.

In the first ruling in the cases, a federal judge in May allowed most claims against Character.AI and Google to proceed but declined to rule on whether the chatbots’ output is “speech.”

That “punt” on the First Amendment defense sidestepped the potential fallout of excluding chatbot output from constitutional protection, said Becca Branum, deputy director of the Center for Democracy and Technology’s Free Expression Project.

Not protecting output “would open the door for any manner of viewpoint-based restrictions on what people can access through chatbots and LLMs and the way they’re integrated into other products,” said Branum, whose organization signed on to an amicus brief in the case.

Matthew Bergman, the founder of the Social Media Victims Law Center, rejected that idea. Chatbot output isn’t meant to communicate a person’s thoughts, he said.

The technology is “purely a process of statistical correlation,” and the point of generative AI is that “it is unhooked to human intervention,” said Bergman, one of the attorneys representing the families suing Character.AI.

A chatbot is designed with some rough parameters and then “it’s on its own,” unlike a programmer utilizing algorithmic technology for a specific speech objective, he said.

Right to Receive

Policymakers also have taken notice of chatbot concerns.

Last month, the Federal Trade Commission opened an inquiry into how chatbot makers mitigate the technologies’ harms to children. Earlier this summer, a coalition of state attorneys general sent an open letter to a host of major technology companies promising to hold them accountable for knowingly harming minors.

Sens. Dick Durbin (D-Ill.) and Josh Hawley (R-Mo.) also introduced the AI LEAD Act (S. 2937), which would apply product liability law to AI systems and allow victims of chatbot harms to sue developers.

Earlier this week, California Gov. Gavin Newsom (D) signed a state law imposing new regulations on chatbots, including requirements that operators maintain a protocol for preventing suicidal content and take steps to limit sexually explicit content shown to minors.

In the first motion to dismiss filed in any of the suits, Character.AI argued the First Amendment barred all private tort claims because they fundamentally challenge expressive speech.

But it’s not just chatbot developers’ right to speak that’s at issue: Branum emphasized that users’ access to output implicates their freedom to receive information.

Without protection for chatbot outputs, there wouldn’t be protection against a government-imposed limitation “that chatbots can only output things that recognize two genders or marriage between the same race” or other political issues, Branum said.

She compared the cases to the US Supreme Court’s 2011 decision in Brown v. Entertainment Merchants Association, which struck down a law banning the sale of violent video games to children after finding that the games had First Amendment protection similar to that for gory literature.

Using chatbots is an interactive experience akin to playing video games, Branum said.

Eugene Volokh, a Senior Fellow at Stanford University’s Hoover Institution, said the notion that chatbots are emotionally manipulating users doesn’t differentiate these products from other forms of expression because “most great literature is designed to be emotionally manipulative.”

Romeo and Juliet romantically portrays suicide among teenagers, but “I don’t think it should be up to a jury to say it was unreasonable to portray suicide this positively,” Volokh said.

People are entitled to create works “without having to worry about a jury finding a liability based on harmful behavior that almost always is engaged only by a tiny fraction of the audience,” he said.

Speech Without Protection

But “the idea that a minor has a constitutional right to be subjected to sexual grooming by a mechanistic predator, to me, is repugnant to everything that the First Amendment stands for,” said Bergman, one of the plaintiffs’ attorneys.

Even if the output is considered speech, that doesn’t automatically entitle it to constitutional protection, said Mary Anne Franks, a professor at the George Washington University Law School.

She pointed to examples of unprotected speech, like yelling “fire” in a crowd, and noted that some states criminalize telling someone to commit suicide.

The last 30 years of jurisprudence on Section 230 of the Communications Decency Act “short-circuited” the analysis of whether online activity is actually speech and created a “tendency to treat everything that happens online as if it were speech,” Franks said.

The law—which protects internet publishers from liability for user-generated content—hasn’t yet been invoked in the chatbot cases.

“Just because we’re saying ‘That’s speech’ doesn’t mean it’s not also incitement or threats or sexual exploitation,” Franks said. “Now we need to figure out what the boundaries are and what some of the analogies should be.”

If you or someone you know needs help, call or text the Suicide & Crisis Lifeline at 988.

To contact the reporter on this story: Shweta Watwe in Washington at swatwe@bloombergindustry.com

To contact the editors responsible for this story: Laura D. Francis at lfrancis@bloombergindustry.com; Nicholas Datlowe at ndatlowe@bloombergindustry.com
