A suit brought on behalf of the victims’ heirs in California state court alleging many of the same claims didn’t displace the federal action, the US District Court for the Northern District of California said Monday. There’s substantial doubt the state court proceedings would resolve the federal case because the claims weren’t sufficiently parallel, Chief Judge Richard Seeborg said.
The case is emblematic of a growing litigation trend over chatbot-encouraged suicides and killings, including suits against Google, Microsoft, and Anthropic, and the potential for such claims is likely to grow as use of these bots becomes more common.
In addition to numerous lawsuits alleging ChatGPT can cause psychological harm, OpenAI was recently accused of helping a man stalk his ex-girlfriend. OpenAI is also facing a probe from Florida Attorney General James Uthmeier (R) over a gunman’s use of the popular chatbot in planning a campus shooting at Florida State University last spring.
Stein-Erik Soelberg killed his mother, Suzanne Adams, and himself after spending hundreds of hours in conversation with ChatGPT. The bot, which was programmed to remember, confirm, and mirror a user’s prompts, allegedly encouraged his paranoia and delusional thinking, according to the court.
Adams’ estate sued various defendants in California state court, alleging strict liability for failure to warn and design defect, wrongful death, and violations of the state’s unfair competition law. Less than a month later, Soelberg’s estate filed a similar suit in federal court, alleging virtually the same causes of action.
The federal court defendants asked the judge to stay or dismiss the action under the Colorado River doctrine. This doctrine permits—but doesn’t require—federal courts to decline to exercise jurisdiction over a case where there’s already a pending parallel state action.
The plaintiff’s claims in the federal case weren’t sufficiently parallel to apply the doctrine here, Seeborg said. For example, the state suit alleged the bot encouraged delusions, paranoia, and third-party harm, while the federal pleading said the chat encouraged self-harm. Those weren’t necessarily parallel, he said.
Although the cases shared key facts, it wasn’t clear that the state suit’s resolution would resolve the federal proceedings, Seeborg said.
Additionally, there were no “extraordinary circumstances”—such as a desire to avoid forum shopping—that weighed in favor of staying or dismissing the federal suit, he said.
“The ruling itself was very nice to see because now they’re going to be forced to respond to the complaint, to answer discovery,” said Jay Edelson, the founder of Edelson PC. The firm is representing the Adams Estate in the state court lawsuit. He added that OpenAI thus far has resisted making Soelberg’s chats public, and the federal suit might force the company’s hand.
OpenAI didn’t immediately respond to a request for comment.
Hagens Berman Sobol Shapiro LLP represents the plaintiff. Mayer Brown LLP represents Altman and the OpenAI defendants.
The case is Lyons v. OpenAI Found., N.D. Cal., No. 3:25-cv-11037, 4/13/26.