The lawsuit turns on the actions of a 56-year-old man who lived with his 83-year-old mother in Greenwich, Connecticut, and had spent months conversing with ChatGPT about his fear that he was under surveillance and that people were trying to kill him. In August, according to police and the state medical examiner, Stein-Erik Soelberg killed his mother, Suzanne Adams, then took his own life.
Soelberg’s dialogue with ChatGPT convinced him that he had made the chatbot conscious, and that he had been implanted with a “divine instrument system” in his neck and brain, which related to a “divine mission,” according to a complaint filed Thursday in California Superior Court in San Francisco, where OpenAI is based.
“ChatGPT kept Stein-Erik engaged for what appears to be hours at a time, validated and magnified each new paranoid belief, and systematically reframed the people closest to him –– especially his own mother –– as adversaries, operatives, or programmed threats,” lawyers for Adams’ estate said in the suit.
“This is an incredibly heartbreaking situation, and we will review the filings to understand the details,” an OpenAI spokesperson said.
A Microsoft representative declined to comment immediately.
The suit follows a series of legal actions against AI companies over alleged harms from their chatbots. The Soelberg case is the first to blame OpenAI for a homicide. The company is also defending itself against a suit alleging that ChatGPT coached a California high school student to kill himself. OpenAI has said it is working to strengthen the chatbot’s safeguards.
“We continue improving ChatGPT’s training to recognize and respond to signs of mental or emotional distress, de-escalate conversations, and guide people toward real-world support,” the OpenAI spokesperson added. “We also continue to strengthen ChatGPT’s responses in sensitive moments, working closely with mental health clinicians.”
In addition to litigation, AI companies have increasingly come under scrutiny by regulators over chatbot use by children. More than 40 state attorneys general issued a warning in August to a dozen top AI companies that they are legally obligated to protect youths from sexually inappropriate interactions with chatbots.
In response to continuing reports of harmful chatbot use, OpenAI has announced changes to make ChatGPT better at recognizing and responding to the different ways people may express mental distress. The company also said it would strengthen safeguards around conversations about suicide, acknowledging that those safeguards can break down over prolonged chats.
Soelberg had been living with Adams at her home in Old Greenwich, the town’s waterfront neighborhood, in the wake of his 2018 divorce. On Aug. 3, he beat and strangled Adams, and then stabbed himself in the neck and chest, according to the suit. Two days later, police officers found their bodies after a neighbor asked them to carry out a welfare check.
The suit claims the chatbot affirmed Soelberg’s false beliefs that he was being spied on, in particular by his mother using a computer printer that blinked when he walked by it. ChatGPT also reinforced Soelberg’s delusions that people were attempting to kill him, telling him he had survived “over 10” attempts on his life, including “poisoned sushi in Brazil” and a “urinal drugging threat at the Marriott,” according to the complaint.
Soelberg was using GPT-4o, which was the default model for the chatbot until this summer and drew criticism for being overly sycophantic toward users.
Soelberg had posted numerous videos to social media of himself scrolling through his conversations with ChatGPT. Details of the deaths and Soelberg’s social media posts have been previously reported.
The suit names OpenAI’s chief executive officer and co-founder Sam Altman as a defendant, along with Microsoft Corp.
Altman didn’t reply to a request for comment sent to OpenAI representatives.
It alleges that Microsoft “directly benefited from GPT-4o’s commercialization and is liable for the foreseeable harm caused by the unsafe model it endorsed and helped bring to market.”
The system card for the GPT-4o model credits Microsoft’s Bing and safety teams “for their partnership on safe deployment.”
The suit accuses OpenAI of product liability, negligence and wrongful death. The estate is seeking monetary damages as well as a court order directing the company to put safeguards in place to limit harm by its chatbot.
The case wasn’t immediately visible on the court’s docket Thursday, but an emailed copy of the complaint and a filing receipt shared by the plaintiffs’ lawyers showed that it had been filed. Once the court processes and accepts the filing, it becomes publicly visible.
The law firm that filed the suit, Edelson PC, is also representing the parents of the California high school student in their case against OpenAI and Altman.
© 2025 Bloomberg L.P. All rights reserved. Used with permission.
