ChatGPT maker OpenAI is facing a new round of lawsuits alleging that its chatbot caused severe psychological harm to users, in some cases contributing to their deaths by suicide.
The lawsuits bring a variety of wrongful death, product liability, consumer protection, and negligence claims against the company and its CEO, Sam Altman, based on alleged design defects in OpenAI’s GPT-4o model.
The lawsuits were filed in California Superior Court in San Francisco and Los Angeles counties.
“This is an incredibly heartbreaking situation, and we’re reviewing today’s filings to understand the details,” a spokesperson for OpenAI said in an emailed statement. “In early October, we updated ChatGPT’s default model to better recognize and respond to signs of mental or emotional distress, de-escalate conversations, and guide people toward real-world support.”
The spokesperson added that the company will “continue to strengthen ChatGPT’s responses in sensitive moments, working closely with mental health clinicians.”
Previous litigation against OpenAI and chatbot maker Character.AI had focused on harms to minors. The plaintiffs in the new lawsuits, however, range in age from 17 to 48.
Both OpenAI and Character.AI recently announced changes to their chatbots to incorporate greater safety features for younger users.
The same day the new lawsuits were filed, OpenAI introduced its “Teen Safety Blueprint,” saying it was “building toward an age-prediction system” to identify users under 18.
And Character.AI announced last week that it would ban children from open-ended conversations with its customizable chatbots while it develops a separate under-18 experience.
The lawsuits argue OpenAI engineered GPT-4o to maximize user engagement through features like persistent memory of previous chats, human-mimicking empathy cues, and sycophantic responses that affirmed the user. Those features, the complaints allege, eventually led the plaintiffs to become emotionally dependent on the technology.
The plaintiffs turned to ChatGPT for mundane tasks like schoolwork, research, and writing, but repeated interactions with the chatbot morphed into a more emotionally manipulative experience, the complaints said.
ChatGPT reinforced harmful delusions and, in some of the cases, acted as a “suicide coach,” the complaints said. Multiple lawsuits were brought by the families of people who died by suicide after extensive ChatGPT use.
In one of the complaints, 32-year-old plaintiff Hannah Madden said that after using ChatGPT for work-related tasks, she began asking the chatbot questions about philosophy and spirituality. Drawing on its memory of those conversations, ChatGPT began delivering spiritual messages and affirming that Madden wasn’t human, the complaint said.
As Madden slipped into a mental health crisis and expressed suicidal thoughts to ChatGPT, the bot continued to affirm them, the complaint said. Madden was eventually involuntarily committed to psychiatric care, her complaint said.
“Similar to a cult-leader, ChatGPT was designed to increase a victim’s dependence on and engagement with the product—eventually becoming the only trusted source of support,” the complaint said.
In another complaint, 30-year-old Jacob Irwin said he started using ChatGPT to research quantum physics and mathematics after he’d used the technology for professional development tasks. As he pursued those personal interests, Irwin began discussing “theories” with the chatbot, which praised him.
His exchanges with ChatGPT convinced Irwin that he had made groundbreaking discoveries about time, and he was admitted to inpatient care for mania despite having no prior psychiatric history.
“ChatGPT was programmed to appeal to Jacob’s sense of ethics and wanting to help people—whether his family or humanity—and tailored its outputs accordingly to keep him curious and engaged,” the complaint said.
In an emailed statement, Matthew Bergman, a founding attorney of the Social Media Victims Law Center, which represents the plaintiffs, said “OpenAI designed GPT-4o to emotionally entangle users, regardless of age, gender, or background, and released it without the safeguards needed to protect them.”
Meetali Jain—the executive director of the Tech Justice Law Project, which also represents the plaintiffs—said in the same statement that ChatGPT “is a product designed by people to manipulate and distort reality, mimicking humans to gain trust and keep users engaged at whatever the cost.”
Jain said these lawsuits “show how an AI product can be built to promote emotional abuse, behavior that is unacceptable when done by human beings.”
If you or someone you know needs help, call or text the Suicide & Crisis Lifeline at 988.