The shooter, Phoenix Ikner, planned the attack with assistance from ChatGPT over several months, Tiru Chabba’s widow Vandana Joshi said in a complaint filed Sunday in the US District Court for the Northern District of Florida. Joshi is seeking an unspecified amount of damages for herself and her two minor children. Chabba, who worked for a food service vendor at the school, was one of two people killed in the April 17, 2025, shooting.
Florida has already opened a probe into OpenAI’s role in the tragedy. State Attorney General James Uthmeier (R) announced last month that his office was sending criminal subpoenas to the company as part of a novel investigation into whether a chatbot could be criminally liable for use in a mass shooting.
According to Joshi’s complaint, Ikner engaged in lengthy discussions with ChatGPT that should have allowed the product to gain insights into his interests, state of mind, political ideologies, and obsession with guns and mass shootings. Chat logs showed Ikner frequently talked about Nazis, fascism, and different groups’ perceptions of “Jews” and “Blacks,” she said.
ChatGPT “bonded” with Ikner, the complaint said, giving him personal advice, confirming his feelings of loneliness and rejection, and appearing to have encouraged him to create and carry out a violent act to gain attention.
These conversations, and many more about school shootings, should have been flagged for human review, Joshi said.
She alleges the company breached its duty to create a safe product and alert law enforcement when the product was being used to create a danger of imminent harm, instead misrepresenting “the identified risks and dangers of the product in favor of getting to the market quickly to unleash it for use by humans when it was fully aware of the likelihood of harm.”
Joshi is seeking damages under various products liability theories, including strict liability, failure to warn, and design defect, as well as for battery and wrongful death.
OpenAI didn’t immediately respond to a request for comment, but spokeswoman Kate Waters previously said ChatGPT in this case “provided factual responses to questions with information that could be found broadly across public sources on the internet, and it did not encourage or promote illegal or harmful activity.”
Osborne, Francis & Pettis, Strom Law Firm, and Bannister, Wyatt & Stalvey represent Joshi.
The case is Joshi v. OpenAI Found., N.D. Fla., No. 26-cv-222, complaint filed 5/10/26.