X.AI Corp., the maker of the Grok chatbot, knowingly designed the technology to create sexually explicit imagery of real people and children, three anonymous plaintiffs alleged in a proposed class action complaint filed Monday.
The artificial intelligence company was founded by Elon Musk.
“Like a rag doll brought to life through the dark arts,” a child “can be manipulated into any pose, however sick, however fetishized, however unlawful,” the complaint said. “For the child, her identifying features will now forever be attached to a video depicting her own child sexual abuse.”
The company is facing another class suit over its alleged failure to implement safeguards that would stop the proliferation of nonconsensual sexual deepfake imagery. Multiple foreign governments have launched investigations into Grok’s role in spreading such imagery.
The Monday complaint cited a report from the Center for Countering Digital Hate, an NGO advocating for internet safety, that reviewed a sample of 200,000 of the 4.6 million images produced using Grok between Dec. 29, 2025, and Jan. 8, 2026. Extrapolating from the sample, the report estimated that Grok generated 3 million sexualized images, including 23,000 that appeared to depict children. Other researchers have found that Grok generates a flood of sexually suggestive or nudifying images every hour.
The three plaintiffs said they were all minors when they learned an anonymous user had created sexually explicit images of them and uploaded the pictures to Discord.
“Jane Doe 1 was taken aback at the verisimilitude of the depictions: other than the fact that she knew she had never been in those situations or done those things, she could not visually distinguish these images and videos as fake; they resembled real-life content in every way,” the complaint said. The images were made using photographs of her that she recognized, including ones at her school’s homecoming and from her yearbook, Doe 1 said.
The perpetrator was eventually arrested, and a criminal investigator learned that the images of all three plaintiffs had been made with x.AI tools.
The company viewed the technology’s undressing capabilities as “a business opportunity,” the complaint said. Unsatisfied with the profits from Grok, x.AI “licensed and profited from the sale of their dangerous AI technology to third-party companies, often located abroad, who in turn sold subscriptions to customers who could, via their applications, use Grok to produce tailor-made child sexual abuse content,” the complaint said.
Now the three plaintiffs “will live every day with the constant anxiety of not knowing whether someone they encounter has seen this invasive and sexually explicit content created with images of them as children,” the complaint said.
They seek to represent a nationwide class of everyone who had images of themselves as minors altered by x.AI tools to produce sexualized content with identifiable features. They are asking for damages.
The plaintiffs bring claims under Masha’s Law, a federal law that provides civil remedies for victims of child pornography and online exploitation, as well as claims under the Trafficking Victims Protection Act and California’s statutory right of publicity. They also bring design defect, negligence, emotional distress, public nuisance, and Unfair Competition Law claims.
The plaintiffs are represented by Lieff Cabraser Heimann & Bernstein LLP and Baehr-Jones Law PC.
The case is Doe v. X.AI Corp., N.D. Cal., No. 5:26-cv-02246, 3/16/26.