X.AI Corp. failed to implement safeguards that would prevent the creation and proliferation of thousands of non-consensual deepfake images on X, formerly known as Twitter, a class action suit alleged.
The maker of X’s AI chatbot Grok knew the danger the technology could pose to women and girls but “has chosen instead to capitalize on the internet’s seemingly insatiable appetite for humiliating non-consensual sexual images,” according to the complaint filed Jan. 23 in the US District Court for the Northern District of California.
The lawsuit comes amid increased global government scrutiny of how Grok is being used to create and distribute sexual deepfake imagery. Canada recently announced it was expanding an investigation that it launched in February 2025, and earlier this month Japan announced a probe into the chatbot’s role in spreading sexualized imagery of people without their consent. The EU also launched an investigation Monday into how Grok is being used to create and distribute child sexual abuse material.
Earlier this month, the US Senate unanimously passed the Defiance Act, which would allow victims to sue over non-consensual sexually explicit AI-generated images. The bill was introduced after widespread backlash to a flood of graphic content on X.
And a coalition of 35 attorneys general sent xAI a letter Friday demanding action to prevent Grok from creating non-consensual explicit images and child sexual abuse material.
“You are investing tens of billions of dollars into developing powerful AI tools,” the letter said. “In doing so, it is your obligation to comply with the law and devote sufficient attention and resources to avoiding the kind of widespread harms and abuses we are seeing now.”
Grok’s makers “took little to no action to ensure” that it “would avoid producing non-consensual images of people in a sexualized or revealing manner,” the lawsuit said.
Unlike competitors such as Google and OpenAI, xAI doesn't use standard data-filtration methods to remove sexual and abusive content from training data, the complaint alleged. If it had, Grok wouldn't be able to generate the deepfakes.
The anonymous lead plaintiff said that the day after she posted a photo of herself fully clothed, she woke up to find that Grok had used that image to create a revealing bikini photo and posted it on X.
Jane Doe said she experienced severe emotional distress after seeing the image and worried that her employer or coworkers would see it. The post remained visible for three days, and Doe missed work while trying to have it removed, the complaint said.
Doe was “overcome with disgust at the thought of what the X user who had asked Grok to create the deepfake of her was doing with the photo,” the complaint said.
She seeks to represent a nationwide class of everyone who was depicted in a sexualized image that Grok made without consent, as well as a South Carolina subclass. Doe is asking for damages.
Doe brings claims for design and manufacturing defects, negligence, public nuisance, and a violation of California’s Unfair Competition Law. She also brings claims for the appropriation of likeness, a violation of California’s Right of Publicity Statute, defamation, intentional infliction of emotional distress, and an intrusion into private affairs.
In response to a request for comment on the suit, xAI sent an auto-generated email stating “Legacy Media Lies.”
Doe is represented by Berger Montague PC.
The case is Doe v. X.AI Corp., N.D. Cal., No. 5:26-cv-00772, complaint filed 1/23/26.