Big Law Grapples With AI-Fueled Pro Se Surge, Rising Legal Costs

March 12, 2026, 9:00 AM UTC

Welcome back to the Big Law Business column. I’m Roy Strom, and today we look at the rise of the ChatGPT-powered pro se plaintiff. Sign up for Business & Practice, a free morning newsletter from Bloomberg Law.

Big Law attorneys are squaring up against a new, costly courtroom opponent: everyday people filing lawsuits and legal briefs with the help of generative artificial intelligence.

Tools like OpenAI’s ChatGPT are driving a surge in lay litigants known as pro se plaintiffs. Employer-side labor law firm Fisher Phillips found that pro se employment lawsuits rose 49% last year. Lawsuits filed without a lawyer in federal courts under the Fair Housing Act jumped by 69% through the first nine months of 2025 compared to the prior year, Seyfarth Shaw said.

Kristin White, a Denver-based Fisher Phillips partner, said every litigator in her office is handling at least one case brought by a pro se plaintiff, most of whom she suspects used AI to help generate court documents. She is defending at least three such cases.

“It used to be unusual,” White said. “Now it is more the norm.”

Defending the cases can cost about 10% to 15% more than a typical employment claim, White said, because the litigants make larger settlement demands, file a bevy of motions, and wage bigger battles over discovery.

“We have a case now that is in the court of appeals,” White said. “If you had an attorney on the other side, we would have been done by now. So the clients seem to understand these will take longer and cost more money, because nobody has a good answer for it.”

ADA Suits

Seyfarth Shaw partner Minh Vu said her team of five partners who defend clients against Americans with Disabilities Act claims has handled at least four cases brought by AI-assisted pro se plaintiffs since the middle of last year.

The cases are far more expensive than typical claims, she said, because the litigants don’t worry about the cost or the time it takes to wage the fight.

While plaintiffs’ lawyers historically weighed financial constraints—picking cases likely to win and filing motions likely to be effective—AI turns those constraints on their head. It can fabricate precedents that convince litigants their positions are strong, and it enables them to quickly dispute any motion Vu files.

“These were all-out, scorched-earth litigations,” she said. “We were getting responses to our filings within an hour.”

Efforts abound to build AI tools for pro se litigants, and the tools have real advantages. They can help overcome the high cost of pursuing a lawsuit, as filers use them to summarize lengthy arguments, learn court rules, and format legal documents.

There are some stories of pro se litigants successfully using AI to win their cases. But there is no database tracking how widespread that is.

Either way, AI alone is not inherently an access-to-justice victory.

In some cases, plaintiffs are getting in trouble with courts—primarily because of chatbots’ proclivity to generate fake legal cases, and because some litigants use the tools to overwhelm defendants with frivolous filings.

It’s impossible to know how frequently pro se plaintiffs are filing documents citing fictitious cases. But a database tracking court rulings that find AI was improperly used recorded 52 such decisions in February. That compares to two such decisions in February last year.

Potential Solutions

There are some nascent efforts to address the problem.

New York state legislators are considering a bill that would ban generative AI tools from providing legal advice. The proposal would also allow civil suits to be brought against chatbot owners who violate the law.

An insurer last week sued OpenAI for the unauthorized practice of law, alleging its chatbot convinced a litigant to fire her lawyer and re-litigate a disability settlement she reached with her employer.

At least 24 pro se litigants in the US have been hit with monetary sanctions since the second half of 2023 for litigating with AI, according to the AI hallucination cases database maintained by Damien Charlotin, a senior researcher at business school HEC Paris. More than half of those fines have been levied since December.

“In many contexts the hallucination bit is only one factor amongst many when deciding on sanctions,” Charlotin said in an email. “Many of these pro se litigants are also vexatious litigants.”

In the case generating the largest sanction in the database, lawyers at Arnold Porter Kaye Scholer in December won a ruling for more than $66,000 in attorneys’ fees against a Chinese plaintiff who said he is training five more Chinese nationals to use AI to sue their clients in the US.

A judge in the US District Court for the Central District of California said Arnold Porter asked for more than $210,000 for the time it spent looking up AI-generated fake case citations and responding to what the judge called “bad faith” litigation—including duplicative motions and repeated fake case citations, even after the plaintiff apologized to the court for AI-generated errors.

Ronald Johnston, the Arnold Porter partner leading the case, didn’t respond to a request for comment.

The litigant has been suing the internet domain-name oversight body ICANN and registry operator Verisign Inc. over claims that since at least 2008, he has wrongly been denied the rights to single-character website names, ranging from “a.com” to “z.com” and “1.com” to “9.com.”

The judge in the case said at least two other plaintiffs had brought lawsuits against Verisign since the litigant threatened more pro se lawsuits would be filed. Meanwhile, the original plaintiff has filed an appeal, challenging the dismissal of his case and the $66,000 lawyer-fee award.

The opening brief in the appeals court is 456 pages long, most of which consist of motions the plaintiff already filed in district court. At least two of those motions had previously been flagged by the district court judge for including hallucinated case citations.

It’s too early to know what will rein in the problem. But in the meantime, law firm partners will be telling clients to expect a drawn-out fight the next time they come up against a pro se plaintiff.

Worth Your Time

On AI Hallucinations: An assistant US attorney in North Carolina said he’s resigning over AI-created fabricated quotes and erroneous citations in an AI-produced court brief, Kyle Jahner reports.

On Lawyer Discipline: Ed Martin, a senior Justice Department official and staunch ally of President Donald Trump, is facing allegations of misconduct from attorney ethics regulators in Washington, Bloomberg’s Chris Strohm and Zoe Tillman report.

On Anthropic: The maker of the popular Claude chatbot turned to lawyers from WilmerHale to sue the Defense Department over its decision to blacklist the artificial intelligence giant, Justin Henry reports.

That’s it for this week! Thanks for reading and please send me your thoughts, critiques, and tips.

To contact the reporter on this story: Roy Strom in Chicago at rstrom@bloombergindustry.com

To contact the editors responsible for this story: John Hughes at jhughes@bloombergindustry.com; Chris Opfer at copfer@bloombergindustry.com
