Oscar Brownfield—with AI help—was representing himself in Oklahoma federal court when he sought sanctions against his employer’s counsel, accusing them of knowingly filing false claims in an ongoing litigation.
His unsuccessful request to strike certain pleadings from the Cherokee County School District’s summary judgment motion backfired, however, when opposing counsel revealed that Brownfield’s artificial intelligence-supported motion cited fictitious cases.
The defense sought $7,000 in sanctions to cover the time and resources it reportedly spent reviewing and responding to the filing. But last month, an Eastern District of Oklahoma magistrate judge fined Brownfield only $500, warning that future infractions would bring a “more severe sanction.”
Brownfield is preparing for trial on his claim of illegal removal from his roles as substitute teacher and wrestling coach after filing a Title IX complaint about gender discrimination faced by female athletes.
He’s also urging a district judge to vacate the “clearly erroneous” fine, claiming the magistrate’s order conflicts with precedent that grants “pro se litigants greater leniency” due to their lack of legal background.
Brownfield’s case is one of more than a dozen pro se lawsuits against employers, including Dell Inc. and Fox News Network LLC, in which judges in recent months have warned or sanctioned plaintiffs over AI-generated briefs citing fake or misstated judicial precedents. Courts view such AI hallucinations as an abuse of the legal system.
The proceedings coincide with a documented increase in pro se federal labor and employment lawsuits in recent years, fueled in part by litigants’ use of chatbots like OpenAI’s ChatGPT. Tension is growing as AI’s potential to democratize traditional access to justice clashes with the technology’s risks, legal observers say.
“We’ve seen better pleadings, certainly some more informed responses to motions to dismiss,” from pro se litigants, Chief Magistrate Judge Vera Scanlon of the Eastern District of New York said at a recent American Bar Association conference.
But she’s faced “hallucination problems,” recalling a recent review of a response brief that initially “seemed to make a lot of sense” but cited non-existent cases.
Verifying filings’ accuracy is adding to judges’ and defense attorneys’ workloads, diverting time from the merits of cases, several courts said.
Access to Justice
The ever-changing legal landscape can be difficult for non-lawyer litigants to navigate alone due to the significant resource gap between workers and corporations, legal observers said.
Despite hallucination issues, justice reform advocates see AI’s potential to expand access to court.
The World Justice Project’s 2025 Rule of Law Index ranks the US 112th out of 143 countries for “accessibility and affordability of civil justice,” citing factors such as litigation costs and income disparity.
Law schools like Cornell, New York University, and Stanford use AI tools to educate low-income individuals on their legal rights and how to navigate cases as pro se litigants.
Some plaintiffs or small firms have used AI to win cases, including a $27.5 million verdict last year in a retaliation suit against Dignity Health.
Tech companies present the AI tools “as democratizing access, whether to jobs or the court. But the question is, ‘Is that truly the case if those technologies aren’t yet quite up to snuff?’” said Ifeoma Ajunwa, an anti-discrimination and AI ethics professor at Emory Law School.
“I would love to see a future where AI technologies are created to enable pro se claimants to successfully petition the court. But that future starts with AI governance,” she said.
Management-side lawyers say pro se litigants use AI to file large volumes of submissions and complex document requests, leading to longer litigation and higher settlement value.
Judicial response varies. Some courts prohibit pro se litigants and lawyers from using the technology, while others require filers to certify they’ve verified their briefs’ accuracy.
Pro se plaintiffs aren’t bound by lawyers’ ethical and professional rules. Judges can sanction parties for misconduct, but are advised to balance enforcement to preserve access to justice.
Discovery’s New Frontier
Beyond hallucination concerns, defense attorneys’ demands for pro se litigants’ AI prompts and outputs during proceedings are emerging as a major legal issue.
Disclosing a plaintiff’s initial views of their claims shared with a chatbot—possibly different from court pleadings—and the tool’s use could be damaging, said Schwanda Rountree, a partner at Sanford Heisler Sharp LLP.
Two novel federal court rulings in February addressed whether AI data is discoverable and whether it can be shielded by attorney-client privilege, which protects confidential legal communications, or the work-product doctrine, which shields legal strategies from disclosure.
A New York federal judge ruled that a defendant couldn’t shield his exchanges with Anthropic’s Claude about his securities fraud defenses because the chatbot wasn’t an attorney, nor used at his counsel’s direction or for legal advice.
Meanwhile, a Michigan federal judge held that a plaintiff’s ChatGPT data in an employment discrimination case was protected by the work-product doctrine because of her pro se status.
“There’s much more to come,” Rountree said, anticipating future fact-specific litigation on the discovery issue absent a national precedential standard.
Evolving Client Relationship
Worker-side lawyers like Michael Ansell of NextGen Counsel see AI potentially upending traditional attorney-client relationships.
Many clients now request “limited representation,” wanting only review and revision of legal documents that Ansell said he suspects are AI-generated.
He’s also showing clients how inputs influence outputs “to knock them out of relying on ChatGPT and not trusting me,” Ansell added.
Plaintiffs’ attorneys are also modifying retainer agreements or welcome letters to bar clients’ AI use and address confidentiality concerns, Rountree said.
Brownfield, whose request to vacate his sanction failed earlier this month, didn’t respond to a request for comment. Court filings indicate he completed a certificate program on hallucination and ethical AI for $1,800 before being fined.
AI can’t replace attorneys, and plaintiffs who use chatbots must exercise caution to protect their reputations, Rountree said.
Frivolous claims or false citations harm workers’ rights advocacy and complicate judicial decisions when courts aren’t “dealing with reputable legal arguments or facts,” she said. “It creates difficulty on all ends.”