ANALYSIS: Missouri AI Rules Could Impede Access to Justice

March 7, 2024, 10:00 AM UTC

There’s great potential for artificial intelligence to increase access to legal information for lawyers and nonlawyers alike. Yet Missouri seems determined to stymie access to AI tools for nonlawyer, self-represented litigants, as evidenced by a court rule and a recent decision.

Courts continue to create rules for the use of AI in practice—particularly in litigation—and the rules are bound to be helpful guidance for some legal professionals. It’s nevertheless crucial for judges to prioritize access-to-justice concerns while crafting these rules, to ensure the justice gap isn’t inadvertently widened—as appears to be happening in Missouri.

Missouri’s Dictates

A Missouri court rule and recent decision involving the use of AI by pro se litigants have potentially put this group in an even more disadvantageous position.

No AI for Pro Se Litigants

So far, courts have imposed at least 22 rules addressing AI use in court, according to data from Bloomberg Law. Of those, a 2023 rule from the US District Court for the Eastern District of Missouri is the only one that bans only self-represented (“pro se”) litigants (SRLs) from using generative AI to draft court filings.

Generally, courts don’t treat SRLs differently from attorneys—apart from according their filings a more liberal reading, given their lack of formal legal training. The order out of the Eastern District of Missouri, however, does treat SRLs differently from attorneys by singling them out alone for the technology ban.

Not long after Missouri’s rule went into effect, a Missouri court issued an opinion that could further chill AI use by pro se litigants.

Sanctions for Pro Se Litigants

On Feb. 13, the Missouri Court of Appeals, Eastern District, dismissed the appeal of self-represented, nonlawyer business owner Jonathan Karlen and fined him $10,000 for “numerous fatal briefing deficiencies.” Karlen’s submissions included citations to 22 fictitious, AI-generated cases, the court said.

The court also highlighted other deficiencies in Karlen’s brief, including the erroneous use of Missouri statutes and law, as well as a lack of an appendix, statement of facts, points relied on, and table of contents.

Some of the rationales the court used to explain its decision stem from duties generally imposed on lawyers, not pro se litigants. This creates a precedent that is potentially harmful to self-represented parties looking to get help from AI in their legal matters.

For instance, the court referenced the American Bar Association Model Rules of Professional Conduct a number of times when discussing Karlen’s conduct, yet these ethical obligations only apply to lawyers.

The court also cited a New York federal district court decision from June 2023 that sanctioned two New York lawyers for filing a brief that contained ChatGPT-generated case law. But the Missouri court failed to highlight some notable differences between these two cases.

In the New York case, the sanctioned parties were both licensed attorneys who drafted their own filings and were initially dishonest with the court about the deficiencies in those filings. Pro se appellant Karlen, however, is not an attorney, and he informed the court in his reply brief that he outsourced the drafting of his brief to a consultant, that he believed the consultant was an attorney, and that he wasn’t aware the consultant would rely on fictitious cases.

Despite these differences, the Missouri appeals court exercised its discretionary authority to award damages and sanctioned Karlen $10,000—double the amount each attorney in the New York case was fined.

Education Over Embargoes

There are genuine concerns about whether AI can serve as a reliable tool for pro se parties, especially given the hallucination-prone nature of the most accessible chatbots, such as ChatGPT. And the Missouri case does exemplify risks that pro se litigants may be unable to recognize or correct without a solid understanding of the law.

However, sweeping rules banning the use of generative AI, along with large sanctions imposed on pro se litigants, could have a chilling effect on access-to-justice initiatives involving AI-powered technologies that are actually designed to handle legal inquiries.

And there’s real momentum behind AI’s potential to increase access to justice and to digestible legal information, especially in states like Missouri that fall below the national average in attorney access.

Judges considering imposing rules or issuing decisions similar to those in Missouri should instead consider providing educational resources to pro se parties about the limitations and shortcomings of publicly available large language models.

Bloomberg Law subscribers can find related content on our Legal Operations, ABA/Bloomberg Law Lawyers’ Manual on Professional Conduct, and Lawyer Development Toolkit resources.


To contact the reporter on this story: Stephanie Pacheco at spacheco@bloombergindustry.com

To contact the editor responsible for this story: Melissa Heelan at mstanzione@bloomberglaw.com
