Lawyers Caught Misusing AI Fuel Emerging Legal Education Sector

December 15, 2025, 3:00 PM UTC

When a federal judge asked California solo practitioner William Becker Jr. to explain why a motion seemed to be riddled with AI-hallucinated citations, he knew what he needed to do.

Becker, representing a defendant in a case involving former NFL punter Chris Kluwe, informed the judge that he’d taken “affirmative steps” to learn about professional responsibility issues around the use and misuse of AI. He said he had attended a continuing legal education class on AI ethics for lawyers and promised to take another class soon.

“I take the Court’s admonitions seriously,” said Becker in his Oct. 29 declaration. “I found it highly instructive and am integrating its guidance into my practice.”

State bar and law firm leaders say attorneys such as Becker, and the growing number like him who have gotten into trouble for misusing AI, need to take greater responsibility for their AI use, particularly when using generative AI programs like ChatGPT.

Legal industry veterans are developing a broad range of AI-focused educational programs to help lawyers avoid embarrassing pitfalls and more effectively harness transformative tech that’s already streamlining lawyer workflows.

“Lawyers are seeking more guidance, and we haven’t been giving them enough,” said Judge Xavier Rodriguez of the US District Court for the Western District of Texas. “State bars are now recognizing that we need to do a better job,” said Rodriguez, who’s also a CLE instructor.

After OpenAI introduced ChatGPT in November 2022, the rise in misuse of generative AI chatbots in legal filings was modest and gradual from 2023 to 2024, according to a Bloomberg Law analysis. But with litigants’ use of the technology becoming increasingly common, instances of misuse exploded in 2025, from 31 in the first quarter to 167 in the third.

AI-focused CLE course offerings are proliferating as pro se litigants, solo practitioners, small-firm lawyers, and “Big Law” attorneys increasingly face consequences for presenting courts with hallucinated, GenAI-devised citations and non-existent cases. Tech advocates say CLE is a belated fix, arriving only after scores of federal judges have issued standing orders and sanction orders meant to govern and guide AI use, according to BLaw data.

‘Take Responsibility’

State bar leaders, the ABA, and law firm administrators say CLE lessons often boil down to a simple directive: Don’t over-rely on a machine to produce accurate legal data, even one as seemingly knowledgeable and eager to please as a GenAI tool.

AI-focused CLE classes at Paul, Weiss, Rifkind, Wharton & Garrison stress fundamental legal ethics duties of candor, competency, confidentiality, and supervision, said the firm’s AI group chair, Katherine Forrest, also a former federal judge in Manhattan.

“You must take responsibility,” Forrest said. The submissions you make to the courts are on you, she said, “not the AI.”

“They’re among the most useful CLE classes out there,” Forrest said, given that law students mostly aren’t yet learning about AI in law school.

Some of the largest state bars, including those in New York, California, and Texas, are focusing efforts on providing CLE programming that emphasizes the essential role of human oversight.

“The duty of competence requires understanding how generative AI tools function and knowing both their appropriate and inappropriate uses,” said State Bar of California Executive Director Laura Enderton-Speed.

The technology doesn’t “replace sound judgment and discretion,” said Hedy Bower, the professional development division director of the State Bar of Texas. “It does not relieve lawyers of their obligation to review their work carefully and verify its accuracy,” Bower said, adding that AI tools don’t “exempt them from the duty to safeguard client confidentiality.”

Bower noted that attorneys should also learn about applying fair billing practices when using GenAI. “Efficiencies gained through AI must benefit the client when using hourly billing practices,” she said, adding that attorneys should disclose GenAI use to their clients and follow local rules when using the tools in court.

Client Service

In-house CLE administrators at four top-50 law firms say they’ve grown their number of GenAI-focused programs, balancing lessons on how to avoid ethical missteps with instruction for their lawyers, and at times their clients, on how to use the tech to save time and money.

As lawyers’ use of AI tools is set to “exponentially increase,” the ways attorneys misuse them also will grow, said Merri Baldwin, a partner with Rogers Joseph O’Donnell who taught Becker’s AI ethics CLE course. These programs “are not going away, and will adapt as uses evolve.”

The amount of training on AI tool usage has increased, as tools have become more specialized, said Allen Waxman, a New York-based of counsel with DLA Piper’s AI practice. Lawyers have told the firm, he said, that understanding the underlying technology “made it easier to understand the risks and limitations of the tools.”

Attorneys are eager to explore AI’s “transformative potential” to elevate client service, from accelerating document review to generating strategic insights, said Michelle Carter, King & Spalding’s chief lawyer talent development officer.

At the same time, she said, “We ensure they understand how to navigate risks and apply safeguards without slowing innovation.”

As for Becker, Judge Fred W. Slaughter, who had ordered Becker to show cause why he should not be sanctioned, did impose $2,000 in sanctions on the attorney for “alleged citations to non-existent cases and misrepresentations of existing case law,” a Nov. 4 order of the US District Court for the Central District of California said.

Slaughter noted that Becker, who didn’t respond to BLaw’s request for comment, had a duty to comply with California’s rules of professional conduct.

Sanctions were warranted against Becker for “his failure to verify the validity of the AI-generated material, and submitting the AI-tainted Motion to the court,” the judge said.

Though Slaughter didn’t order Becker to attend CLE courses, he took note of the lawyer’s decision to “educate himself” on professional responsibility issues arising from AI misuse.

To contact the reporter on this story: Sam Skolnik in Washington at sskolnik@bloomberglaw.com

To contact the editors responsible for this story: Blair Chavis at bchavis@bloombergindustry.com; Martina Stewart at mstewart@bloombergindustry.com
