Over the last few years, the legal profession has shifted from skepticism about artificial intelligence to large-scale adoption. Yet the single most influential legal technology of our lifetime remains absent from many law school curricula. This isn’t just a missed opportunity. It’s a failure to educate.
For some legal tasks, including research and drafting, AI tools already can outperform attorneys. But the tools aren’t perfect. Since 2023, courts have seen more than 280 instances of filings containing “hallucinated” AI-generated citations. One lawyer was sanctioned after filing fake citations in Wyoming federal court, admitting that it was the first time he “ever used AI for queries of this nature” and that he had only just “come to learn the term ‘AI hallucinations.’”
Law students are trained to use traditional research tools such as Bloomberg Law, Lexis, and Westlaw, because the profession demands it. But the absence of specific AI training may leave the next generation of lawyers underprepared—risking ethical missteps, malpractice, and diminished client services.
AI no longer can be dismissed as a passing fad. It’s reshaping lawyers’ obligations, standards of diligence and competence, and the processes by which law is practiced. American Bar Association Model Rule 1.1 makes this explicit: “Competent representation requires the legal knowledge, skill, thoroughness and preparation reasonably necessary for the representation.”
Rule 1.1’s official commentary underscores that competence requires keeping abreast of changes in law and practice. This includes “the benefits and risks associated with relevant technology.”
The dangers of neglecting this duty are already evident. The misuse of AI tools—most visibly in the filing of fabricated, AI-generated citations—is among the most glaring ethical lapses in the profession. Law schools can’t ignore this reality.
But AI education isn’t only about preventing missteps. E-discovery platforms can sift through thousands of documents and quickly identify key materials. AI-powered platforms can draft in minutes what once would have taken hours. Clients demand efficiency and firms are rapidly adopting these tools. Students who graduate without AI training will be at a disadvantage.
The best way to prepare law students for AI is to give them hands-on experience to assess its limitations and benefits. But law schools also must teach technical and ethical foundations. How do large language models work? Why do they hallucinate? What ethical guidance is in place?
At the same time, law schools must continue emphasizing the traditional skills of legal reasoning and writing, which will allow students to assess AI outputs.
The risks of inaction aren’t hypothetical. OpenAI Inc.’s own benchmarking showed that its previous o3 reasoning model, despite being touted for advanced “legal reasoning” capabilities, hallucinated 33% of the time. OpenAI’s other prior flagship model, GPT-4.5, hallucinated on factual questions at a rate of 37.1%.
These staggering figures underscore why training must include not only what hallucinations are, but also why they occur, how often, and how to detect them. This is especially important given that ChatGPT remains the leading generative AI tool used by lawyers.
Even trusted legal research platforms still issue disclaimers on their AI outputs, reminding users that results are error-prone and require verification.
In one recent case, an attorney at a top law firm admitted to relying on an AI tool that introduced a critical error into a court filing. Starting with a legitimate academic journal article, the attorney used Anthropic PBC’s chatbot Claude to generate a citation for it. The system produced a fake title and authors, which the attorney then included in the filing—an unusual but revealing misstep in which a legitimate source was corrupted into a false citation.
Attorneys also can face consequences for the AI missteps of co-counsel. In the Wyoming case, the offending attorney’s local counsel was fined because she signed the pleadings. The lesson is clear: Law students must learn not only how to use AI responsibly, but also how its use intersects with the ethical duties and practical realities of legal practice.
Law schools are notoriously slow-moving institutions. But in the age of AI, education must go beyond abstract discussion or the occasional elective seminar. Every graduating class without comprehensive AI training enters the profession with an incomplete understanding of a generation-defining technology.
Some object to teaching AI on the grounds that the tools “aren’t good enough yet.” A law professor at the University of British Columbia argued last year that Lexis+AI’s shortcomings meant that he would “strongly recommend delaying its release to law students.”
But law schools don’t—and can’t—dictate which tools lawyers use in practice. If tools such as Lexis+AI already are being used extensively, then schools must prepare students to encounter them. That isn’t an endorsement; it’s responsible education. If a tool isn’t “good enough yet,” then professors should explain why. That, too, is vital training.
The future of legal practice will be written with AI. The only question is whether law schools will prepare their students to properly harness that power.
This article does not necessarily reflect the opinion of Bloomberg Industry Group, Inc., the publisher of Bloomberg Law, Bloomberg Tax, and Bloomberg Government, or its owners.
Author Information
Stefanie Lindquist is dean and law professor at WashU Law.
Oliver Roberts is co-director of the WashU Law AI Collaborative and an adjunct professor at WashU Law.