ChatGPT, OpenAI’s provocative artificial intelligence program, has come close to passing the multiple-choice portion of the bar exam. The bot has also earned passing grades on law school essays that resemble ones written for the exam.
These feats are particularly notable because ChatGPT has not been to law school, paid for a commercial bar review course, or devoted all its energies to bar exam study for 10 weeks. Nor has it been trained on legal databases. It is very likely that, once ChatGPT is exposed to more legal materials, it will consistently pass the bar exam.
Alas, some percentage of the humans who take the exam will still fail. They would be able to find correct answers in seconds by using ChatGPT 2.0—or in minutes by retrieving the law from other sources—but won’t be permitted access to any of those sources on a closed-book bar exam. The humans who pass will need two full days to eke out a passing score; ChatGPT 2.0 will beat their scores in under an hour.
Added Human Value
Yet, even in a bill-by-the-hour world, clients who can afford it will still seek out human lawyers. Why? Because humans are far better than bots at eliciting facts and goals from clients, identifying new avenues of research, and solving multi-dimensional problems. Human experts will supplement those advantages by knowing when to consult AI, how to assess AI responses, and how to integrate AI knowledge with the human dimensions of a client problem.
When admitting candidates to law practice, those are the qualities we must assess—not the bot-like knowledge and skills tested by the current bar exam. That exam guarantees just two things: First, that those who pass will be able to do what the bot can do, but much more slowly and not as well. Second, that people of color will be disproportionately excluded from our profession.
Leave Memory to the Bots
The bar exam heavily tests candidates’ recall of detailed legal principles, like the arcane rule against perpetuities assessed on the most recent exam. A century ago, lawyers stored those principles in memory, pulling them out when necessary. But the explosive growth of legal rules since the New Deal has made that practice impossible: There are simply too many legal rules for any lawyer to accurately remember more than a tiny fraction of them. And good lawyers recognize that legal rules change, both over time and as lawyers move across state lines.
Rather than memorize rules, contemporary lawyers rely on sources. Statutory codes, treatises, handbooks, in-house knowledge banks, and electronic databases collect the legal rules that inform law practice. Competent lawyers know how to tap those sources, which ones are reliable, and which ones are most efficient for a particular purpose. When used by a professional, those sources vastly expand mental capacity.
The closed-book bar exam, in contrast, defines professional competence by the bounds of human memory. ChatGPT underscores the futility of that definition. When it comes to spitting out rules, properly trained AI will beat a human lawyer every time. The human lawyer’s competence lies in exploring the nuances of legal principles, identifying novel applications for those principles, seeking out relevant facts (rather than plucking them from a hypothetical), and counseling clients based on a deep understanding of their personal needs and contexts.
Reaching Outside the Box
The bar exam does not effectively test any of those professional skills. Nor, somewhat surprisingly, does it appropriately assess candidates’ competence at legal writing. Bar exams purport to do that through essay questions and performance tests, but the time limits for those exercises are bizarrely short.
What human writes well when given 30 minutes to absorb a page-long, densely packed fact pattern, recall the applicable rules of law, and then compose an answer? Or when required to synthesize 15 pages of novel facts and legal sources in just 90 minutes?
As ChatGPT demonstrates, well-trained AI can perform these tasks in less than a minute—with flawless grammar, spelling, and organization. Astute clients will want lawyers who know how and when to use the bot, how to explore subtleties outside the bot’s comprehension, and how to devise creative solutions. Bots are better than humans at thinking inside the box, but humans excel at reaching outside the box.
Time to Rethink the Exam
Licensing programs based on practice portfolios, like Oregon’s Provisional Licensing Program, offer a more realistic evaluation of contemporary lawyering. Examiners in that program assess redacted written work produced for clients, as well as rubrics appraising client counseling and negotiation skills, rather than multiple-choice answers and hurriedly penned essays.
By focusing on the work that lawyers actually do for clients, the program offers a robust measure of lawyering competence—and leaves hasty legal analysis snatched from memory to the bots.
ChatGPT provides yet more evidence that time-pressured, closed-book written exams reflect outdated lawyering practices. Those exams perpetuate exclusionary practices without adequately protecting clients.
This article does not necessarily reflect the opinion of Bloomberg Industry Group, Inc., the publisher of Bloomberg Law and Bloomberg Tax, or its owners.
Author Information
Mary Lu Bilek is a former dean and professor of law at the University of Massachusetts Law School and the City University of New York School of Law.
Deborah Jones Merritt is a distinguished university professor and the John Deaver Drinko/Baker & Hostetler Chair in Law Emerita at the Ohio State University Moritz College of Law.