How AI is Prompting Law Schools to Revise Their Honor Codes

March 10, 2025, 9:00 AM UTC

Law schools are updating their honor codes to address artificial intelligence in response to students’ rapid uptake of the technology.

The changes come as generative AI is transforming legal research and writing, in all facets of the practice of law. In response, law schools are embracing AI education—from coursework to clinics.

But increasingly popular AI tools—including those embedded in legal research software—combined with a lack of clear guidelines create the risk of ethical issues. As law schools wrestle with how best to educate students on the technology while controlling for dishonesty, school policies range from explicit bans on generative AI use to allowing professors to set their own rules within limits.

“This is one of the really exciting things about GenAI right now. Students are seeing their professors struggle with it in real time,” said Bryant Walker Smith, associate professor in the schools of law and engineering at the University of South Carolina.

The university’s school of law is among those that recently revised their honor codes, doing so at the beginning of this academic year. The policy, which didn’t previously mention AI, now categorizes unauthorized use as plagiarism.

“Our students are either going to use AI tools or be used by AI tools,” said Smith.

Rewriting the Handbooks

In April 2023, the University of California, Berkeley School of Law was among the first law schools to issue a comprehensive set of rules on AI use, according to administrators.

Berkeley allows students to use generative AI for research and editing, but not to compose an assignment, and not for any purpose in exams. It also “never may be employed for a use that would be plagiarism if generative AI were a human or organizational author,” the policy states. However, professors can deviate from the rules in writing with notice.

“I thought it was critical to stop the ban advocates from creating a broad prohibition on these technologies because their use will be so crucial for future professionals,” said Chris Hoofnagle, faculty director of the Berkeley Center for Law & Technology.

Hoofnagle developed the policy with two other law professors, and in one of his courses, Programming for Lawyers, students start using ChatGPT and DeepSeek by week seven, he said.

“What many people don’t understand is that there are uses of this technology that align with pedagogical mission and ought to be even encouraged,” Hoofnagle said.

University of Chicago Law School’s policy went through a few iterations. The school initially applied existing plagiarism rules to AI usage in assignments, before updating the policy to require students to cite AI-generated content in assignments. Months later, in Fall 2023, the law school revised its AI policy again to resemble Berkeley’s more flexible rules.

Most professors stuck with the default of not allowing students to use AI except for tasks such as brainstorming and proofreading, said Deputy Dean William Hubbard.

“But some professors have been very entrepreneurial to make drafting and research with AI an integral part of how they teach the material,” Hubbard said. Chicago Law even recently introduced AI into its mandatory 1L legal research and writing course, he said.

From Ban to Norm

What’s clear is that law schools fall across the spectrum when it comes to their embrace of AI in assignments.

Columbia Law School, in line with the university, prohibits using AI unless explicitly permitted by an instructor. However, the University of Pennsylvania Carey Law School and The George Washington University Law School leave it up to professors to set their own policies within general guidelines on academic honesty.

Laurie Kohn, senior associate dean for clinical affairs at GW Law, said that in clinical programs students are not only learning law but practicing under the supervision of licensed attorneys.

“So, from an educational perspective, we are not doing them any service by banning the use of generative AI. It’s a norm of practice,” Kohn said.

Washington University in St. Louis School of Law provides faculty with sample AI policies they can adjust to course objectives.

“We don’t have a rigid policy that says you should not use AI in any way, but an individual faculty member can have such a policy,” said Peter Joy, vice dean for academic affairs.

For now, there doesn’t seem to be an appetite to standardize a learning approach in law school. Though the ABA Standing Committee on Ethics and Professional Responsibility issued its first opinion on attorneys’ use of generative AI in July 2024, neither the ABA Section of Legal Education and Admissions to the Bar nor the Association of American Law Schools has set specific rules on AI in law school classes.

At South Carolina Law, 2L student Lauren O’Steen found that AI usage in classes depended on her professors’ comfort levels. But she welcomed opportunities to use AI tools in class, she said.

“Law school should reflect the legal environment we are going to go into. You should have the same tools that you are going to have outside of law school and with that comes the responsibility, too,” O’Steen said.

To contact the reporter on this story: MP McQueen at mmcqueen@bloombergindustry.com

To contact the editors responsible for this story: Lisa Helem at lhelem@bloombergindustry.com; Rachael Daigle at rdaigle@bloombergindustry.com
