ABA Warns Lawyers to Mind Ethics Rules While Using Generative AI

July 29, 2024, 3:00 PM UTC

Lawyers need to “fully consider” their ethical obligations when using generative AI tools, especially as the technology evolves quickly and remains “a rapidly moving target,” the American Bar Association says in an ethics opinion released Monday.

Attorneys over the last couple of years have quickly adopted genAI tools like ChatGPT, Scribe, and Gemini to boost the efficiency of everything from electronic discovery to contract analytics to basic legal research.

But lawyers face several risks from the technology, the ABA warns in its 15-page report.

They range from protecting client data, to maintaining candor toward the tribunal, to continuing to charge “reasonable” fees.

None may be more important than providing clients a basic level of competence, as promised in Model Rule 1.1 of the ABA’s Model Rules of Professional Conduct, the legal ethics guidelines most states have adopted in whole or in part.

"[L]awyers may not leave it to GAI tools alone to offer legal advice to clients, negotiate clients’ claims, or perform other functions that require a lawyer’s personal judgment or participation,” the report advises. “Competent representation presupposes that lawyers will exercise the requisite level of skill and judgment regarding all legal work.”

The need for attorneys to make sure their work is accurate, and increasingly to double-check it, is vitally important in the age of generative AI, the ABA notes.

The report mentions only once the most egregious and public form of AI-fostered inaccuracies, often referred to as “hallucinations,” though the word appears four more times in footnotes. The ABA describes so-called hallucinations as “providing ostensibly plausible responses that have no basis in fact or reality.”

Last year, a New York lawyer got into trouble after using ChatGPT. The genAI tool created fake case citations and court opinions—responses to prompts that looked plausible and accurate but turned out to be “gibberish.” Since then, several other attorneys have gotten into hot water for inadvertently relying on AI-generated hallucinations in law firm or courtroom settings.

The report offers detailed advice on how much additional fact-checking is warranted when lawyers use a generative AI tool for legal work. The answer depends on the task, the ABA says.

“The appropriate amount of independent verification or review required to satisfy Rule 1.1 will necessarily depend on the GAI tool and the specific task that it performs as part of the lawyer’s representation of a client,” the report says.

The report also notes the risks raised by genAI tools regarding disclosure of confidential information—which would violate a lawyer’s duty under Model Rule 1.6. “[T]his risk analysis will be fact-driven and depend on the client, the matter, the task, and the GAI tool used to perform it,” the report says.

When use of a genAI tool poses a risk of direct or indirect disclosure of confidential information, a client’s informed consent is needed before inputting information regarding the representation into such a tool, the ABA says.

Lawyers also must take care to adhere to Model Rule 1.4, which governs communications with clients.

The report spells out several instances in which lawyers clearly must be upfront with clients about their use of generative AI tools, including when clients ask about the use of AI or when a lawyer proposes to input information relating to the representation into a genAI tool.

But in many other cases, the answer may not be simple. “It is not possible to catalogue every situation in which lawyers must inform clients about their use of GAI. Again, lawyers should consider whether the specific circumstances warrant client consultation about the use of a GAI tool, including the client’s needs and expectations, the scope of the representation, and the sensitivity of the information involved,” the report says.

To contact the reporter on this story: Sam Skolnik in Washington at sskolnik@bloomberglaw.com

To contact the editor responsible for this story: Martina Stewart at mstewart@bloombergindustry.com
