This week, lawyers were astounded by news of the artificial intelligence (AI) blunder in which two New York lawyers submitted a legal brief citing six nonexistent cases invented by ChatGPT. While the lawyers in that case were ordered to show cause on June 8 as to why the court shouldn’t impose sanctions, courts around the country are grappling with ways to avoid similar debacles with a technology that isn’t yet fully understood.
Judge Brantley Starr of the Northern District of Texas took an initial stab at it on May 30. “All attorneys appearing before the Court must file on the docket a certificate attesting either that no portion of the filing was drafted by generative artificial intelligence (such as ChatGPT, Harvey.AI, or Google Bard) or that any language drafted by generative artificial intelligence was checked for accuracy, using print reporters or traditional legal databases, by a human being,” Starr’s order said.
Starr’s order uses striking language to describe generative AI. (“These platforms in their current states are prone to hallucinations and bias.”) But lawyers should focus on the order’s oversight requirement.
Ethical Obligation of Supervising AI
Artificial intelligence, unlike members of the bar, didn’t “swear an oath to set aside their personal prejudices, biases, and beliefs to faithfully uphold the law and represent their clients,” Starr said.
Make no mistake—while generative AI may do well on portions of the LSAT, law school exams, or the state bar exam, it’s not a member of the bar and can’t hold itself out as a lawyer.
That may be obvious, but the point needs to be made, over and over again.
Many states require that members of the bar swear an oath that they will conduct themselves in accordance with their state’s ethics rules. These rules are often modeled after the ABA Model Rules of Professional Conduct, which set strict obligations regarding the supervision of nonlawyers.
For lawyers who choose to use AI to assist them with their work, ABA Model Rule 5.3 (or a state equivalent) seemingly applies. The generative AI’s conduct as a modern-day “nonlawyer assistant” must be “compatible with the professional obligations of the lawyer.”
In addition, a lawyer must supervise the work of the nonlawyer, and is ultimately responsible for the nonlawyer’s work product. “The measures employed in supervising nonlawyers should take account of the fact that they do not have legal training and are not subject to professional discipline,” a comment to the rule says.
Lawyers, on the other hand, are subject to professional discipline, and must heed the laws and ethics rules of their state, or face sanctions. Federal court rules impose obligations as well, including Rule 11(b) of the Federal Rules of Civil Procedure, which states that by signing a pleading, motion, or other paper, an attorney “certifies that to the best of the person’s knowledge, information, and belief, formed after an inquiry reasonable under the circumstances,... the claims, defenses, and other legal contentions are warranted by existing law.”
An attorney’s knowledge, information, and belief cannot be supplanted by an AI-generated output, nor can their ethical duties. As courts issue new orders governing generative AI, lawyers need to stay apprised of them.