A California lawmaker is exploring rules on how legal professionals use artificial intelligence—particularly the type that can generate text and other content on its own—when filing court documents.
Assemblymember Josh Lowenthal (D) last month introduced a measure (A.B. 2811) that would impose disclosure and citation requirements on AI-assisted legal filings. Details on those requirements aren't yet available because the office is still working out specific language, said Guy Strahl, Lowenthal's chief of staff.
The bill comes amid debate and controversy over how AI will impact the legal sector. Already, some high-profile cases have captured the legal industry’s attention, such as New York lawyers submitting briefs citing non-existent cases fabricated by popular AI tool ChatGPT.
“There are multiple issues that need to be addressed” when lawyers use AI, said Bradford Hise, who advises attorneys on legal ethics at Hanson Bridgett. “This is a fascinating area, and it’s evolving very quickly.”
Precautionary Efforts
Most lawyers are already using some form of AI when prepping their work, Hise said. Legal research products like Westlaw or Bloomberg Law’s tools use the technology to help easily search for past cases or automate brief analyses, he added.
Questions arise when lawyers use generative AI like ChatGPT to produce their work outright, such as drafting briefs and other documents. Lawyers who use such technology must ensure everything it produces is accurate, Hise said, from the text to the citations to the legal conclusions. The technology can "hallucinate," or generate plausible-sounding but false output, such as made-up court cases.
Across the nation, some policymakers are taking initial steps to keep inaccurate AI from affecting lawyers and their clients negatively. A policy memo for a New York City borough’s office, for instance, bans generative AI use for dispensing legal advice, arguing AI models lack an understanding of up-to-date legal principles.
In California and some other states, bar associations have recognized the technology's growing prevalence and begun addressing it. The California State Bar issued guidelines last November, presumably becoming the first regulatory agency for lawyers to do so.
The guidance calls for lawyers to disclose AI use to clients and to ensure a human is reviewing all AI-generated outputs. Anything produced should be examined for inaccuracy and bias, according to the document, and there should not be an overreliance on such tools. The bar noted this guidance is a “living document” that could be updated as the technology evolves.
The guidance is similar to what Lowenthal intends to address with disclosures and citation accuracy in his bill. In fact, the guidance called on the state bar to work with the state legislature on exploring law changes, including whether legal generative AI products need to be regulated.
California State Bar spokesperson Rick Coca said the bar isn’t involved and hasn’t taken a position on the Lowenthal effort.
Not Necessary?
Some legal analysts question if the bill is necessary. They note that there are already mechanisms in place that punish attorneys who submit false or inaccurate information, including from AI, such as sanctions and fines from a judge or a malpractice lawsuit from a client.
“There’s certainly a problem with incompetent lawyers. It’s just not a problem that the bill will solve,” said Eugene Volokh, a law professor at the University of California, Los Angeles.
Volokh also questioned whether disclosure requirements would have any effect. AI use could become as routine as, for example, having summer law students help prepare a brief or other legal document.
“So imagine there was a rule that said you have to disclose whether anybody who’s not a member of the bar worked on the briefs,” said Volokh. “All of these briefs would have this one sentence and the judges will ignore it.”
Ultimately, some attorneys contend it's better for the legal profession to be regulated by lawyers and judges, not legislators. State statute is much harder to change and can't keep pace with AI's rapid evolution, they argued.
“I would be hesitant to support changes that address a technology that may, in fact, be superseded in two years, six months, whatever time frame,” said Hise. “I think it’s probably better for lawyers to look at technology through the lens of our existing rules of professional conduct.”