OpenAI’s most recent update to its usage policies for ChatGPT provides a window into the company’s efforts to insulate itself from potential liability for handing out legal advice to its users.
The company’s update, effective Oct. 29, tweaked policies around how ChatGPT and other products can be used to provide legal and medical advice. Although some lawyers prematurely and inaccurately celebrated the change as an outright ban on giving legal advice, the update was more a change in wording. ChatGPT still produces legal advice, including draft contracts, if asked to do so.
The disclaimer about legal advice is one way makers of large language models, including OpenAI, Google, and Anthropic, are protecting themselves from claims by shifting the liability burden. OpenAI’s policies, for example, also warn against using its products for defamation, intimidation, weapons development, gambling, and national security purposes.
“There is some level of responsibility on people to understand the machine is not your mother, your mechanic, and it’s definitely not your lawyer, your doctor, or your financial adviser,” said Brenda Leong, director of ZwillGen PLLC’s AI Division.
Shifting Liability to Users
OpenAI’s latest terms of use state users can’t use its services for “provision of tailored advice that requires a license, such as legal or medical advice, without appropriate involvement by a licensed professional.”
The company’s intention is to “overtly shift as much responsibility to the user to say, ‘You use this at your own risk for these kinds of things,’” Leong said.
OpenAI, for its part, said it hasn’t made changes to how GPT behaves. The company didn’t respond to a request for comment about the intent behind its new terms of use.
“ChatGPT has never been a substitute for professional advice, but it will continue to be a great resource to help people understand legal and health information,” Karan Singhal, head of OpenAI’s health AI team, said in a post on X.
It remains to be seen if the company’s massaging of its terms of use will amount to an effective legal defense, said Oliver Roberts, co-head of the AI practice group at Holtzman Vogel.
“It’s just a shift in liability,” Roberts said. “But who knows if this is going to be sufficient.”
Other AI companies have argued in court that users are responsible for their outputs. Perplexity AI Inc. has argued that a copyright lawsuit claim should be thrown out because it is the result of “atypical usage” by users.
The legal arguments stem from the AI industry’s rapid development, said Tom Martin, a lawyer and legal technology developer.
“Foundation model companies have kind of taken a view of any regulation including UPL or copyright or patent or anything else as something to be negotiated,” Martin said, referencing the unauthorized practice of law.
Unauthorized Practice of Law
Copyright suits are just one of the legal outcomes of generative AI’s growth. For the legal industry, a key question is whether the legal information the LLMs produce amounts to non-lawyers practicing law. State bar associations typically decide who can practice law by imposing licensing and certification requirements.
Technology has increasingly muddied the definition of the practice of law and who can provide such counsel. In 1967, a non-lawyer was convicted of a misdemeanor for publishing a best-selling book that gave out legal advice on estate planning, although the conviction was overturned on First Amendment grounds. Websites like LegalZoom, which prepare legal documents for a fee, have for more than a decade fended off claims that they are illegally practicing law.
The South Carolina Supreme Court determined in 2014 that LegalZoom wasn’t engaging in the unauthorized practice of law. The company was helping its customers fill out legal forms based on inputs made by those customers, which doesn’t amount to legal advice, the court found.
The first line of LegalZoom’s current terms of service directly specifies that it’s not a law firm. “I understand and agree that LegalZoom is not a law firm or an attorney, may not perform services performed by an attorney, and its forms or templates are not a substitute for the advice or services of an attorney,” the terms state.
The outgrowth of technological advances has made a nebulous question even harder to answer. All unauthorized practice of law claims depend on defining the practice of law.
But that’s a difficult definition to write up, said Anthony Davis, a legal ethics lawyer and partner at FisherBroyles. Oftentimes, the practice of law is simply defined as what lawyers do, Davis said.
“Doing legal research, everyone can research online,” Davis said. “Was Google practicing law when you searched a case on Google? I don’t think so.”
Further complicating the matter is that states have different definitions of the practice of law. So it will be up to individual states to decide if they want to pursue unauthorized practice of law claims against OpenAI or its users.
Malpractice Claims
By saying that it’s not providing legal advice and that users should consult a “licensed professional,” OpenAI is also making it harder for its users to pursue claims against it for legal malpractice, Davis said.
“In order to sue somebody for malpractice you need some kind of a contract,” Davis said, something that essentially states “I agree to give you legal advice.”
OpenAI also potentially has stronger footing to countersue any user who accuses it of malpractice: It can sue the user for violating its terms and conditions, Roberts, who teaches about AI policy and regulation at Washington University School of Law, said. The biggest effect of the policy could be on people representing themselves without attorneys, Roberts said.
“This is really just going to hurt pro se litigants who are going to, in practice, continue to use it for legal advice,” he said.