Artificial intelligence is pushing swiftly into professional disciplines. Users are turning to AI chatbots for advice traditionally dispensed by licensed professionals in accounting, financial planning, medicine, and law.
Licensing regimes for these professions exist to preserve rigor, standards, and professionalism, but also to protect the public from abuse and fraud.
The risk of AI generating legal advice has been brought to the fore in a recent federal lawsuit. Nippon Life Insurance Company of America successfully defended itself against two frivolous lawsuits brought by an individual relying on OpenAI’s ChatGPT for advice and the generation of legal documents.
On March 4, Nippon sued OpenAI in the US District Court for the Northern District of Illinois to recoup its legal costs, alleging the unlicensed practice of law, tortious interference with a contract, and abuse of process. Nippon is seeking $10 million in punitive damages, $300,000 in compensatory damages, and a permanent injunction prohibiting OpenAI from practicing law.
Practice of Law
Passing the bar signals the beginning of a career in a regulated profession with high barriers to entry. The self-regulating legal industry has mechanisms to correct improper conduct: Lawyers are liable for malpractice when they give reckless advice, they are subject to disbarment when they cross ethical lines, and they can be sanctioned for frivolous claims or other abuses of the legal system. AI chatbots thus far have faced none of these constraints, effectively allowing AI providers such as OpenAI to free ride on the discipline and discretion of the legal industry without assuming equivalent responsibility.
While technology has long accelerated the ability to perform basic legal work more efficiently and accurately, large language models can now perform certain drafting, research, and analytical tasks once reserved exclusively for licensed lawyers.
AI can also now generate documents that appear legally sophisticated; however, these documents may contain significant errors, increasing both the risk to companies that rely on them without review by a licensed lawyer, and, as Nippon has experienced, the likelihood of needing to defend against frivolous or procedurally defective lawsuits.
Nippon’s Case
The central question in the Nippon case is whether OpenAI, by suggesting a legal strategy and generating documents to be filed in court, engaged in unauthorized practice of law.
As Nippon tells it: Graciela Dela Torre (then represented by counsel) and Nippon settled a pre-existing lawsuit regarding a denied insurance claim, followed by a release of all claims with prejudice. Dela Torre wished to reopen the case and sought advice from licensed counsel, who declined to do so and reminded her of the finality of the settlement agreement.
Dela Torre then sought guidance from ChatGPT, asking if she was being “gaslighted” by her lawyer; ChatGPT proposed strategies to challenge the settlement and generated legal documents that Dela Torre filed pro se with the court, involving the same issues that had been resolved in the settlement. Ultimately, Dela Torre filed 74 motions, subpoenas, notices, statements, memoranda, demands, petitions, and requests across two docketed lawsuits, each of which was drafted with the assistance of ChatGPT.
In Illinois, as in most jurisdictions, the practice of law includes the preparation of pleadings and related documents, the management of proceedings before judges and courts, the preparation of legal instruments of all kinds, and, in general, all advice to clients. Nippon alleges that ChatGPT was intentionally designed with features allowing users to acquire legal assistance, including legal research, legal analysis, legal advice, and the drafting of legal documents.
Protection
Companies should adopt a layered strategy to protect against AI-generated legal documents and lawsuits, such as:
- Implement corporate policies that restrict employees from seeking or following AI-generated legal advice, and require routing any legal questions through licensed in-house counsel. These policies should govern when and how AI tools may be used for internal drafting, mandate review and approval by company lawyers, and prohibit inputting confidential or privileged information into external AI systems without approved safeguards.
- Establish a litigation readiness program focused on AI-originated filings. This includes triage protocols for identifying AI-drafted pleadings, standardized response templates to address common defects, cost controls for rapid dismissal strategies, and escalation criteria for settlement or sanctions where appropriate, all under the direction of licensed counsel.
- Design technical controls that can flag or quarantine AI-generated legal content. Companies can use enterprise AI tools with logging, access controls, content classification, and prompts that steer users away from legal reliance.
- Reinforce vendor governance and contractual protections. Agreements with AI vendors and SaaS providers should include representations about prohibited legal advice functionality, commitments to implement detection and guardrails for legal queries, restrictions on training with company data, confidentiality obligations, audit rights, and indemnities tailored to unauthorized-practice or consumer-protection claims to the extent available.
- Conduct employee training to explain why AI outputs can be unreliable in legal matters, illustrate the risks of privilege waiver and confidential data leakage, and provide clear channels to engage in-house counsel promptly when legal issues arise.
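The content-classification control described above can be sketched in code. The following is a minimal, hypothetical pre-review filter built on a simple keyword heuristic; the term list, threshold, and function names are illustrative assumptions, not any vendor's actual API, and a real deployment would rely on enterprise classification tools and review by licensed counsel.

```python
# Minimal sketch of a keyword-based filter that flags potentially
# AI-generated legal content for routing to in-house counsel.
# The term list and threshold below are illustrative assumptions only.

LEGAL_TERMS = [
    "motion to", "pleading", "subpoena", "memorandum of law",
    "pro se", "settlement agreement", "injunction", "cause of action",
]

def flag_legal_content(text: str, threshold: int = 2) -> dict:
    """Count legal-sounding phrases; flag documents that meet a threshold."""
    lowered = text.lower()
    hits = [term for term in LEGAL_TERMS if term in lowered]
    return {
        "flagged": len(hits) >= threshold,  # route to counsel for review
        "matched_terms": hits,
    }

# Example: a draft containing multiple legal phrases is flagged
draft = "Attached is a motion to vacate the settlement agreement, filed pro se."
result = flag_legal_content(draft)
```

In practice, a heuristic like this would sit alongside the logging, access controls, and escalation channels described above, surfacing candidate documents for human legal review rather than making decisions on its own.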
Collaboration Between Stakeholders
The ease with which Dela Torre generated and filed legal documents with a court is a cautionary tale. Legal and technical stakeholders need to collaborate to reduce institutional risk while enabling responsible innovation.
Lawyers can author articles, ethics opinions, and research that help draw a clearer line between permissible information and impermissible legal advice and educate clients on the risks.
AI providers need to implement safeguards to protect themselves and their users from the unauthorized practice of law. This balanced approach recognizes that AI will continue to enhance efficiency and access to information, while preserving the essential function of licensed counsel to provide accountable, jurisdiction-specific legal advice.
As Nippon’s experience demonstrates, AI-generated legal guidance can precipitate extensive, costly litigation activity, burdening our legal system without delivering legal relief.
Until technology and regulatory frameworks mature, companies should pair the benefits of AI-enabled efficiencies with disciplined legal governance led by licensed attorneys, ensuring that innovation doesn’t outpace accountability.
The case is Nippon Life Insurance Company of America v. OpenAI Foundation, N.D. Ill., No. 1:26-cv-02448, complaint filed 3/4/26.
This article does not necessarily reflect the opinion of Bloomberg Industry Group, Inc., the publisher of Bloomberg Law, Bloomberg Tax, and Bloomberg Government, or its owners.
Author Information
Matthew Richardson is a partner in Brown Rudnick’s cybersecurity & data privacy and digital commerce groups.
Jase Panebianco is an associate in Brown Rudnick’s litigation & dispute resolution practice group.