Generative artificial intelligence has crossed the threshold from novelty to everyday tool. From the palms of their hands, personal injury claimants are asking chatbots how much their cases are worth, while stressed debtors seek advice on which type of bankruptcy to file.
The answers they receive feel well researched and authoritative. But when AI-generated misinformation affects life-altering decisions, the damage can be swift and sometimes irreversible.
Legal professionals bear a responsibility to protect clients increasingly influenced by fast, cheap, and often dangerous guidance.
Wake-Up Call
Earlier this year, three Morgan & Morgan attorneys were sanctioned by a federal judge in Wyoming for filing a motion that included multiple AI-generated citations of non-existent cases. They were found in violation of Federal Rule of Civil Procedure 11 for failing to conduct a reasonable inquiry into the facts and law before submitting their motion.
If the firm that bills itself as America’s largest personal injury law firm can be tripped up by AI hallucinations, imagine the risk for armchair attorneys. As injury victims and cash-strapped families turn to online resources for help, they are exposed to information that may be outdated, incomplete, or simply invented.
For clients navigating the devastation of a personal injury or financial collapse, the promise of quick, affordable legal guidance is tempting. But without proper context or oversight, they are often led straight into making serious mistakes.
They Develop Unrealistic Expectations. AI-generated content often pulls from verdicts that have generated headlines and high settlements. Without proper context, clients may believe their case is worth far more than it realistically is—or that their path to relief will be straightforward and fast.
As a result, personal injury victims could reject fair settlement offers, while debtors may forge ahead with a bankruptcy filing for which they’re ineligible. They may pursue actions that ultimately set them back financially, legally, and emotionally because of incomplete or misleading chatbot responses.
They React to Inaccurate Data. Large language models are trained on the data they’re fed; they aren’t trained to practice law. Without current, accurate information, they can deliver faulty advice.
Case circumstances—such as types of property protected by bankruptcy, statutes of limitations on personal injury claim filings, or thresholds that determine whether a case qualifies for small claims, arbitration, or full litigation—frequently hinge on current, precise, jurisdiction-specific details that AI tools either mishandle or overlook entirely.
Even more concerning, AI often fabricates information outright. Clients who rely on invented accounts of successful cases may pursue strategies that imperil their own.
They Misunderstand Their Rights and Obligations. AI tends to oversimplify or misrepresent critical aspects of the law. Vague phrasing regarding filing deadlines, documentation, or eligibility criteria can create a dangerously incomplete picture of how clients should proceed.
For instance, a California bankruptcy client might be dissuaded from pursuing bankruptcy if AI only alerts them to the nominal federal homestead exemption instead of the state’s more generous allowances.
By receiving incomplete but confident-sounding information from chatbots, clients risk giving up rights they didn’t know they had—or overlooking key obligations that could result in dismissed cases, lost claims, or penalties.
Responsible Legal Leadership
While attorneys can’t stop clients from using AI, they can guide them safely through its obstacles. These strategies can help legal professionals protect their clients, manage risk, and lead responsibly in an era of pervasive technology.
Proactively Educate Clients. Use your first interactions with clients, whether formal consultations or visits to your homepage or social media sites, to call out AI concerns before they take root. Explain the limits of chatbots and the risks associated with relying on AI advice.
Set Clear Expectations. Include disclaimers in your client agreements stating that while AI tools may be used for efficiency, legal decisions are always made and vetted by licensed attorneys. Advise your clients not to rely on outside tools, including AI chatbots, for legal advice without consulting your firm.
Integrate Additional Human Oversight. If you use AI to conduct research, generate correspondence, or draft documents, ensure that nothing goes out the door without thorough human review. Build transparency throughout your practice to set the example that even professionals treat AI as a tool and not an authority.
Favor Curated Resources Over Chatbots. Chatbots have their place for answering administrative questions or directing clients toward vetted resources. Beyond that, the information you provide on your website should be curated for accuracy and under your strict control, not transmitted by a virtual assistant.
Push for Protective Policies. Lobby for sensible guardrails, such as disclosure rules for AI-generated legal content or standards governing how chatbots market to prospective litigants. AI isn’t going away, but lawyers can help shape the generative space so it better serves the people most vulnerable to its misuse.
As AI continues to shape how clients seek and interpret legal information, the risks will only grow. Lawyers have the opportunity and the responsibility to intervene on behalf of vulnerable litigants and protect the integrity of the legal process.
This article does not necessarily reflect the opinion of Bloomberg Industry Group, Inc., the publisher of Bloomberg Law, Bloomberg Tax, and Bloomberg Government, or its owners.
Author Information
Julie J. Villalobos is the founder of OakTree Law, a Southern California law firm specializing in bankruptcy and personal injury.