OpenAI Inc. CEO Sam Altman made headlines when he said there’s no legal confidentiality for conversations with generative AI tools such as ChatGPT. This prompted warnings that lawyers should avoid using generative AI tools for fear that doing so might waive the attorney-client privilege.
Yet, this initial statement and the responses to it conflate two important but distinct questions about generative AI use and the longstanding attorney-client privilege.
The first is whether information individuals share with generative AI tools is protected from discovery or compelled disclosure. The second is whether confidential information that lawyers input into generative AI tools when representing their clients is protected by the privilege.
The answer to the first question is relatively straightforward. Even if responses from generative AI tools resemble legal advice, the user prompts that lead to those responses aren’t privileged because the initial communication isn’t made with an attorney.
This holds true regardless of how secure (or insecure) the tool may be.
All that matters from the attorney-client privilege perspective is that no lawyer is present in the communication. In other words, just as information written in a diary isn’t privileged because no person is receiving it, inputting information into a chatbot isn’t privileged either.
Of course, if companies such as OpenAI could persuade lawmakers or regulators to create a general AI privilege, then users might become more comfortable sharing confidential information with those tools, and that might boost the financial bottom line of the companies that produce them.
But that doesn’t mean the answer to the second question—whether lawyers can use generative AI tools when representing their clients while also protecting the attorney-client privilege—is the same.
Although the attorney-client privilege is lost when information is shared with unauthorized third parties (absent certain narrow exceptions), simply using a third-party technological tool that gains access to confidential client information doesn't categorically waive the privilege.
Instead, as courts and legislators have clarified in the context of email and cloud technology, the privilege is only waived if lawyers fail to take reasonable precautions to prevent disclosure, such as when the tools they use impermissibly disclose too much information to too many individuals too often.
Of course, there are situations in which lawyers can use generative AI tools in ways that would almost certainly constitute a waiver of attorney-client privilege—such as when ChatGPT conversations that were shared using the platform’s “share feature” were exposed publicly when Google indexed them.
But the idea that using any generative AI tool by definition waives the attorney-client privilege is a prediction that runs against a great deal of contrary precedent.
Ultimately, and perhaps counterintuitively, the real lesson from all of this might not be that lawyers can't or shouldn't use generative AI tools. Rather, the lesson should be this: individuals who want to protect their confidential information while still gaining the benefits of generative AI for legal questions should avoid using those tools themselves. Instead, they should hire lawyers, who can use the tools in ways that are more likely to keep that information protected.
This article does not necessarily reflect the opinion of Bloomberg Industry Group, Inc., the publisher of Bloomberg Law, Bloomberg Tax, and Bloomberg Government, or its owners.
Author Information
Jonah E. Perlin is professor of law, legal practice at Georgetown Law and senior fellow at the Georgetown Center for Ethics and the Legal Profession.