Heppner Shows Attorney-Client Privilege’s Fragility in AI Era

March 10, 2026, 8:30 AM UTC

The Bottom Line

  • The ruling in United States v. Heppner concluded that a defendant’s chats with an artificial intelligence chatbot weren’t protected by attorney-client privilege.
  • The court’s reasoning extends beyond consumer services to some enterprise-grade services that allow nonlawyers to ask legal questions.
  • Businesses can mitigate these risks by implementing safeguards such as structuring legal chatbots as agents of counsel and having counsel oversee any use of AI in legal matters.

When FBI agents seized electronic devices from a Dallas mansion last November, they found 31 documents that the target had generated using Anthropic PBC’s Claude chatbot to prepare for meetings with his lawyers.

Judge Jed Rakoff of the US District Court for the Southern District of New York ruled in United States v. Heppner that the documents weren’t privileged—the first decision of its kind. Rakoff quoted approvingly from legal scholarship observing that all recognized privileges require “a trusting human relationship” with “a licensed professional who owes fiduciary duties and is subject to discipline,” a relationship that can’t exist between a user and an AI chatbot.

Early commentary has treated Heppner as a cautionary tale about consumer AI services, and the headline result is unsurprising: The defendant used a consumer tool whose terms expressly disclaimed any confidentiality. But the court’s reasoning reaches well beyond those facts. A close reading of the privacy terms at the leading AI labs reveals that several products marketed for “business” or “enterprise” use offer no more legal protection than the consumer services at issue in Heppner.

Companies that want to deploy AI for legal purposes must understand these risks and take concrete steps to preserve privilege.

The Decision

Attorneys for Bradley Heppner, the former CEO of a financial services company who is facing fraud charges, asserted privilege over his chats with Claude. They argued that Heppner created the chats “for the purpose of obtaining legal advice” and shared them with his attorneys—no different, in their view, from a client jotting down an agenda on a legal pad before meeting with counsel, which courts have long recognized as privileged.

Rakoff rejected the claim on three independent grounds. First, the chats weren’t communications between Heppner and his counsel. Claude isn’t an attorney, and “the discussion of legal issues between two non-attorneys is not protected by attorney-client privilege.”

Second, the communications weren’t confidential because Anthropic’s privacy policy allows the company to use chats to train its models and to disclose them to “governmental regulatory authorities” and other “third parties.”

Third, Heppner didn’t use the chatbot “for the purpose of obtaining legal advice,” because he used it on his own initiative, not at counsel’s direction. If counsel had directed the use, the court said it “might arguably” have functioned as a lawyer’s agent under the Kovel doctrine.

Work product protection also failed because the documents weren’t “prepared by or at the behest of counsel” and didn’t reflect counsel’s legal strategy.

Enterprise Privilege Risks

Enterprise AI services provide much stronger confidentiality protections, and use of such services wouldn’t constitute third-party disclosures under Heppner. The ruling is nevertheless significant for businesses for three reasons.

First, the court’s approach applies even to enterprise-grade AI services used by nonlawyers. Even where an enterprise service satisfies the confidentiality requirement, two other elements of privilege would still fail under the court’s reasoning: Heppner’s chats weren’t “attorney-client” communications because the AI isn’t an attorney, and they weren’t “for the purpose of obtaining legal advice” because the tool disclaims any ability to provide legal advice—a point the government proved by asking the chatbot and submitting its response to the court.

These arguments apply equally to nonlawyer use of AI in an enterprise setting. The decision thus carries important lessons for companies deploying AI services for nonlawyers to ask legal questions or submit requests to in-house counsel.

Second, the decision creates a privilege gap for separately represented executives. In internal and regulatory investigations, companies routinely arrange separate counsel for officers, directors, and senior executives whose interests might conflict with the company’s. But a separately represented executive can’t use a company-provisioned AI chatbot to prepare for meetings with personal counsel, because the company—not the executive—controls that data, including any privilege.

Under Heppner, the executive can’t use a consumer chatbot, either. This leaves executives who face investigation with no viable way to use AI in their own defense without risking privilege (though enterprise-grade tools provisioned by an executive’s own counsel may help fill the gap, as discussed below).

Third, the patchwork of privacy terms at AI labs creates practical litigation risks. The leading US AI labs maintain overlapping, frequently updated privacy policies, with distinct terms for consumer and enterprise services. An individual can access the same models through a consumer account with no privacy guarantees or through an enterprise account with robust protections.

Yet in Heppner, the government quoted from Anthropic’s consumer privacy policy without presenting documentary evidence establishing which terms actually governed this particular defendant’s account. The judge rejected the privilege claim before the defendant had any opportunity to submit an opposition brief.

This illustrates that companies using AI services from providers that offer consumer and enterprise tiers face an evidentiary burden: In fast-moving litigation, companies may need to prove on short notice which agreement governs a specific deployment. Using an enterprise-only provider with robust privacy protections for all customers gives a cleaner story that is easier to prove and harder for an adversary to muddy.

Consumer Privacy Gap

Rakoff’s ruling rested on the sharp contrast between what consumers might expect from AI chatbots and what the legal terms actually provide.

Consumers who have embraced AI chatbots for everything from getting health advice to navigating sensitive family matters may expect that paid AI services are at least as protective of their privacy as cloud email, which is often free.

In reality, consumer AI privacy policies from most mainstream providers don’t even mention the word “confidentiality” and offer no enforceable protection. AI labs reserve the right to disclose consumer chats to “government authorities, industry peers, or other third parties” “in [their] sole discretion” (OpenAI) or in any manner permitted by applicable law (Anthropic). Such terms allow the companies broad discretion to disclose chats even if the consumer opts out of model training.

Although Google permits disclosure to third parties only when “reasonably necessary to respond to any applicable law, regulation, legal process, or enforceable governmental request,” it still trains on consumer chats and warns users not to input “confidential information that you wouldn’t want a reviewer to see”—still enough to waive privilege over anything discussed with the bot.

Consumer cloud email and storage services enjoy much stronger protection. Courts have recognized a reasonable expectation of privacy in email, and the Stored Communications Act limits civil subpoenas to cloud providers and requires the government to obtain a warrant or court order before it can obtain emails.

Consumer chatbots likely fall outside this framework—as illustrated by the recent court order compelling OpenAI to produce millions of anonymized consumer chats to plaintiffs in copyright litigation, an order that likely would be prohibited if AI chats received the same statutory protection as email or cloud storage.

Companies should be careful to read the fine print, because the leading US AI labs all market products for enterprise, business, or professional use—such as Google’s Gemini Enterprise Starter edition, Anthropic’s Claude Team, or OpenAI’s ChatGPT Pro—that can be governed by consumer privacy policies. Some of these products are intended to be used by businesses with hundreds of employees. Under Rakoff’s reasoning, even use by in-house counsel of these “enterprise” tools might waive privilege.

Protecting Privilege

Despite these risks, companies can take concrete steps to deploy AI and obtain the efficiency and quality benefits it provides while preserving privilege.

Legal chatbots should be structured as “agents” of counsel. Under the Kovel doctrine, privilege can extend to communications with nonlawyer agents whose assistance is necessary to facilitate the attorney-client relationship, provided the agent operates under counsel’s direction and confidentiality is maintained.

Heppner leaves open the possibility that an enterprise AI tool deployed by in-house counsel as an interface for employee legal and compliance questions might qualify. To make a claim of privilege colorable, counsel must control the deployment of the AI tool, and summaries of legal queries should be sent to counsel for follow-up, so that the tool functions as an agent of counsel rather than merely software nonlawyers can use on their own.

Strict retention policies can further reduce discovery exposure. For example, companies could discard chats after 21 days unless the employee is subject to a litigation hold.
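To make the mechanics concrete, the following is a minimal sketch, in Python, of how a deployment might wire these two controls together: routing a summary of each legal query to counsel, and purging chats after the retention window unless a litigation hold applies. The 21-day window mirrors the example above; everything else (the ChatRecord fields, forward_to_counsel, purge_expired) is hypothetical rather than drawn from any actual product.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

# Hypothetical policy constant. The 21-day window mirrors the example
# above; counsel, not IT, should own this choice and the hold flag.
RETENTION_DAYS = 21

@dataclass
class ChatRecord:
    employee_id: str
    summary: str                      # short description of the legal query
    created_at: datetime              # timezone-aware (UTC)
    on_litigation_hold: bool = False  # set by counsel when a hold issues

def forward_to_counsel(record: ChatRecord) -> None:
    """Route a summary of each legal query to in-house counsel, so the
    tool operates under counsel's direction rather than as standalone
    software employees use on their own."""
    print(f"[to counsel] {record.employee_id}: {record.summary}")

def purge_expired(records: list[ChatRecord]) -> list[ChatRecord]:
    """Discard chats older than the retention window unless the
    employee is subject to a litigation hold."""
    cutoff = datetime.now(timezone.utc) - timedelta(days=RETENTION_DAYS)
    return [r for r in records
            if r.on_litigation_hold or r.created_at >= cutoff]
```

The design point is that counsel, not the IT function, receives the summaries and sets the hold flag; that operational record supports the argument that the tool acts as counsel’s agent.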

Counsel should “direct” the use of AI. Even if a chatbot can’t qualify as a Kovel agent (meaning nonlawyer chats with the bot aren’t privileged), Rakoff suggested that work product protection could still apply if counsel directs the client to use AI. Best practice now dictates that counsel instruct clients to use enterprise AI and share the outputs with counsel, laying the groundwork to assert work product protection over AI use in litigation contexts.

Even with these steps, companies should recognize that these boundaries are untested, and nonlawyer chats may not have protection. Employees should be trained to use AI tools primarily for routine matters and to discuss sensitive matters directly with counsel before using AI.

Provide separately represented executives with dedicated AI tools. When a company arranges independent counsel for an officer or director, that individual’s counsel should provision a separate enterprise-grade AI service for the officer’s use in connection with the legal matter.

To protect privilege, counsel should affirmatively direct clients to use designated enterprise-grade AI tools, document that instruction, select which documents and other materials are made available for the client to analyze using the tool, and ensure that all AI-generated outputs are shared with counsel. The record should be clear that AI-generated chats are integral to counsel’s representation and a means of helping counsel communicate more effectively with the client and prepare a defense.

Issue clear Upjohn-style notices. Companies that deploy AI chatbots to assist employees with legal or compliance questions should include a notice stating that the chatbot is a company tool, that the company holds any applicable privilege, and that employees shouldn’t use it for personal legal matters.

This reduces the risk that employees use the chatbot for personal legal questions, which wouldn’t be privileged. If an adversary in litigation discovered the employee’s chatbot use, it could result in discovery requests to the company, potentially requiring the company to search an individual’s chat history to segregate personal use from privileged corporate matters.

Setting clear boundaries also avoids the risk that employees use internal corporate chatbots to ask sensitive questions and later raise Upjohn-type challenges (claiming that they thought the chatbot was acting as their personal counsel) if the company wishes to produce that employee’s chats to government regulators as part of its cooperation. AI models can be instructed to display Upjohn-style reminders at the start of each session and to decline to answer queries that appear to involve an employee’s personal legal questions or exposure, redirecting the employee to seek independent counsel instead.
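As one illustration of how such guardrails might be configured, the sketch below pins an Upjohn-style notice as a system instruction for every session. The notice wording is a drafting placeholder, not vetted legal language, and the configuration shape is provider-agnostic rather than any particular vendor’s API.

```python
# Illustrative only: the notice text is a drafting placeholder, not
# vetted legal language, and the config shape is provider-agnostic
# rather than any specific vendor's API.
UPJOHN_SYSTEM_INSTRUCTION = """\
You are a company-provided legal intake assistant.
Begin every session by displaying this notice:
  "This chatbot is a company tool. Any applicable privilege belongs
   to the company, not to you. Do not use it for personal legal matters."
If a query appears to concern the employee's personal legal exposure
rather than the company's, do not answer it. Direct the employee to
seek independent counsel instead.
"""

def build_session_config() -> dict:
    """Assemble a session configuration that pins the Upjohn-style
    notice as a system instruction for every new chat."""
    return {
        "system_instruction": UPJOHN_SYSTEM_INSTRUCTION,
        "show_notice_on_start": True,  # hypothetical deployment flag
    }
```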

Provision enterprise-grade AI with a signed data processing agreement. The most basic requirement is that AI tools for legal use must operate under enterprise terms with a signed data processing agreement, a no-training covenant, explicit confidentiality commitments, zero data retention by the AI provider, appropriate internal retention rules, and notice requirements in case the provider receives a legal request for information. Consumer, “team,” “pro,” and “starter” tier products should be presumed inadequate for legal use.
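For teams operationalizing this checklist, a simple preflight check along the following lines could gate approval of any AI tool for legal use. The field names are hypothetical shorthand for the contract terms listed above, to be mapped to an actual DPA review.

```python
# Hypothetical preflight check encoding the contract terms listed above;
# the field names are shorthand to be mapped to an actual DPA review.
REQUIRED_TERMS = (
    "signed_dpa",
    "no_training_covenant",
    "confidentiality_commitment",
    "zero_provider_retention",
    "internal_retention_rules",
    "legal_request_notice",
)

def approve_for_legal_use(contract_terms: dict[str, bool]) -> bool:
    """Approve an AI tool for legal use only if every required enterprise
    term is present. Consumer, "team," "pro," and "starter" tiers should
    fail this check by default."""
    missing = [t for t in REQUIRED_TERMS if not contract_terms.get(t)]
    if missing:
        print("Rejected; missing terms:", ", ".join(missing))
        return False
    return True
```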

Outlook

Heppner is a narrow ruling on specific facts, but its reasoning illustrates how strictly courts will construe privilege in the age of AI. OpenAI CEO Sam Altman publicly called for “AI privilege” in June 2025, arguing that “talking to an AI should be like talking to a lawyer or a doctor.” As Heppner makes clear, that aspiration is far from the current legal reality.

For enterprises, the decision underscores the need to review AI deployment plans, verify governing terms, and implement the safeguards needed to preserve privilege in an AI-enabled workplace. The technology offers significant benefits in legal matters and isn’t fundamentally incompatible with privilege, but the legal terms must be right.

The case is United States v. Heppner, S.D.N.Y., No. 25 Cr. 503 (JSR), opinion filed 2/17/26.

This article does not necessarily reflect the opinion of Bloomberg Industry Group, Inc., the publisher of Bloomberg Law, Bloomberg Tax, and Bloomberg Government, or its owners.

Author Information

Jed Schwartz is a partner in the New York office of Milbank and a member of the firm’s litigation and arbitration group.

Yaakov Sheinfeld is a partner in the New York office of Milbank, the head of the firm’s technology committee, and a member of the firm’s real estate group.

John Hughes is a special counsel in the New York office of Milbank and a member of the firm’s litigation and arbitration group.

To contact the editors responsible for this story: Daniel Xu at dxu@bloombergindustry.com; Melanie Cohen at mcohen@bloombergindustry.com
