Let’s talk about privilege, privacy, and AI-assisted legal review

Lawyers and legal teams are under intense pressure to move faster, and AI gives us some serious advantages. It can triage a document dump, summarize a messy fact pattern, draft a timeline, identify issues, and build a first-cut strategy with a prompt and a button click. Generative AI can feel like a superpower for that kind of work. In a lot of ways, it is.
This productivity boost does not mean an automatic green light, however. Attorney-client privilege is a confidentiality doctrine. If you disclose privileged information to a third party in a way that isn’t necessary to obtain legal advice (and isn’t protected by an appropriate confidentiality relationship), you can waive it. And in 2026, the “third party” isn’t just your colleague or your consultant or that person you accidentally cc’d on that email to counsel. It can be an AI vendor and the systems behind it.
A recent federal case in the Southern District of New York put this question into sharp focus, and it’s a useful (and slightly terrifying) reminder that privacy, vendor terms, and privilege are now entangled.
In United States v. Heppner, the Government sought a ruling that certain documents created by the defendant using an AI tool were not protected by attorney-client privilege or work product. The defendant generated the documents at issue by interacting with a third-party generative AI system (Anthropic’s Claude) and later sent those AI-generated materials to counsel.
On February 10, 2026, Judge Jed Rakoff ruled from the bench that the AI-generated documents were not privileged and were not protected as work product. He has not yet issued a written opinion, but the ruling (as summarized publicly) turned on a few practical points that matter far beyond this individual case: who created the materials, for what purpose, and whether the communications were actually confidential.
The Government’s briefing also flagged something many teams still underestimate: AI tool terms and privacy policies can undermine any “reasonable expectation” of confidentiality.
You don’t need to be in criminal litigation for this to matter. The reasoning maps cleanly onto common in-house scenarios, including privileged investigative notes, incident response communications, contract negotiation strategy, regulatory analysis, internal legal risk memos, and more.
Here are the core privilege pressure points that cases like Heppner bring to the surface.
A lot of people treat AI like a private notebook. They type, it answers, end of story. But from a privilege standpoint, the question becomes more complex. Did you disclose privileged information to a third party under circumstances consistent with confidentiality?
In Heppner, the Government argued (and the court reportedly agreed) that using a third-party AI tool did not preserve confidentiality, emphasizing the tool’s governing terms and the reality that the user is sharing information outside the privileged attorney-client relationship.
Practically speaking, if your AI workflow depends on a provider’s policies that allow retention, training, or disclosure, it becomes much harder to argue the communications were confidential.
Privilege is not a “forward-to-counsel to bless it” doctrine. If a communication is made to a third party first (without a structure that preserves privilege), sending the output to counsel afterward may not cure the problem.
That was essentially the posture at issue in Heppner. The documents were created via a third-party AI tool and later provided to counsel, and the Government argued that the step of sending them to lawyers did not retroactively confer privilege. Sure, maybe the lawyer’s response to the documents was privileged, but the underlying documents were not part of that shield.
Work product is different from attorney-client privilege, but it can run into similar problems. Work product generally turns on whether materials were prepared because of anticipated litigation and often whether they were prepared by or for counsel.
In Heppner, the court also rejected work product protection, focusing (as summarized) on the fact that the defendant created the materials on his own initiative and not at counsel’s direction.
Even when work product could apply, disclosure to third parties can still weaken protection depending on the circumstances. The lesson is the same. You need to treat AI as a real disclosure channel with real risk.
What makes AI different from a word processor isn’t just that it “generates.” It’s that you may be sending sensitive inputs and receiving sensitive outputs through a vendor system with its own retention, security, and disclosure rules.
In the Heppner briefing, the Government pointed to provisions suggesting the AI vendor could use prompts/outputs for purposes like maintaining and improving services, and could disclose information to third parties including public authorities or law enforcement in certain circumstances.
That matters because it’s not just privilege at stake:
• Privacy: prompts can contain personal data, sensitive personal data, incident details, employee information, customer data, trade secrets, and internal strategy.
• Security: prompts can embed system names, logs, architecture details, or vendor relationship maps.
• Discovery and subpoenas: if the provider stores the data (even briefly), it can be subject to legal process.
From a governance standpoint, a hard truth emerges: if you don’t know your AI vendor’s retention and disclosure posture, you don’t know your confidentiality posture.
Does all of this mean legal teams should stop using AI? Not at all. It does mean you should stop treating AI like a magic black box and start treating it as what it is:
• a third-party service provider,
• processing sensitive information,
• under contract terms that can be privilege-relevant, and
• likely sitting inside your broader privacy/security obligations.
The legal system has not yet created a special “AI privilege” to make this easier, and courts are likely to continue construing privilege narrowly. Legal commentary has been explicit that privileges are narrow carve-outs tied to human, duty-bound relationships, and AI tools generally do not carry the fiduciary duties, enforceable confidentiality, and professional accountability structures that underpin traditional privileges.
So if you want the benefits of AI without blowing a hole in privilege, you need to design the workflow to preserve confidentiality.
Here’s a practical, non-exhaustive set of guardrails legal teams can use to reduce risk. Think of this as “Resilience by Design” applied to privilege.
Many “consumer” AI experiences are not designed for privileged workflows. The safest direction (when you can’t avoid AI use) is typically to use enterprise-grade offerings or private deployments with contractual commitments around:
• no training on your content,
• limited or configurable retention,
• strong access controls,
• audit logging,
• a DPA and security addendum,
• breach notification obligations, and
• clear subprocessor visibility.
Even then, be sure to read the terms. Your legal position may depend on them.
Assume prompts and outputs can be sensitive records. Don’t paste raw privileged email chains, investigative memos, or incident communications when a de-identified summary would do.
A practical approach is to strip names and identifiers, abstract the facts into a hypothetical where feasible, and avoid uploading attachments unless the workflow is explicitly designed for it.
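To make the de-identification step concrete, here is a minimal, illustrative sketch of what automated scrubbing can look like before text reaches an AI tool. This is a rough heuristic, not a substitute for counsel-reviewed redaction practices; the patterns and placeholder names are assumptions for illustration, and real workflows should be tested against your own data.

```python
import re

def deidentify(text, known_names):
    """Replace known party names and common identifiers with neutral placeholders.

    Illustrative only: regex heuristics will miss many identifiers and should
    not be relied on as the sole control for privileged material.
    """
    # Redact email addresses first, so name substitution can't leave
    # a party's name behind inside an address.
    text = re.sub(r"[\w.+-]+@[\w-]+\.[\w.]+", "[EMAIL]", text)
    # Redact phone-like digit runs (8+ digits with optional separators).
    text = re.sub(r"\b(?:\d[-. ]?){7,}\d\b", "[PHONE]", text)
    # Replace each known party name with a stable placeholder.
    for i, name in enumerate(known_names, start=1):
        text = re.sub(re.escape(name), f"[PARTY_{i}]", text, flags=re.IGNORECASE)
    return text

prompt = "Jane Doe (jane.doe@acme.com) called Bob Smith at 555-123-4567 about the breach."
print(deidentify(prompt, ["Jane Doe", "Bob Smith"]))
# → [PARTY_1] ([EMAIL]) called [PARTY_2] at [PHONE] about the breach.
```

The stable placeholders ([PARTY_1], [PARTY_2]) keep the fact pattern usable as a hypothetical while keeping real identities out of the vendor’s systems.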
If you want the strongest argument that AI use is part of legal advice workflow, design it like other protected legal support functions:
• Make the activity counsel-directed and document the purpose.
• Use tools and vendors that contractually support confidentiality.
• Limit access and distribution.
• Keep outputs inside the privileged matter workspace.
This won’t guarantee privilege, but it moves you away from “random third-party disclosure” and toward “controlled legal support process.”
The Government briefing in Heppner leaned hard on terms and privacy policy language, and the court’s ruling (as summarized) emphasized the lack of confidentiality expectation.
So treat AI vendor diligence as a legal risk issue, not just a procurement checkbox. Ask:
• Do they retain prompts/outputs?
• Do they use them for training?
• What are their disclosure obligations to authorities?
• What security controls exist (SOC 2 Type II, ISO 27001, etc.)?
• Where is data processed (cross-border implications)?
• Who are subprocessors?
• Can you turn off logging/retention?
A common failure mode is that legal teams address privilege after the AI use has already happened.
Instead, do the following:
• Include AI usage guidance in legal/incident playbooks.
• Train teams on what not to paste into tools.
• Establish escalation rules (“If privileged, use approved tool only”).
• Create an audit trail of which tools are approved for what categories of work.
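The last two items above can be operationalized with something as simple as a registry that records every approval check. The sketch below is a minimal illustration, assuming hypothetical tool and category names; real deployments would back this with persistent storage and access controls.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ToolRegistry:
    """Maps work categories to approved tools and logs every check.

    Illustrative sketch; tool/category names are placeholders.
    """
    approvals: dict                       # category -> set of approved tool names
    audit_log: list = field(default_factory=list)

    def is_approved(self, tool, category):
        """Record the check in the audit log and return the decision."""
        allowed = tool in self.approvals.get(category, set())
        self.audit_log.append({
            "time": datetime.now(timezone.utc).isoformat(),
            "tool": tool,
            "category": category,
            "allowed": allowed,
        })
        return allowed

registry = ToolRegistry(approvals={
    "privileged": {"enterprise-ai"},             # approved enterprise deployment only
    "general": {"enterprise-ai", "consumer-ai"}, # broader set for non-privileged work
})

print(registry.is_approved("consumer-ai", "privileged"))  # → False
print(registry.is_approved("enterprise-ai", "privileged"))  # → True
```

The audit log is the point: when a dispute arises later, you can show which tool was approved for which category of work, and when the question was asked.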
The Heppner ruling is not just a courtroom curiosity. It’s a preview of a world where AI-assisted work can become discoverable, non-privileged, or privacy-risky if teams treat AI tools like private confessional booths or like junior associates on loan to the business unit.
Privilege has always required operational discipline. In the AI era, that discipline includes vendor terms, retention settings, approved workflows, and staff training.