Heppner and Warner: Redefining Attorney-Client Privilege, Work Product Doctrine in the AI Era

By Dan Regard

March 6, 2026


Dan Regard is the CEO & Founder of Intelligent Discovery Solutions, Inc. (iDS). He helps companies solve legal disputes through the smart use of digital evidence. He is the author of “Fact Crashing™ Methodology” and is a contributing author to multiple other books on discovery and eDiscovery.

This is the tenth article of a 10-part series on how technology is transforming evidence, litigation, and dispute resolution. In this installment, we’ll examine two legal cases on AI making their way through the judicial system: United States v. Heppner and Warner v. Gilbarco. Both cases highlight the evolution of legal analysis and drafting, as well as attorney-client privilege and work product doctrine in the AI era.

The legal industry has been stepping on GenAI landmines for more than a year now. The early headlines were about hallucinations, fabricated citations, and confident but incorrect statements of law. Those mistakes were visible, often embarrassing, and easy to understand. At the same time, the market moved in the opposite direction. Legal GenAI tools became financial unicorns, and even the perception of disruption started reshaping law firm and vendor economics. In short: GenAI created obvious risks and obvious leverage at the same time.

A newer risk category is harder to spot and potentially more consequential: when GenAI is used for legal analysis and drafting, what happens to the attorney-client privilege and work product doctrine? 

This question has now surfaced in litigation through two opinions that are being discussed together: United States v. Heppner and Warner v. Gilbarco. They are often framed as “AI cases,” but that framing obscures the more practical point Judge Anthony Patti essentially put on the record in Warner: the anxiety is not about AI in particular; it is about writing environments and how modern tools change control, retention, and access.  

A tale of two cases

Heppner arose in a criminal setting, where a defendant used Anthropic’s Claude in connection with litigation preparation and the government sought permission to use AI-assisted writings found on seized devices. The Heppner court rejected claims of attorney-client privilege and work product. The reasoning emphasized that an AI tool is not a person, certainly not legal counsel, and that the defendant’s workflow did not satisfy the requirements for protection. Separately, Heppner focused on confidentiality and the absence of a reasonable expectation that the materials would remain private given how the account was configured and what Claude’s Consumer Terms of Service and Privacy Policy allowed.  

Warner arose in a civil setting, where a pro se plaintiff used ChatGPT in connection with litigation drafting and the defense tried to compel the prompts and related materials. The court denied the motion. The opinion reads like a reminder that the work product doctrine is not limited to attorney work. It is a protection for litigation preparation, and the actual language in the Federal Rules (FRCP 26(b)(3)(A)) describes it as attaching to materials prepared in anticipation of litigation by or for a party (or the party’s representative). The court also made the waiver distinction explicit: work product protection is not waived as easily as attorney-client privilege, and disclosure concerns generally turn on whether the material has been disclosed to an adversary or in a way likely to reach one.

For counsel advising clients, the most useful synthesis is not “one judge got AI right, and one judge got AI wrong.” The more durable takeaway is that these outcomes are driven by doctrine and by workflow facts, not by whether a model is named Claude or ChatGPT. Heppner is heavily framed through confidentiality and attorney involvement, and it reflects how fragile privilege can be when the writing environment is treated as open or shared. Warner is centered on work product doctrine and Sixth Circuit waiver principles, and it reflects how work product can survive even when a tool involves hypothetical third-party handling, so long as the disclosure is not to an adversary or likely to reach one.

These cases also help sharpen a practical boundary for day-to-day legal work. The hardest use case is the one both courts were dealing with: litigation analysis and drafting. That is where prompts and outputs can look like strategy, mental impressions, and case theories. By contrast, precedent case summaries can be materially less risky when they are one-way transformations of data you already produced or received. A summarization tool may still create logs, but the risk profile changes when the user is not inputting incremental legal theories, framing, and argument structures into a third-party system.

So, what is the answer for outside counsel? Not whether to use GenAI, but how to use it in a way that matches the protection you intend to claim. The safest approach is to treat GenAI less like a “chat” and more like a writing environment that generates a discoverable footprint. Once you do that, the solutions become familiar. They look like privilege hygiene, vendor management, and information governance now applied to the newest tool in the stack.

The four problem areas

There are four main problem areas, and each has an answer:

  • Legal advice: Role clarity and attorney supervision (AI is not counsel; counsel directed workflow)
  • Confidentiality and waiver: Third party exposure and waiver risk (terms, settings, adversary pathway)
  • Content management: Know the data footprint (retention, logs, custodians, preservation and collection)
  • Hallucinations: Accuracy and quality control (verification, cite checking, human review)

The solutions

The first problem is that GenAI is not a lawyer and cannot be allowed to masquerade as one. The legal fix is to design workflows that preserve the attorney’s role as the source of legal advice and legal judgment. This is the “Attorney-In-The-Loop” solution. 

That does not mean counsel must watch someone type. It means the use of the tool should be nested inside counsel’s direction and review when the work involves legal theories, claims and defenses, and the mental impressions that are the very point of work product protection. The technical fix is to treat the system configuration as part of the legal workflow.

The second problem is sharing. Attorney-client privilege and the work product doctrine are exceptions, and those exceptions can be fragile. The legal fix is to assume that any disclosure to third parties might waive privilege and to structure the tool’s use accordingly, including vendor terms where appropriate and clear internal policies on what categories of legal content may be put into which systems. The technical fix is to use enterprise-grade environments where training on your data is disabled or contractually prohibited, access is role-based (or user-based), and retention is known and configurable. What matters is not whether a subscription is “paid,” but whether the environment is designed to keep client strategy and facts out of third-party reuse and out of casual internal access.

The third problem is content management. In practice, most organizations do not yet have an answer to a simple question: where do your prompts live? If a dispute later asks for GenAI records, what exists, where is it stored, and who controls it? The legal fix is to treat AI outputs and prompts as potential electronically stored information (ESI) and to establish governance rules that match the stakes, including retention, deletion, and export into privileged repositories where appropriate. The technical fix is to map the system like any other data environment: browser versus desktop versus mobile; fat client versus thin client; user workspace versus enterprise workspace; Application Programming Interface (API) versus interactive interface; logs and telemetry; backup and retention; admin access and audit trails. The point is not to eliminate the footprint. The point is to understand it so you can manage it.

The fourth problem is hallucinations and false authority. The legal fix is to treat GenAI as a drafting accelerator, not a truth engine, and to impose a verification discipline appropriate to the filing. When outputs become pleadings, those outputs need the same quality control a junior associate’s draft would get: cite checking, quotation checking, and confirmation of the governing rule in the relevant jurisdiction. The technical fix is to pick tools and settings that reduce error rates, but more importantly to build quality control into the workflow, because no model will ever take the risk to zero.

Conclusion

One year ago, in-house teams were prohibiting outside counsel from using GenAI. Today, they are demanding it and asking firms to explain how it will reduce cost and improve speed.

The opportunity for outside counsel is to lead clients away from both blanket prohibition and reckless adoption and toward informed, defensible use. The judge’s line in Warner is worth taking seriously: the issue is not AI; it is all modern writing environments and how our ever-changing technology stack affects confidentiality, retention, and sharing. As these tools evolve rapidly, the facts will continue to shift, and the best approach will be the kind that can adapt: doctrine-led, workflow-aware, and technically precise.

Closing thoughts: Join the conversation

This is just one piece of the bigger conversation on the future of evidence. As legal professionals, we need to stay on top of emerging technologies.

Let’s continue the discussion on this LinkedIn post.
