A landmark ruling in United States v. Heppner (February 2026) has officially drawn the line: Confidentiality doesn’t exist when you’re talking to a public AI.
The federal court ruled that a defendant’s conversations with the AI assistant, Claude, were not protected by legal privilege. This decision isn’t just about one criminal case—it’s a warning for every professional using generative AI.
The Case: United States v. Heppner
Bradley Heppner, facing federal fraud charges, used a public version of Claude to outline his legal defense and research strategy. When the government seized his devices, Heppner claimed these AI documents were protected by attorney-client privilege.
Judge Jed S. Rakoff disagreed. Here is why the court ruled against him:
The “Third-Party” Waiver: Confidentiality is the foundation of privilege. Because Heppner used a public AI platform, he agreed to Terms of Service that allowed the provider (Anthropic) to collect, review, and even share data with authorities. By clicking “Enter,” he effectively invited a third party into his legal strategy.
AI Is Not a Lawyer: Privilege requires a fiduciary relationship between a human client and a human attorney. The court was blunt: An AI tool cannot provide “legal advice,” and a conversation with a chatbot is legally no different from a Google search.
No Retroactive Shield: Heppner eventually sent his AI reports to his actual lawyers, but the court ruled that sending a non-privileged document to an attorney does not make it privileged.
How to Protect Your Professional Work
If your work involves sensitive data, trade secrets, or legal strategy, follow these “Heppner-Proof” rules:
- Audit Your Settings: If you use Claude Pro or ChatGPT Plus, go to your Privacy Settings and disable model training. This prevents your conversations from being used to improve future AI versions, though it doesn’t eliminate every legal risk.
- Use Enterprise-Grade Solutions: Consumer plans are not enough for sensitive legal or corporate work. Enterprise AI solutions come with Data Processing Agreements (DPAs) and contractual guarantees that your data is never reviewed or shared.
- The “Crowded Library” Rule: Never type anything into a public AI tool that you wouldn’t say out loud in a crowded room. If the information is meant for your attorney’s eyes only, keep it off the public platform.
The Bottom Line
The technology has changed, but the law hasn’t. Privilege depends on whom you share information with and how you protect it. Don’t let a chatbot be the reason you lose your legal protections.