Key points:
- Judge Jed Rakoff (SDNY) ruled that documents generated via the AI platform Claude are protected by neither attorney-client privilege nor the work product doctrine — a first-of-its-kind decision nationwide
- Three independent grounds doomed the privilege claim: AI is not an attorney, communications with public AI platforms are not confidential under their privacy policies, and the client used the tool on his own initiative rather than at counsel's direction
- In-house legal teams must establish clear AI usage policies immediately — inputting privileged information into public AI tools may constitute a waiver, and no subsequent sharing with counsel can retroactively restore that protection
A federal judge in Manhattan has issued what appears to be the first ruling of its kind in the United States: documents created by a criminal defendant using the AI platform Claude are protected by neither the attorney-client privilege nor the work product doctrine. The February 17, 2026 decision by Judge Jed S. Rakoff of the Southern District of New York in United States v. Heppner carries immediate and far-reaching consequences for corporate legal departments and their outside counsel.
Bradley Heppner, a corporate executive charged with securities and wire fraud, received a grand jury subpoena and, on his own initiative, used Claude to generate approximately 31 documents — including draft defense strategies and legal arguments — in anticipation of indictment. FBI agents seized those documents during a search of his home. His defense counsel asserted privilege, arguing Heppner had used Claude to process information received from counsel and later shared the outputs with his legal team.
Judge Rakoff rejected the privilege claim on three independent grounds. First, the court found that Claude is not a lawyer. Recognized privileges require a trusting human relationship with a licensed professional subject to professional discipline and fiduciary duties — a relationship that cannot exist between a user and an AI platform. Second, the court held that communications with Claude were not confidential: Anthropic's privacy policy explicitly permits the collection and disclosure of user inputs and AI outputs to governmental and regulatory authorities, meaning users have no substantial privacy interest in those exchanges. Third, Heppner used Claude of his own volition, not at his attorney's direction; what matters for privilege is that he sought legal advice from Claude itself, not whether he subsequently shared Claude's outputs with counsel.
The work product doctrine fared no better. Even assuming the documents were prepared in anticipation of litigation, defense counsel confirmed they were not created at counsel's direction and did not reflect counsel's legal strategy at the time of creation. The court reaffirmed that work product protection exists to preserve a zone of privacy in which a lawyer prepares and develops legal theories — a purpose not served by a client's independent AI-assisted research.
As commentators at Husch Blackwell have noted, the ruling draws a bright line for the AI era: a client's chats with public generative AI tools are simply not privileged, regardless of the legal content of those chats. The implications extend beyond criminal defense — any employee at a corporation who queries a public AI platform about an internal legal matter may be exposing the company's privileged information without knowing it.
Judge Rakoff was direct in his conclusion: "AI's novelty does not mean that its use is not subject to longstanding legal principles, such as those governing the attorney-client privilege and the work product doctrine." Bloomberg Law has flagged that the open question now is whether the same analysis applies when attorneys — not just clients — use public AI platforms in their own legal work.
For legal teams, the practical takeaways are urgent:
- Privilege cannot be created retroactively — sharing non-privileged AI-generated documents with counsel after the fact does not render them protected.
- AI platforms disclaim providing legal advice and recommend consulting qualified attorneys; courts will hold clients to that disclaimer.
- Terms of service for public AI tools commonly reserve the right to disclose user data to government authorities; users are deemed on notice of this.
- For work product protection to apply, AI use must be directed by counsel and must reflect counsel's developing legal strategy.
Corporate legal departments that have not yet implemented formal AI usage policies should treat this ruling as a catalyst for immediate action. The risk is not theoretical: as Heppner demonstrates, documents generated through routine AI queries can end up in prosecutors' hands.