AI-Generated Documents May Not Be Protected by Attorney-Client Privilege
February 25, 2026
In what appears to be the first decision of its kind, a federal judge has ruled that documents a client prepared using a commercial AI tool and then shared with his attorney are not shielded by attorney-client privilege or the work product doctrine. The February 10, 2026, ruling by Judge Jed Rakoff of the U.S. District Court for the Southern District of New York signals a significant and largely uncharted risk for clients who turn to AI tools while navigating legal proceedings.
The case centers on Bradley Heppner, who was arrested in November 2025 on charges of securities and wire fraud. Federal agents searching his home seized electronic devices containing approximately 31 documents that Heppner had generated using Anthropic’s AI tool Claude. After retaining legal counsel and receiving a grand jury subpoena, Heppner had used Claude on his own initiative to prepare reports outlining his defense strategy and potential legal arguments. He later transmitted those reports to his attorneys.
His defense team argued the documents were privileged, describing them as a means of consolidating the client’s thoughts for the purpose of communicating with counsel. The government pushed back, contending that sharing information with a commercial AI tool, which operates under terms of service explicitly disclaiming user confidentiality, constitutes a disclosure to a third party that destroys any claim of privilege.
Judge Rakoff agreed with the government. Ruling from the bench, he held that the attorney-client privilege did not apply because Heppner had shared the communications with an AI tool that did not maintain confidentiality. The applicable Anthropic privacy policy noted that user prompts could be used to train its model and might be disclosed to government authorities and third parties, a provision the court found fatal to the privilege claim.
The work product doctrine fared no better. That doctrine protects materials prepared by or at the direction of legal counsel in anticipation of litigation. Because Heppner acted on his own initiative, not at his lawyers’ direction, and because neither Heppner nor the AI tool constitutes legal counsel, Judge Rakoff declined to extend that protection as well.
The decision has immediate practical implications for anyone using AI tools in connection with legal matters. Legal observers suggest that enterprise-grade AI tools, which contractually commit not to train on user inputs and to maintain input confidentiality, may be viewed differently by courts and could better support privilege claims, though that remains untested. In the meantime, clients and non-lawyers assisting counsel should make clear in their AI prompts that they are acting at a lawyer's direction, and privilege logs should explicitly identify the AI tool used and the basis for any expectation of confidentiality.
Judge Rakoff has not yet issued a written opinion, but the bench ruling alone is likely to prompt law firms and their clients to revisit their AI usage policies, particularly the choice between consumer and enterprise AI platforms, before the next high-stakes matter lands in court.
Questions about how this ruling may impact you? Reach out to Lisa Vandesteeg, Kathryn Nadro, or another member of LP’s AI & Technology Team.