

AI in Legal Practice: Use It, But Own the Work

April 15, 2026

Earlier this year in “AI-Generated Documents May Not Be Protected by Attorney-Client Privilege,” Kathryn Nadro and Elizabeth (Lisa) Vandesteeg discussed the decision in United States v. Heppner and its implications for attorney-client privilege when clients independently use AI tools. Judge Jed Rakoff of the U.S. District Court for the Southern District of New York concluded that materials generated through an AI platform were not protected by privilege because the defendant had entered information into a system that expressly disclaimed confidentiality.

A recent decision from the United States Court of Appeals for the Sixth Circuit highlights the other side of the issue: what can happen when attorneys rely on AI tools without adequately supervising the work those tools produce. The decision in United States v. Farris shows that the risks associated with AI are not limited to clients experimenting with these tools on their own. As numerous other courts have pointed out in recent years, lawyers who fail to critically review AI-generated work product can face serious consequences as well.

Some earlier incidents involving AI tools drew attention because the systems fabricated entirely nonexistent cases, an obvious warning sign that something had gone wrong. The problem in Farris was more subtle. The AI tool cited real cases, but the quotations attributed to those cases were fabricated or misrepresented the courts’ holdings. The citations appeared legitimate at first glance, but the authorities did not actually state what the attorney claimed they said. This occurred even though the attorney’s briefs were generated using Westlaw’s AI platform CoCounsel, which had been marketed to lawyers as avoiding the classic “hallucination” problem associated with generative AI systems.

The court’s first clue that something was wrong came from the file name on the submitted brief: “CoCounsel Skill Results.” The attorney had not even renamed the document before filing it with the court. A closer look revealed deeper problems. Several quotations attributed to judicial decisions did not appear in the cited opinions. In at least one instance, a case was presented as supporting the defendant’s argument even though the decision actually reached the opposite conclusion.

The attorney later acknowledged that a staff member uploaded documents into the AI system to generate a draft brief. He then edited the draft for several hours before filing it, but he did not independently verify the citations or quotations. The consequences were significant. The court removed the attorney from the case, denied compensation for his work, and referred the matter for potential disciplinary proceedings.

The Sixth Circuit’s message was clear: AI tools are not inherently problematic, but attorneys remain responsible for everything filed with a court. Whatever efficiencies AI offers, attorneys must still follow the same ethical and court rules that have bound them for decades.

When read together, Heppner and Farris illustrate two versions of the same failure. In Heppner, the client used AI to generate legal strategy without the attorney’s involvement. In Farris, the attorney used AI without meaningfully exercising legal judgment over the result. In both situations, no one with trained legal judgment was truly in control of the technology.

Neither decision stands for the proposition that AI has no place in legal practice. Both courts acknowledged the potential value of these tools. Used responsibly, AI can help lawyers accelerate drafting, identify relevant authorities, and synthesize complex information far more efficiently than traditional methods alone.

But responsible use requires something simple: lawyers must remain accountable for the work. As Heppner makes clear, that means ensuring client information is used only within confidential, attorney-directed systems rather than public AI tools that disclaim privacy protections or train external models. And as Farris illustrates, it means verifying every citation and legal proposition before anything is filed with a court.

The Farris holding underscores the risks of ignoring guidance on responsible AI use. In imposing sanctions, the court pointed, with notable frustration, to years of warnings from courts and other authorities about attorneys’ ethical obligations when using AI. It also emphasized the significant court time and resources wasted addressing the issue. The technology itself is not the risk. The absence of human judgment is.

If your legal team is using AI thoughtfully, verifying the output, protecting confidential information, and exercising independent judgment, that is simply good lawyering. Clients play a role in that process as well. Running attorney work product through outside AI platforms can undermine confidentiality just as easily as generating that material there in the first place.

AI may help produce the first draft. But the responsibility for the final product still belongs to the lawyer whose name is on the brief.

Questions about your firm’s approach to using AI in legal work? Reach out to Benjamin Altshul, Kathryn Nadro, or Ashley Roeser, or another member of the LP team.


Filed under: Cybersecurity, Corporate, Financial Services & Restructuring, Real Estate
