There's a specific kind of silence that descends on a managing partner when they realize a risk they didn't know existed has already materialized. It's not panic. It's something quieter and more dangerous, that particular stillness of comprehending that the exposure isn't hypothetical, it's structural, and it's been accumulating for months in the daily workflows of their own attorneys. That's the silence that should follow what two federal courts in the Southern District of New York have now made plain: your clients' most sensitive communications, their defense strategy, their privileged disclosures, their legal thinking, may already be sitting in OpenAI's servers, subject to compelled production in litigation your firm has no part in, under a preservation order your firm didn't know existed and couldn't have stopped.
The Preservation Order That Changed Everything
Here's what happened. On May 13, 2025, U.S. Magistrate Judge Ona T. Wang issued a pivotal preservation order in In re: OpenAI, Inc. compelling OpenAI to preserve all output log data that would otherwise be deleted, whether at the request of users or under international privacy laws. (Hodder Law) Let that land. User deletion requests, the assurance your associates and partners think protects them when they clear their chat history, became legally meaningless overnight. Under OpenAI's standard retention policy, deleted ChatGPT chats are permanently purged within 30 days. Under the court order, even chats users manually delete are being maintained on OpenAI's servers rather than automatically erased. (Harris Beach Murtha) This affects ChatGPT Free, Plus, Pro, and Team subscribers. ChatGPT Enterprise users received a narrow carve-out, but the attorneys and staff at most firms aren't using Enterprise. They're using whatever's convenient, whatever's fast, whatever gets the draft done before the deadline.
From Preservation to Production: 20 Million Logs
The story didn't end with preservation. It escalated directly into production. The news plaintiffs initially requested 120 million ChatGPT logs from the tens of billions OpenAI has preserved. OpenAI countered with 20 million, arguing that was "surely more than enough." The plaintiffs agreed. Then OpenAI changed course in October 2025, proposing to run keyword searches and produce only conversations that implicated plaintiffs' specific works. Judge Wang sided with the news outlets and denied reconsideration in December. (NatLawReview) Courts treat AI chats as discoverable business records, not privileged communications like attorney-client conversations. There is no special AI privilege. (Terms) Twenty million anonymized chat logs, pulled from the tens of billions preserved under that May order, are now headed toward plaintiffs' experts for forensic analysis. The critical word is "anonymized." The court considered it adequate protection. Your clients may disagree vigorously once they understand what it means for their most sensitive disclosures to travel through a de-identification process controlled by the company that holds the data, reviewed by experts working for adverse parties, under a protective order that has already been tested and challenged and still didn't stop production.
United States v. Heppner: AI Conversations Are Not Privileged
Then came United States v. Heppner, and the ground shifted again. On February 10, 2026, Judge Jed S. Rakoff of the Southern District of New York addressed what he called "a question of first impression nationwide" and ruled that written exchanges between a criminal defendant and the generative AI platform Claude were not protected by attorney-client privilege or the work product doctrine. (Harvard Law Review) The facts are worth understanding precisely, because they mirror what happens every day inside law firm walls. Bradley Heppner, charged with securities fraud, wire fraud, and falsification of records, had received a grand jury subpoena and retained counsel. Of his own volition and without attorney involvement, Heppner used Claude to prepare reports outlining his defense strategy and potential legal arguments, then shared those materials with his lawyers. (Chapman and Cutler LLP) The FBI seized the documents during a search of Heppner's home, and his attorneys logged them as privileged. The government moved to compel. Judge Rakoff granted the motion from the bench. The government argued, and Judge Rakoff agreed, that sharing privileged communications with a third-party AI platform may constitute a waiver of the privilege over the original attorney-client communications themselves. (Jones Walker LLP) His defense strategy, prepared in anticipation of indictment and with the intent of communicating it to counsel, became Exhibit A for the prosecution.
The Shadow AI Problem Inside Your Firm
Managing partners need to sit with the full architecture of that risk, because Heppner isn't just a story about a criminal defendant making a bad technological choice. Inputting confidential client information into public GenAI platforms may constitute disclosure to a third party, which can result in waiver of the attorney-client privilege, particularly if the information is not adequately protected or the terms of service allow the provider to retain or use the data. Many GenAI tools store user inputs for model training or quality improvement. Without disabling these default settings or using paid and enterprise versions with stronger privacy protections, attorneys risk involuntarily exposing privileged content. (Frantz Ward LLP) The ABA made this explicit in Formal Opinion 512, issued in July 2024, its first formal guidance on generative AI in legal practice, which requires attorneys to protect client information as a foundational ethical obligation regardless of what tool they're using to do the work. According to the 2024 ABA Legal Technology Survey, AI adoption among lawyers has nearly tripled, from 11% in 2023 to 30% in 2024, yet many firms still lack the infrastructure to manage the associated risks. Shadow AI usage is rampant. Associates use ChatGPT on personal devices, partners experiment with AI writing tools, and paralegals run quick spell-checks of confidential memos through online AI tools. (LeanLaw) That last one deserves particular attention. A paralegal spell-checking a confidential memo. The privileged information travels to OpenAI's servers. The memo lives in OpenAI's preservation hold. A court orders production of twenty million logs. No one at the firm knew it was happening.
The Terms of Service You Already Accepted
Anthropic's policy expressly states that user prompts and outputs may be disclosed to "governmental regulatory authorities" and used to train the AI model. OpenAI's privacy policy contains comparable provisions permitting data use for model training and disclosure in response to legal process. Both Anthropic and OpenAI use conversations from free and individual paid plans for model training by default. Users can opt out, but opting out of training does not eliminate the platforms' rights to disclose data to government authorities or in response to legal process. (Jones Walker LLP) This is the contractual reality your attorneys accepted when they created their accounts, clicked through the terms of service, and started working. OpenAI CEO Sam Altman has warned that ChatGPT conversations are not legally protected and can be used as evidence in court, acknowledging that OpenAI is legally required to retain user chats, including deleted ones, because of the preservation order. (Kang Haggerty LLC) The CEO of the company whose platform your attorneys are using to process client matters has publicly acknowledged that no legal confidentiality protections exist. He's called this a problem that needs to be addressed. He's described it as urgent. The urgency is warranted. The architectural fix he's waiting for doesn't exist yet in cloud AI. It exists in local AI infrastructure, where the prompts never leave the building, the logs are yours alone to control, and no federal preservation order can reach data that was never transmitted to a third party's servers in the first place. The question for every managing partner reading this is simple and serious: do you know what your attorneys typed into ChatGPT last week, and do you know where that text is living right now?
