Legal Intelligencer: Discovery Risks of ChatGPT and Other AI Platforms

In the August 21, 2025 edition of The Legal Intelligencer, Kelly Lavelle writes, “Discovery Risks of ChatGPT and Other AI Platforms.”

OpenAI CEO Sam Altman recently warned that ChatGPT conversations are not legally protected and can be used as evidence in court. Speaking on a podcast, Altman acknowledged that OpenAI is legally required to retain user chats, including deleted ones, because of a court order discussed later in this article. Comparing AI conversations to those with doctors, lawyers, or therapists, Altman argued that similar confidentiality protections should exist but currently do not. Until they do, sensitive exchanges with public AI tools remain fully exposed to discovery, a gap he described as needing to be addressed with urgency.

The use of AI tools like ChatGPT and Claude has created new issues for the discovery process. Lawyers must recognize that AI queries and outputs may qualify as electronically stored information (ESI) under both federal and state discovery rules. As AI technology becomes more integrated into legal practice, discovery requests are beginning to target the use of these technologies, seeking access to AI-generated documents, search histories, and communication logs.

Many users view AI tools as private assistants rather than potential witnesses. In practice, however, tools like ChatGPT and Claude can inadvertently expose sensitive information, including legal strategies and privileged facts. Third-party AI platforms can be compelled to produce records during litigation, potentially compromising confidentiality and privilege, and many platforms keep detailed logs of prompts and generated output, often tied to user accounts. A client who has used these tools to draft documents or summarize confidential facts may be surprised to learn that those entries can be discoverable. This misunderstanding creates serious risk.
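To make the exposure concrete, the sketch below imagines the kind of record a platform might retain for each exchange. It is a minimal, hypothetical illustration: the field names and structure are assumptions for discussion, not any vendor's documented logging schema.

```python
# Hypothetical sketch only: field names and structure are assumed for
# illustration, not drawn from any vendor's actual logging schema.
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

@dataclass
class RetainedChatRecord:
    account_email: str    # ties the exchange to an identifiable user
    conversation_id: str  # groups related prompts into one thread
    timestamp: str        # when the exchange occurred
    prompt: str           # the user's full input, verbatim
    output: str           # the model's full response, verbatim

record = RetainedChatRecord(
    account_email="associate@examplefirm.com",
    conversation_id="conv-2025-001",
    timestamp=datetime.now(timezone.utc).isoformat(),
    prompt="Summarize the weaknesses in our client's defense.",
    output="Based on the facts provided, the main weaknesses are ...",
)

# A document request or subpoena served on the platform could reach
# records like this one, account identity and verbatim strategy included.
print(json.dumps(asdict(record), indent=2))
```

Even a record this minimal would place a verbatim statement of legal strategy, tied to a named account, in a third party's hands.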

Several recent cases illustrate the discovery concerns associated with AI use. In some instances, AI-related ESI has been sought in discovery, testing both privilege and work product protections and underscoring the need for protective measures. Whether ChatGPT searches are discoverable, and whether they are privileged, may depend on the context in which the searches were conducted and the privileges that apply.

In some cases, courts have found that AI prompts and outputs, particularly when drafted by counsel, can constitute attorney work product. In Tremblay v. OpenAI, No. 23-cv-03223-AMO, 2024 WL 3748003, at *2-3 (N.D. Cal. Aug. 8, 2024), the court held that prompts created by attorneys reflected their mental impressions and opinions, making them work product rather than mere factual material. Applying the same reasoning, a California district court recently concluded that certain prompts, settings, and outputs from the Claude AI model were likewise protected, rejecting the argument that such materials were automatically discoverable. See Concord Music Group v. Anthropic PBC, 2025 WL 1482734, at *1 (N.D. Cal. May 23, 2025). The court emphasized that the materials were generated during counsel’s investigative process and therefore qualified for work product protection. It also recognized, however, that work product protection can be waived when a party relies on such materials in pleadings or motions. In Concord, the plaintiffs had used specific prompts and outputs in their complaint and preliminary injunction filings, relying on nearly 5,000 prompt-output records. Under the “fairness principle,” that reliance created a limited waiver. But the court refused to extend the waiver to all prompts, settings, and outputs, finding that discovery requests for every AI interaction were overreaching: neither “closely tailored” to the opposing party’s legitimate needs nor limited to what the fairness principle requires.

Perhaps the most sweeping example is the ongoing New York Times v. OpenAI lawsuit, which alleges that OpenAI unlawfully used millions of Times articles to train its AI models, including ChatGPT. In connection with that case, on May 13, 2025, Magistrate Judge Ona T. Wang ordered OpenAI to “preserve and segregate all output log data that would otherwise be deleted on a going forward basis,” a directive affecting hundreds of millions of ChatGPT users. See In re OpenAI Copyright Infringement Litigation (relating to The New York Times v. Microsoft, 23-cv-11195), No. 25-md-3143 (SHS) (OTW), ECF No. 551, at 2 (S.D.N.Y. May 13, 2025). OpenAI objected, arguing that the order forced it to “disregard legal, contractual, regulatory, and ethical commitments” and to retain up to 60 billion conversations, of which the plaintiffs estimated only 0.006%, roughly 3.6 million, might be relevant. See OpenAI Defs.’ Supplemental Opp’n to News Pls.’ Mot. Regarding Output Logs, No. 25-md-3143 (SHS) (OTW), ECF No. 578 (S.D.N.Y. May 23, 2025). The district judge rejected those arguments, affirming the order and noting that OpenAI’s own terms of use allowed preservation for legal requirements. See Order, No. 25-md-3143 (SHS) (OTW), ECF No. 712 (S.D.N.Y. June 26, 2025). Although the court later excluded certain categories of data, the case shows how even deleted AI data can become subject to preservation and discovery.

Lawyers should address these risks at the outset, advising clients that their interactions with AI tools may be discoverable and must be treated accordingly. That means instructing clients not to input privileged information or confidential strategy into public or unsecured AI platforms. Clients should understand that the same caution they would apply to an email applies equally to an AI prompt.

Further, the integration of AI tools like ChatGPT into legal practice requires careful consideration of discovery obligations. Firms should adopt clear policies governing the use of AI tools for litigation tasks, addressing acceptable uses, data protection measures, and procedures for handling AI-generated content.

Kelly A. Lavelle is an associate at Kang Haggerty. She focuses on e-discovery and information management, from preservation and collection to review and production of large volumes of electronically stored information. Contact her at klavelle@kanghaggerty.com.

Reprinted with permission from the August 21, 2025 edition of “The Legal Intelligencer” © 2025 ALM Global, LLC. All rights reserved. Further duplication without permission is prohibited. Request academic re-use from www.copyright.com. All other uses, submit a request to asset-and-logo-licensing@alm.com. For more information visit Asset & Logo Licensing.