It is 9:23 p.m. on a Sunday. Your SVP of Engineering is preparing for a hard Monday. He has an engineer he needs to manage out. Performance has slipped for two quarters, the team has lost confidence, and HR wants a paper trail before anyone is walked to the door. He opens Claude on his laptop, signs into his Claude work account, and starts typing.
He types that the engineer has two kids under five. He types that the engineer disclosed a mental health condition to HR four months ago. He types that the engineer recently asked about intermittent FMLA leave. He types that he, the SVP, is worried about a wrongful termination claim, and he wants the AI’s help documenting the performance issues “in a defensible way.” He pastes in three months of Slack messages, the last two 1:1 notes, and a Jira export. He asks for a Performance Improvement Plan. He asks for talking points. He asks how to sequence the termination.
Thirty seconds later he has it. He copies the PIP into a Google Doc, sends it to HR, closes the tab, and goes to bed.
Two months later a plaintiff’s lawyer serves you with a discovery request. “All communications, documents, electronically stored information, and data compilations referring or relating to the termination of [engineer].” The request names AI platforms by brand. Your outside counsel forwards the list to IT. IT forwards it to the SVP. The SVP remembers the Sunday-night chat.
This is the story of every company in 2026. It is happening right now at yours.
What just happened
Three things worth naming.
A document was created. The chat transcript exists. It sits on a server with timestamps, account metadata, and IP logs. It is not a thought. It is not a whiteboard session. It is an electronically stored document authored by a company officer on a subject the company is now being sued over.
The protected-class facts and the termination plan are linked in a single record. In the same transcript, in the same session, under the same timestamp, your SVP wrote down a disability disclosure, a recent FMLA inquiry, and a plan to end employment. A plaintiff’s lawyer dreams of that exhibit. They do not have to prove the inference. The document proves it on its own.
The chat is discoverable, and it is not privileged. Federal Rule of Civil Procedure 34 sweeps ESI broadly. “Data or data compilations stored in any medium” is the phrase. Chat transcripts are no different from emails, Slack messages, Teams DMs, or texts. The AI provider is a third party, not your lawyer. The “brainstorm” frame does not create a privilege that the law does not recognize.
This is a competent leader trying to do a hard job carefully at 9:23 p.m. on a Sunday. The interface looks private. None of it is.1
Discovery does not care what you called it
A “chat” is a document the minute it is typed. The case law on ESI has been stable for fifteen years. Format is irrelevant. Container is irrelevant. Intent at the moment of creation is irrelevant. If it is stored, it is producible.
The provider-side mechanics are not complicated either. A litigant serves a document request on the company. The company, under its preservation obligations, must preserve and produce relevant ESI, including transcripts from any AI platform the company has licensed. If the account is personal, the litigant subpoenas the provider directly, and providers respond to lawful process. Retention varies. Zero-retention configurations help. They do not help for conversations already saved, projects already created, or logs already written to a disaster-recovery tier.
Your internal “we delete everything after 30 days” policy is not a get-out-of-discovery card. The moment litigation is reasonably anticipated, a litigation hold attaches, and routine deletion becomes spoliation. Courts sanction for that.
Four ways an AI chat becomes a plaintiff’s exhibit
The damaging admission. People type things into a conversational AI they would never put in an email. The interface feels private, ephemeral, consequence-free. It is none of those things. “I want to manage him out” in an email draft gets edited before it is sent. “I want to manage him out” in a Claude prompt is saved as written, the first time, forever.
The protected-class adjacency. Litigation-risk exposure is rarely a single line. It is the juxtaposition. In one prompt, your manager named a protected characteristic and the adverse employment action in the same paragraph. No amount of pretext-building downstream unwinds that paragraph once it is in a production set.
The contemporaneous record. Emails are narrow and deliberate. People draft them, reread them, cut the line that sounds bad, and send a version they are willing to see again. A chat transcript is none of those things. It is thinking out loud, keystroke by keystroke, timestamped to the second, preserved exactly as typed. A Sunday-night AI session two weeks before the termination, with the manager articulating the plan in his own unedited words, is the most devastating kind of contemporaneous evidence in an employment case. Juries understand timestamps. They understand raw transcripts even more.
The non-privileged third party. Attorney-client privilege protects communications between lawyer and client made in confidence for the purpose of legal advice. Claude is not your lawyer. ChatGPT is not your lawyer. Disclosure of otherwise-privileged material to a non-agent third party waives privilege on the subject matter. Even inside a law firm, using a consumer-grade AI without a privilege-preserving contractual and technical posture is a colorable waiver argument that adversaries will make.
Five scenarios happening right now at your company
Performance management is the obvious one. It is not the only one.
1. The severance brainstorm. A department head drafts a separation pitch in Claude, naming the employee’s age, tenure, stock vesting, and recent complaints to HR about a supervisor. The transcript surfaces in a retaliation claim.
2. The internal investigation thought partner. An HR leader pastes anonymized (but re-identifiable) harassment allegations into an AI to “think through next steps.” The pasted narrative matches the complainant’s sworn statement. The chat shows the company’s mental model of the allegation before the investigation ever began.
3. The board-member bad-behavior triage. A GC uses an AI to stress-test whether a director’s conduct requires disclosure, pasting in the underlying facts and generating pro/con arguments. The chat is later subpoenaed in a securities claim. The pro/con list becomes the roadmap of what the company considered and chose not to disclose.
4. The EEOC or regulator response prep. A manager drafts a response to a charge in a consumer AI, including the company’s candid view of the facts. The charge goes to litigation. The candid version, not the filed version, is what the plaintiff wants, and exactly what the plaintiff asks for.
5. The deal-side disparate-impact analysis. Corp-dev uses an AI to model layoff scenarios after a transaction, feeding in headcount data with demographic fields. The scenarios are run before counsel is involved. The chat shows the disparate impact was foreseen, analyzed, and accepted anyway. That is not the fact pattern you want on the other side of a class certification motion.
None of these are edge cases. All of them are happening weekly inside every mid-sized company with AI on employee laptops.
If your company is heading toward litigation, the AI prompt history is among the first things plaintiffs' counsel will request.
Privilege is a trap, not a shield
Three specific misunderstandings worth clearing up.
“I was just talking to the AI like a lawyer.” Privilege requires a lawyer. It also requires a lawyer acting in a professional capacity, on the client’s behalf, for the purpose of providing legal advice. The AI satisfies none of those elements. Consulting the AI is not consulting counsel, even if the advice comes back in lawyerly prose.
“My lawyer used the AI, so it is covered.” The Kovel doctrine can extend privilege to a third party (classically an accountant) retained by counsel to help render legal advice, on the theory that the third party functions as part of the legal team. Whether a conversational AI qualifies is an open question. No court has answered it. The version of the argument most likely to fail is the common one: an outside lawyer using a consumer AI account, with no confidentiality agreement with the provider, no retention controls, and terms of service that allow the input to be used for training. Assume the protection does not reach the tool until counsel has explicitly built the structure around it. A business-tier account under the firm’s contract. Privilege-preserving retention settings. Documented treatment of the AI as part of the legal engagement. Anything short of that is a waiver argument the other side will make.
“The chat was on my personal account, so the company does not have to produce it.” Possession, custody, or control is the test. Courts repeatedly hold that information on personal accounts used for work purposes falls within the employer’s control for discovery purposes. The personal account is not a sanctuary. It is a second production source.
What to do
Four things. Do them this quarter.
- Write a specific policy on sensitive use cases. Acceptable-use policies that say “do not put confidential information in AI” are too vague to steer behavior. Name the categories managers actually run into (performance management, investigations, terminations, protected-class facts, regulatory responses, M&A workforce modeling) and say plainly that those categories run through counsel, not a chat window. A policy managers can remember is a policy managers will follow.
- Build a counsel-routed workflow for the hard cases. If a manager needs an AI thought partner for a termination, an investigation, or a disparate-impact question, the path should be: counsel engages, counsel runs the tool under a privilege-preserving structure, counsel returns legal advice. The AI becomes an extension of the legal function, not a shadow one. The same Kovel-style logic that has governed forensic accountants for more than sixty years applies here with almost no translation.
- Turn on enterprise logging, retention controls, and SSO. Tell people they exist. Business-tier accounts with audit logs and admin visibility are not just a security upgrade. They are a deterrence mechanism. Managers behave differently when they know the prompt is logged against their employee ID. That behavior change is the control you actually need. Pair it with honest training: the chat is not ephemeral, the transcript is producible, your name is on it.
- Train managers on what “discoverable” actually means. Most managers have never been deposed. They have never had a plaintiff’s lawyer read their words back to them under oath. They do not intuit that a Sunday-night chat survives. A 90-minute training with real (anonymized) examples of AI transcripts used as exhibits moves the needle more than any policy document. The goal is not to scare. The goal is to make the invisible visible.2
Get counsel before the next hard conversation
This is not an IT problem and it is not an HR problem. It is a litigation-exposure problem that happens to run through software your employees are already using. The legal questions (privilege posture, preservation obligations, ESI protocols, litigation-hold scope) do not solve themselves in a policy template.
Before your next performance-management cycle, your next internal investigation, your next RIF, or your next regulatory response, have counsel set the ground rules for AI use inside those workstreams. Settle it once, before the subpoena arrives.
A closing thought
Every prompt is a document.
Every document is discoverable.
Every AI prompt is a discoverable document.
Every brainstorm is an exhibit.
Your manager did not know. Now you do.
Postscript: what the transcript looks like from the other side
Imagine the opening ten minutes of a deposition.
Plaintiff’s counsel slides a printed chat transcript across the table. The witness (your SVP, your HR lead, your GC) recognizes it immediately. Their own name is in the account metadata at the top of the page. The timestamp is two months before the termination, investigation, filing, whatever it is.
The questions write themselves.
“Did you type these words?”
“Did you paste this Slack export into this tool?”
“When you wrote ‘manage him out,’ what did you mean?”
“Were you aware, at the time you typed this, that this conversation was being stored?”
“Did you consult counsel before typing this?”
“Did you understand that this system is not your attorney?”
There is no good answer to any of them. There is only the witness, the transcript, and a jury that understands timestamps perfectly well.
The chat did not feel like a document when they typed it.
It is one now.
- The chat window has the visual grammar of iMessage. Cursor blinks. Reply arrives in seconds. Feels private. Isn’t. Most managers have never been told. ↩
- Counsel-routed AI is the structure that travels in 2026. Business-tier accounts under the firm’s contract, retention controls calibrated to litigation hold, documented treatment of the tool as part of the legal engagement, training that moves managers off the chat window for the sensitive categories. ↩