Your principal engineer (the one who rewrote your matching algorithm in late 2023, architected the billing service in 2024, and shipped the inference pipeline that now runs a third of your product) resigns on a Thursday. She gives two weeks. On her last day, she returns her laptop, surrenders her badge, signs the exit letter acknowledging her post-termination confidentiality obligations, and walks out with nothing. No thumb drive. No repo clone. No printed pages. Not a single file.

She does not need to take anything.

For the last three years, every line she wrote (every comment, every proprietary API, every internal helper, every algorithm, every failed experiment, every debug log she pasted into the chat panel asking for help) went through Copilot, Cursor, and Claude Code. She used the tools as she was supposed to. The company paid for them. IT approved them. Every pull request was reviewed. Every commit was clean.

She starts at your fiercest competitor on Monday. The same tools are installed on her new laptop by the end of her first day. She opens a new file, starts typing a function signature, and the tool autocompletes with an approach that looks extremely familiar.

You do not know what, exactly, has moved. Neither does she. Neither does the vendor. That is the problem. [S: this is not a fun problem.]

This is the story of every company in 2026 that has an engineering team and an AI coding policy that says, in effect, “yes, go ahead.” It is happening right now at yours.

What the tool is actually doing

An AI coding assistant is not autocomplete. It is a pipeline with four legally distinct flows and a commercial product built on top of them.

Input. The tool reads your repository. It reads the open file. It reads the recent files. It reads the diffs you just made. It reads the comments you typed asking for help. On most configurations it also reads, at varying depths, the rest of the codebase it has access to: your private packages, your internal APIs, your architectural patterns, your naming conventions, your business logic. That material leaves the editor and goes somewhere.

Transit and retention. The material is transmitted to the vendor’s infrastructure and, depending on the plan and the settings, retained for some period. Consumer and individual plans retain inputs for abuse detection and, in some cases, for product improvement. Business and enterprise plans typically offer zero retention and no training, if you signed the right contract, turned on the right setting, and enforced it at the account level. Most companies have not confirmed all three.

Training and improvement. On some plans, inputs are used to improve the model, retrieval indices, or ranking. On enterprise plans, inputs are typically excluded from training. The line between “improving ranking on your own workspace” and “improving the shared product” is frequently blurred in the documentation and almost never audited by the customer.

Output. The tool suggests code. The suggestion is generated by a model trained on a very large library of public and licensed code. Sometimes, demonstrably, the suggestion reproduces code from that library verbatim. GPL. AGPL. MIT. Apache. Unlicensed. Sometimes the suggestion reproduces patterns that resemble, in substance, material the model has seen in other customers’ private contexts through earlier training rounds or retrieval.

Every one of those four flows has legal consequences. Most engineering leaders have evaluated the first and the last. Almost none have read the contract for the middle two. [M: to echo Sam, this sandwich is just full of fun.]
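
Confirming the middle two is checkable, not just a matter of trusting the brochure. Here is a minimal sketch of the three-part verification the retention paragraph describes: the clause is in the signed contract, the setting is on at the tenant level, and every user is inside the tenant. Every field name below is invented for illustration; substitute whatever your vendor’s contract and admin console actually expose.

    # Hedged sketch: verify the three retention conditions named above.
    # All field names are hypothetical stand-ins for your vendor's actual
    # contract terms, admin-console settings, and identity-provider export.

    def verify_retention_posture(contract: dict, tenant: dict, users: list) -> list:
        """Return a list of failures; an empty list means all three conditions hold."""
        failures = []

        # 1. The signed contract actually contains the clauses.
        if not contract.get("zero_retention_clause"):
            failures.append("contract: no zero-retention clause on file")
        if not contract.get("training_exclusion_clause"):
            failures.append("contract: no training-exclusion clause on file")

        # 2. The settings are actually on, at the tenant level.
        if tenant.get("retention_days") != 0:
            failures.append(f"tenant: retention is {tenant.get('retention_days')} days, not zero")
        if tenant.get("training_opt_out") is not True:
            failures.append("tenant: training opt-out flag is not set")

        # 3. Every engineer is inside the enterprise tenant, behind SSO.
        for user in users:
            if user.get("plan") != "enterprise" or not user.get("sso"):
                failures.append(f"user {user.get('email')}: outside the enterprise/SSO tenant")

        return failures

    if __name__ == "__main__":
        # Toy inputs; in practice these come from the contract file, the
        # vendor's admin console, and your identity provider.
        contract = {"zero_retention_clause": True, "training_exclusion_clause": False}
        tenant = {"retention_days": 0, "training_opt_out": True}
        users = [{"email": "a@example.com", "plan": "enterprise", "sso": True},
                 {"email": "b@example.com", "plan": "individual", "sso": False}]
        for failure in verify_retention_posture(contract, tenant, users):
            print("FAIL:", failure)

The script proves nothing by itself. The artifact that matters is that someone ran a check like this, on real exports, and kept the output.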

The four problems nobody named at the rollout meeting

Your trade secrets took a trip. Under the Defend Trade Secrets Act, a trade secret must be subject to “reasonable measures” to maintain its secrecy. Your ranking algorithm, your pricing model, your inference pipeline, your proprietary database schema (each of which might qualify as a trade secret) now sits, in some form, on the vendor’s servers. If your account is not a business or enterprise tier with zero retention and no training, the “reasonable measures” story is harder to tell. If your enterprise settings exist but were not enforced at the developer level (personal accounts, consumer-tier plugins, shadow IT), the story is harder still. Plaintiffs will ask whether you knew, whether you monitored, whether you enforced. Your logs are the answer.

The vendor indemnity does not cover you. Read it. The vendor indemnifies you for third-party infringement claims arising from their model: specifically, outputs substantially similar to copyrighted material in the training set, subject to your compliance with the vendor’s configuration requirements. The indemnity does not cover your own engineers’ use of those outputs in ways that expose you to other liabilities. A suggestion that lifts GPL-licensed code into your proprietary product is a license-compliance problem, not a copyright-infringement problem, and is typically excluded from the indemnity or capped at a number that is not useful. Read the carve-outs. The indemnity is narrower than the brochure suggests, and the carve-outs are where the real exposure lives.

License laundering is now a real vector. When the tool suggests a block of code that is, in substance, code that was scraped from a GPL-licensed repository, and your engineer accepts the suggestion and commits it, your codebase now contains material that triggers GPL’s copyleft obligations. The obligations attach regardless of whether anyone at your company knew. The vendor argues their output is not a derivative work. The plaintiff argues otherwise. The courts have begun to take the plaintiff’s side on some variants and the defendant’s side on others, and the doctrine is genuinely unsettled. The point is not that you will lose. The point is that you will be the one forced to litigate the question, at your expense, on your facts, in public.1

Your NDA does not cover it. The NDA every employee signs prohibits disclosure of confidential information to third parties. Your engineers’ daily use of a tool that transmits your confidential information to a third party, under a consumer-tier or misconfigured enterprise plan, is a structural disclosure. The NDA was not drafted for it. The acceptable-use policy was not drafted for it either. When the engineer goes to a competitor, the question is not whether she took anything. It is whether the vendor’s product, to which she still has access on her personal account or under her new employer’s account, still carries any imprint of what she did on your systems. Nobody can fully answer that question, including the vendor.

What changed

AI coding tools went from novelty to default faster than any category in the last fifteen years. In 2023, the leading engineering teams piloted them. In 2024, they rolled them out. In 2025, they moved production-critical work into them. In 2026, “not using AI in your editor” is an indicator of which engineers you are about to lose, not a discipline you are enforcing.

The contracts, the acceptable-use policies, the NDAs, the invention assignment agreements, the open-source compliance processes, the trade secret protocols. All of them were drafted before any of this. A few of them were updated lightly in 2024 with a paragraph on “generative AI tools.” Almost none of them are current with what is actually happening in the editor on an engineer’s laptop at 9:43 p.m. on a Wednesday.

There is also a capability that did not exist before: the tool is now a persistent memory of the engineer’s working life. Her entire coding history (every problem she solved, every pattern she reused, every internal API she learned) is indexed in a system the vendor controls, partially reflected, on some plan tiers, in a model whose weights are shared with future users, and partially attached to an account that travels with her. She does not have to exfiltrate anything to take it with her. It left your network in real time, for three years, by the terms you already approved.

What this means for you

Three things worth naming plainly.

The old “the engineer did not take anything” defense is obsolete. Your trade secret misappropriation case used to rest on showing what she physically or digitally carried out. That evidence is now, in most cases, irrelevant. The material moved in real time, through approved channels, over months. The departing engineer’s laptop is clean, her email archive is empty, and the substance of your confidential system is already at rest on a vendor’s servers. And in her chat history. And in her new employer’s instance of the same vendor. [S: clean laptop, dirty cloud.]

The plaintiff’s bar will use your own logs against you. The vendor’s admin console tells you what the engineer pasted, what she queried, what she accepted. If she pasted your pricing logic into a prompt three months before her departure, you will know, and so will the other side in discovery. If she did not, you still have a problem, because the tool’s retrieval layer ran across your private repositories anyway, and nobody paged through the logs. The ambiguity is the issue.
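
If you can export those logs, you can read them before the other side does. A minimal sketch of a departure-window review, assuming a hypothetical CSV export with timestamp, user, and prompt columns; adjust the column names and thresholds to whatever your vendor’s console actually produces.

    # Hedged sketch: flag prompt activity worth a human's attention in the
    # window before a departure. The CSV columns (timestamp, user, prompt)
    # are an assumed export format, not any real vendor's schema.

    import csv
    from datetime import datetime, timedelta

    def flag_activity(log_path: str, user: str, departure: datetime,
                      window_days: int = 90, large_prompt_chars: int = 4000):
        """Yield (timestamp, reason) pairs for prompts that merit review."""
        cutoff = departure - timedelta(days=window_days)
        with open(log_path, newline="") as f:
            for row in csv.DictReader(f):
                if row["user"] != user:
                    continue
                ts = datetime.fromisoformat(row["timestamp"])
                if not (cutoff <= ts <= departure):
                    continue
                if len(row["prompt"]) >= large_prompt_chars:
                    yield ts, f"large paste ({len(row['prompt'])} chars)"
                if ts.hour < 6 or ts.hour >= 23:
                    yield ts, "off-hours prompt"

    # Usage (hypothetical file and user):
    # for ts, reason in flag_activity("prompt_export.csv", "jane@example.com",
    #                                 datetime(2026, 3, 1)):
    #     print(ts.isoformat(), reason)

None of this tells you what the retrieval layer read on its own. It only tells you what a person typed, and that is still the first thing the other side will ask for.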

The “reasonable measures” standard is being rewritten around your practices. Courts will increasingly ask what controls a reasonable technology company puts on AI coding tool usage in 2026. Enterprise tier, SSO enforcement, zero retention, training excluded. Those are becoming the baseline. No admin console, no use policy, no DLP, no offboarding review. Those are becoming evidence of failure to protect. The line is moving every quarter. Your defense is what you did, not what you meant to do.

If your codebase has been on a public repository at any point, it has likely been included in at least one AI training run. The licensing implications follow.

Talk to a Talairis attorney →

What to do

Four things, in order.

  1. Move every engineer onto a properly configured enterprise tier, with SSO, zero data retention, training excluded, and admin logging turned on. Then verify. Do not rely on the vendor’s brochure. Pull the contract. Confirm the flags are set on your tenant. Confirm every user is in the tenant. Confirm personal accounts are blocked at the network layer. Do it this quarter, not next.
  2. Rewrite the acceptable-use policy to match reality, and train engineers on it. What prompts are acceptable. What material should never be pasted into a prompt (customer PII, credentials, specific trade secrets); a minimal screening sketch follows this list. What to do when a suggestion looks suspiciously like it came from somewhere specific. What happens at offboarding. Have it signed, countersigned, acknowledged, refreshed annually. An unenforced policy is worse than no policy. An enforced policy is the reasonable-measures artifact you will point to in court.
  3. Build open-source license hygiene around AI-generated code. A pipeline that scans for likely-copied blocks. A pre-commit check that flags obvious GPL fingerprints; one is sketched after this list. A legal review for any flagged suggestion that looks too distinctive to be generic. A public record of your process. The goal is not perfection. The goal is to be able to show you exercised reasonable care when the plaintiff’s letter arrives.
  4. Update the departure protocol. On separation, revoke the engineer’s access to every AI tool the same day you revoke her VPN. Preserve her logs. Review the last 90 days of activity for unusual patterns. Remind her in writing that her post-termination confidentiality obligations extend to material that was processed through AI tools during her employment, and that she may not use any personal AI account that retains history from her time at the company. The reminder may not stop anything. It will make the NDA enforceable when you need it to be.2
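
On the second item, the “never paste” categories can be enforced mechanically rather than merely stated. A minimal sketch of a submit-time screen, with illustrative patterns only; the real list has to be tuned to your own secrets, and a regex screen catches the obvious cases, not the determined ones.

    # Hedged sketch of a submit-time prompt screen. The patterns are
    # illustrative, not exhaustive -- extend them with your own identifiers,
    # internal hostnames, and trade-secret markers.

    import re

    BLOCKLIST = [
        (re.compile(r"(?i)\b(api[_-]?key|secret|password|token)\s*[:=]\s*\S+"), "credential"),
        (re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----"), "private key"),
        (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "possible SSN"),
    ]

    def screen_prompt(text: str) -> list:
        """Return the policy categories a prompt violates; empty means allow."""
        return [label for pattern, label in BLOCKLIST if pattern.search(text)]

    if __name__ == "__main__":
        hits = screen_prompt("why is this slow? API_KEY=sk-live-123abc")
        if hits:
            print("blocked:", ", ".join(hits))  # prints: blocked: credential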

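On the third item, even a crude fingerprint check is a defensible artifact. A minimal sketch of a pre-commit hook, assuming a hash corpus built offline from known GPL code; the fingerprint file and its format are assumptions, and commercial scanners do the same job with far better recall.

    #!/usr/bin/env python3
    # Hedged sketch of a pre-commit check: hash staged code in normalized
    # ten-line windows and compare against fingerprints built offline from
    # known GPL sources (gpl_fingerprints.txt is a hypothetical artifact).

    import hashlib
    import subprocess
    import sys

    WINDOW = 10  # lines per fingerprint window

    def normalize(line: str) -> str:
        """Strip whitespace so trivial reformatting does not defeat the hash."""
        return "".join(line.split())

    def fingerprints(text: str):
        lines = [normalize(l) for l in text.splitlines() if normalize(l)]
        for i in range(len(lines) - WINDOW + 1):
            chunk = "\n".join(lines[i:i + WINDOW]).encode()
            yield hashlib.sha256(chunk).hexdigest(), i + 1

    def staged_files() -> list:
        out = subprocess.run(
            ["git", "diff", "--cached", "--name-only", "--diff-filter=AM"],
            capture_output=True, text=True, check=True)
        return [p for p in out.stdout.splitlines()
                if p.endswith((".py", ".js", ".go", ".java", ".c", ".cpp"))]

    def main() -> int:
        with open("gpl_fingerprints.txt") as f:  # hashes built offline
            known = set(f.read().split())
        hits = []
        for path in staged_files():
            with open(path, errors="replace") as src:
                for digest, lineno in fingerprints(src.read()):
                    if digest in known:
                        hits.append(f"{path}:{lineno}: matches a known GPL fingerprint")
        for hit in hits:
            print(hit, file=sys.stderr)
        return 1 if hits else 0  # nonzero exit blocks the commit

    if __name__ == "__main__":
        sys.exit(main())

The exit code is the enforcement. The commit history of the hook itself is the record of process you will point to later.
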
Get counsel before the next deployment, or the next departure

The contract you signed with your AI coding vendor is probably out of date. The acceptable-use policy you handed your engineers is definitely out of date. The departing-engineer checklist your HR team uses is almost certainly not addressing any of this.

Before the next senior engineer’s last day. Before the next tool rollout. Before the next vendor renewal. Have counsel review the actual data flows, the actual contract terms, the actual policy documents, the actual enforcement. The questions are unsettled doctrinally, but the answers, for your specific stack, are knowable. Knowable now, before the departure or the complaint forces the question into litigation on facts you have not prepared.

The companies that do this in 2026 will keep their trade secrets. The ones that do not will find out what they lost from the job posting their competitor runs six months later.

A closing thought

Who trained on your code?

Some vendor, and some model, and some retrieval index, and some engineer whose next employer gives her access to the same tools on day one. You do not know exactly which. Neither do they. Neither does anyone.

That is not a reason to stop using the tools. The tools are a permanent feature of how software gets built from here. It is a reason to treat them as the serious legal surface they are. Contract, policy, training, controls. The kind of thing a serious legal surface requires.

Your engineers are going to use AI to write your code. The only question is whether the infrastructure around that use is drafted for 2026 or for the world that ended three years ago.

Footnotes
  1. Three doctrines converge at the editor. DTSA reasonable measures, GPL copyleft, and the standard-form employee NDA. Each was drafted before any of this. The vendor contract sits on top of all three. The engineer’s prompt sits on top of the vendor contract. That stack is what the next generation of trade secret cases will be litigated on. — Matt
  2. The cleanest example of the problem: an engineer pastes proprietary algorithm details into a prompt asking “how can I make this faster.” She gets a useful answer. The transcript is in vendor logs. The prompt content is, depending on plan tier, in the vendor’s training queue. The engineer leaves six months later. The new employer uses the same vendor. What happens when the engineer asks that vendor the same question? — Sam