It is 11:13 a.m. on Tuesday, March 31, 2026. Your senior engineer drops a link in the `#engineering` Slack with the comment, “this is wild.” It is a GitHub mirror of `@anthropic-ai/claude-code`. Roughly 512,000 lines of unobfuscated TypeScript across 1,906 files, including 44 unreleased feature flags. Anthropic accidentally bundled an internal source map into a public npm release. Within 24 hours the repository is forked 41,500 times. It is the fastest GitHub project in history to hit 50,000 stars.
By 11:14 a.m. two of your engineers have clicked the link. By 12:30 p.m. one of them has it cloned to his laptop “to look at how they handle tool calls.” By the end of the day a third has read enough of it to mention an architectural detail in a design review. [M: access establishes. Use is downstream.]
Nobody on your team did anything malicious. They did the thing engineers do. They read code that was sitting in front of them. And in doing so they handed you a legal problem that is harder to unwind than the leak itself.
This is the story of every engineering team in 2026. It is happening right now at yours.
Why access is the whole problem
Anthropic’s framing is that this was a “release packaging issue caused by human error, not a security breach.” That is a defensible statement about Anthropic. It is the wrong statement for you to repeat.
Your exposure is not what Anthropic exposed. Your exposure is what your engineers read. Trade secret contamination, derivative work risk, reverse-engineering claims, and litigation discovery all turn on access. Not on intent. Not on use. Not on whether you copied a single line. The moment your engineer scrolls through the file, the access is established. It does not unestablish.
The four specific failures that flow from access
The clean room is contaminated. If your team is building anything that competes with, wraps, or interoperates with Claude Code (and most engineering teams in 2026 are), your “clean room” defense to a derivative work claim depends on the engineers writing your code never having seen the original. Once they have read the leak, your clean room is no longer clean. This is not theoretical. It is the same fault line that runs through forty years of semiconductor and operating-system litigation. The remedy in those cases was the same remedy you need now: a documented wall between the people who have seen the source and the people who write the product. [M: the documented wall is the only clean room.]
The TOS reverse-engineering trap. Anthropic’s existing Terms of Service prohibit reverse engineering of its products. A leaked source map is not a license to read. A motivated plaintiff, including Anthropic itself, can argue that reading the unobfuscated internals constitutes the kind of derivation the TOS already prohibits, regardless of how the access happened. Whether that argument wins is unsettled. The cost of having to defend against it is not. [M: defending the argument costs as much as losing it.]
The trade secret defense weakens both ways. Under the DTSA and state analogs, your trade secret protection requires “reasonable measures” to maintain secrecy. The same standard cuts the other way for Anthropic: the more engineers read the leak, the less Anthropic can enforce the code as a trade secret. That sounds like good news. It is not. It means a court adjudicating a future dispute will look at exactly which of your engineers accessed the leak, when, on what device, and what they did afterward. The factual record you build now is the record you will live with later.
The discovery interrogatory is already drafted. In the OpenAI MDL, Judge Sidney Stein in January ordered production of 20 million ChatGPT logs. In Bartz v. Anthropic, the court ruled that storing pirated copies is not fair use, and Anthropic settled for $1.5 billion. The next round of AI litigation will include a Rule 33 interrogatory that reads, in substance, “identify each employee who downloaded, viewed, or accessed any portion of the leaked Claude Code repository, the device used, and the date of access.” The cleanest answer is the answer you can document. The cleanest answer requires that you have already told your engineers, in writing, not to look at it.1
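You can start assembling that record before anyone serves the interrogatory. Here is a minimal sketch in TypeScript, assuming Node 18+ and a `GITHUB_TOKEN` environment variable with read access; the org name and search term are placeholders, and the script answers only one narrow question (did anyone mirror the leak into your GitHub organization under a recognizable name?), not what happened on individual laptops.

```typescript
// Minimal sketch, assuming a GitHub organization and a GITHUB_TOKEN
// environment variable with read access. ORG and the search term are
// hypothetical placeholders; adjust to your own org and naming.
const ORG = "your-org"; // hypothetical
const QUERY = encodeURIComponent(`claude-code org:${ORG}`);

async function findMirrors(): Promise<void> {
  const res = await fetch(
    `https://api.github.com/search/repositories?q=${QUERY}`,
    {
      headers: {
        Accept: "application/vnd.github+json",
        Authorization: `Bearer ${process.env.GITHUB_TOKEN}`,
      },
    },
  );
  if (!res.ok) throw new Error(`GitHub API: ${res.status}`);
  const data = (await res.json()) as {
    total_count: number;
    items: { full_name: string; html_url: string; created_at: string }[];
  };
  // Each hit is a fact you want to know before a plaintiff does.
  for (const repo of data.items) {
    console.log(`${repo.created_at}  ${repo.full_name}  ${repo.html_url}`);
  }
  console.log(`${data.total_count} candidate repositories found.`);
}

findMirrors().catch(console.error);
```

This is one slice of the record, not the record. Endpoint logs, network logs, and the written directive itself cover the rest.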
The cultural problem you are actually solving
The leak is interesting. That is the entire issue. It is not boring source code. It is the architecture of one of the most-used AI products in the world, with unreleased feature flags labeled `KAIROS`, `BUDDY`, and `Agent Swarms`. Of course your engineers want to read it. Reading other people’s code is most of how engineers learn. [S: KAIROS, BUDDY, Agent Swarms. Of course they want to read it.]
You are not arguing with their curiosity. You are telling them that this particular code is radioactive for reasons that have nothing to do with whether it is good. The directive has to be in writing. It has to be specific. And it has to be sent before, not after, the next person on your team forwards the link.
What to do
Four things. This week.
- Send a written directive to all engineers. Name the leak. Name the date. State plainly: do not download, view, fork, mirror, or describe the contents of the leaked Claude Code repository or any of its derivatives. Save the email. This single document is your trade secret defense and your discovery answer at the same time.
- Put a wall around any AI tooling work. If your team builds anything adjacent to coding agents, identify the engineers on that work and confirm in writing that none of them has accessed the leak. If any have, rotate them off. Document the rotation.
- Block the known mirrors at the network layer. The forks live on dozens of mirrors and a decentralized git host. You will not catch all of them. You will catch enough that reaching one requires a deliberate act, and a deliberate act is a record point you can defend. (A minimal sketch of the block-and-log shape follows this list.)
- Update your engineering onboarding. New hires need to receive the same directive. So do contractors. The piece of paper is the asset.2
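On the third item, a minimal sketch of the block-and-log shape, assuming a Node.js environment; the hostnames are placeholders, and in practice the block belongs in your existing forward proxy or DNS filter rather than in application code. The point is the log line: it converts a blocked attempt into a timestamped record you can produce later.

```typescript
import http from "node:http";
import { appendFileSync } from "node:fs";

// Hypothetical blocklist; maintain the real one from whatever
// internal tracking or threat-intel feed you already use.
const BLOCKED_HOSTS = new Set(["mirror.example", "leak-host.example"]);

// A toy proxy front end: refuse and log requests to blocked hosts.
// Everything else would normally be tunneled onward; the pass-through
// is stubbed out because it is not the point of the sketch.
const server = http.createServer((req, res) => {
  const host = (req.headers.host ?? "").split(":")[0].toLowerCase();
  if (BLOCKED_HOSTS.has(host)) {
    // The log line is the record point: source, destination, timestamp.
    appendFileSync(
      "egress-blocks.log",
      `${new Date().toISOString()} ${req.socket.remoteAddress} -> ${host}${req.url}\n`,
    );
    res.writeHead(403, { "content-type": "text/plain" });
    res.end("Blocked by engineering directive of 2026-03-31.\n");
    return;
  }
  res.writeHead(502, { "content-type": "text/plain" });
  res.end("Pass-through omitted in this sketch.\n");
});

server.listen(3128, () => console.log("egress filter on :3128"));
```

Pair the log with the directive from the first item: the directive establishes that reaching a mirror required a deliberate act, and the log records who made the attempt anyway.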
If your AI vendor experiences a data incident this quarter, the disclosure obligations and customer notifications fall on you, not on the vendor.
Get counsel before your next all-hands
This is not a ticket your engineering manager should triage on a Friday. Trade secret protection, derivative work exposure, reverse-engineering claims, and discovery posture all turn on the same factual record. Who accessed what, when, and what you did about it. Have a lawyer read your directive, your acceptable-use policy, and your engineering onboarding language before any of your engineers are asked these questions in deposition, in arbitration, or on a customer questionnaire.
Talk to a Talairis attorney →
A closing thought
Anthropic shipped a packaging mistake.
Your engineer found it interesting.
He clicked the link.
And the legal record you wish existed (the directive, the timestamps, the wall around the relevant team) is the one you have to build before he clicks the next one.
1. The infrastructure of AI litigation is being built in real time. Bartz v. Anthropic established that storing pirated copies is not fair use. The OpenAI MDL is producing 20 million chat logs. Access by your engineers is the first fact a competent plaintiff will plead, and access logs are the first thing a competent defense will need to produce.
2. Engineers find these leaks the same way they find every other interesting thing. Someone in a Slack channel posts a link with the word “wild” in the message. Your written directive is the only thing standing between that Slack message and a deposition exhibit.