It is 10:47 p.m. Your most diligent product manager is working late. She needs to condense a 40-page customer research transcript into a three-slide summary for a leadership meeting at 9:00 the next morning. She opens claude.ai in a browser tab, signs into her personal account, pastes in the full transcript, and asks for the summary. Thirty seconds later she has it. She pastes the output into her deck, closes the tab, and goes to bed.
The transcript included your customer’s internal procurement process, candid feedback about your pricing, a commitment to expand into two new business units, and the name of the competitor they most recently evaluated against you. It is now on a server she does not own, under terms of service she did not read, tied to a personal email address.
You are paying her $180,000 a year. You spent $3,400 on her laptop. You spend $260 a month on her SaaS licenses. You will spend another $40,000 over her tenure replacing her if she leaves. You did not spend $30 a month for a business-tier AI account for her. So she used her personal one. [S: $30 a month. Roughly the cost of a Spotify family account.]
This is the story of every company in 2026.1 It is happening right now at yours.
What just happened
Three things worth naming.
Your data left your network. Her personal Claude account is not subject to your DPA, your MSA, your security controls, your acceptable-use policy, your retention schedule, or your SSO. The transcript is in a system that sits outside every boundary your security team is paid to defend.
Your customer’s data left your network. Your customer sent you that transcript under a confidentiality clause in your MSA. You just transferred it to a third-party processor with whom you have no processor agreement. You are, as a matter of contract, in breach. They do not know yet.
Your reasonable-measures defense just got weaker. Trade secret protection under the DTSA and state analogs requires the owner to take “reasonable measures” to maintain secrecy. Allowing employees to paste confidential material into personal AI accounts, through an unenforced policy, is increasingly hard to call reasonable.
And remember, this is a good employee. This is someone trying to do a good job at 10:47 p.m. on a Wednesday. [S: it’s always your best people.]
What your employees are actually doing
Every employee at your company with any exposure to AI is using it for work. The ones who claim not to be, are.
They are drafting emails. Summarizing meetings. Rewriting performance reviews. Cleaning up Slack messages. Analyzing customer conversations. Drafting legal responses. Reviewing contracts. Generating code. Writing SQL. Debugging pipelines. Condensing research. Translating feedback. Preparing board slides. Outlining strategy documents.
If you have not given them a business-tier account, they are doing all of it on their personal accounts. The alternative is not “they will stop using AI.” That ship sailed in 2023. The alternative is “they will use AI on infrastructure you do not control, under terms that do not protect you.”
The personal-account problem
Four specific gaps between a personal account and a business-tier one.
Training and retention. Free and consumer AI accounts are not governed by the same terms as business-tier accounts. Some platforms train on free-tier inputs. Some retain inputs for abuse detection for 30 days, some for longer. Business-tier subscriptions come with explicit contractual no-training commitments, zero-retention options, and customer-owned data. Personal accounts do not.
No DPA, no processor agreement, no audit. An enterprise subscription is a B2B contract with the vendor: security commitments, indemnification, audit rights, breach notice obligations. A personal account is a click-through consumer EULA between the vendor and your employee. If your customer’s data is on the wrong side of that line, you own the breach.
No SSO, no admin, no visibility. You cannot revoke a departing employee’s access to their personal Claude or ChatGPT account. You cannot see what they pasted in. You cannot enforce an acceptable-use policy. You cannot meet a DSAR obligation for data you cannot inventory.
The account follows the employee out the door. When your PM leaves for a competitor in 18 months, her personal Claude account (with every summary she ever generated from your customer calls and every internal doc she ever pasted in) leaves with her. She does not have to do anything actively wrong for that to be a material loss.
The numbers
As of 2026, ballpark pricing for the platforms your employees are actually using:
- Claude Team: around $30 per user per month
- Claude Enterprise: negotiated, typically $60+ per user per month
- ChatGPT Team: around $30 per user per month
- ChatGPT Enterprise: negotiated, typically $60+ per user per month
Take the conservative end. Thirty dollars per user per month. Three hundred and sixty dollars per employee per year.
Now compare that to what you already spend to equip the same employee.
- Laptop: $2,500 to $4,000 per employee, refreshed every three years.
- SaaS licenses (Slack, Notion, GitHub, Linear, 1Password, Figma, Zoom, DocuSign, and on and on): $2,000 to $4,000 per employee per year.
- Office space, if you still have one: $5,000 to $15,000 per employee per year.
- Recruiting cost per hire: $20,000 to $50,000 fully loaded.
- Benefits and payroll tax: $20,000 to $40,000 per employee per year.
You are refusing to spend $360 on the tool they use the most, every day, for everything. That is not cost discipline. It is negligence dressed up as cost discipline. [S: the laptop refresh got more thought.]
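The comparison above can be put in one line of arithmetic. A minimal sketch, using the figures quoted in this article (the midpoints of the ranges are assumptions for illustration, not a pricing quote):

```python
# Back-of-the-envelope per-employee annual spend, using the figures
# quoted above. Midpoints of ranges are illustrative assumptions.
spend = {
    "salary": 180_000,
    "laptop_amortized": 3_400 / 3,       # refreshed every three years
    "saas_licenses": 260 * 12,
    "benefits_and_payroll_tax": 30_000,  # midpoint of $20k to $40k
    "ai_business_tier": 30 * 12,         # the line item being skipped
}
total = sum(spend.values())
share = spend["ai_business_tier"] / total
print(f"AI subscription: ${spend['ai_business_tier']:.0f}/yr, "
      f"{share:.2%} of ${total:,.0f} in total per-employee spend")
```

On these numbers the contested line item is well under half a percent of what you already spend on the same employee.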
If your company hasn’t audited its AI tool usage in the last six months, more data has already left the building than you think.
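A first-pass audit does not require new tooling. The sketch below scans exported proxy or DNS logs for hits on consumer AI domains; the domain list and the one-pair-per-line log format are assumptions, so adapt both to your environment:

```python
# Minimal audit sketch: flag log lines that touch a consumer AI domain.
# Domain list and "user domain" log format are assumptions.
CONSUMER_AI_DOMAINS = {
    "claude.ai", "chatgpt.com", "chat.openai.com", "gemini.google.com",
}

def flag_consumer_ai_hits(log_lines):
    """Return (user, domain) pairs for lines hitting a consumer AI domain."""
    hits = []
    for line in log_lines:
        parts = line.split()
        if len(parts) < 2:
            continue
        user, domain = parts[0], parts[1].lower()
        # Match the domain itself or any subdomain of it.
        if any(domain == d or domain.endswith("." + d)
               for d in CONSUMER_AI_DOMAINS):
            hits.append((user, domain))
    return hits

sample = [
    "pm@example.com claude.ai",
    "eng@example.com github.com",
    "counsel@example.com chatgpt.com",
]
print(flag_consumer_ai_hits(sample))
```

Even a crude pass like this over ninety days of egress logs tells you who is using what, which is the prerequisite for every fix in the next section.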
What actually breaks if you do not fix this
The material legal exposures, not the theoretical ones.
Customer confidentiality breaches. Every MSA has a confidentiality clause. Every one of those clauses is breached the moment your employees move customer data into a system with which you have no agreement. The damages math on that in a renewal negotiation, an audit, or a lawsuit is bad.
Privilege waiver. Anyone in legal, HR, or the executive team putting privileged communications into a personal AI account is arguably waiving privilege on the subject matter. The consumer platform is a third party. Disclosure to a third party outside the privilege perimeter is waiver. [M: privilege travels poorly.]
Trade secret erosion. “Reasonable measures” is a factual standard. Courts and plaintiffs will look at whether you provided a business-tier tool, whether you had an enforceable policy, and whether you trained employees on it. “We had a policy but did not fund the alternative” is a hard story to tell in front of a judge.
Regulatory exposure under privacy law. GDPR, CCPA, and their cousins require you to name your processors, sign DPAs, and manage cross-border transfers. Personal AI accounts are not named processors. You are out of compliance the moment customer PII touches one.
What to do
Three things. Do them this week.
- Buy business-tier subscriptions for every employee who touches company data. Claude Team, ChatGPT Team, or both. Budget $30 to $60 per user per month. Stop having the conversation about which platform is better. Provide both and let employees pick the tool for the job.
- Turn off personal AI use for work with a policy employees can actually follow. Block the consumer domains on the corporate network, route all access through SSO, and state in the acceptable-use policy that any work-related AI use must go through the business-tier account. Enforce it. An unenforced policy is evidence against you in a trade secret case, not for you.
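The steps above can be sketched concretely. A minimal example of the domain-block step, generating a Squid-style proxy deny list (the ACL name and the domain list are assumptions; adapt to whatever gateway or DNS filter you actually run):

```python
# Sketch: emit a Squid-style ACL that denies the consumer AI domains.
# ACL name and domain list are assumptions -- adapt to your gateway.
BLOCKED = ["claude.ai", "chatgpt.com", "chat.openai.com",
           "gemini.google.com"]

def squid_acl(domains, acl_name="consumer_ai"):
    # Leading dot in dstdomain matches the domain and its subdomains.
    lines = [f"acl {acl_name} dstdomain .{d}" for d in sorted(domains)]
    lines.append(f"http_access deny {acl_name}")
    return "\n".join(lines)

print(squid_acl(BLOCKED))
```

The point is not this particular proxy: it is that the block must be enforced in infrastructure, not just stated in the handbook, so the policy is something you can point to when "reasonable measures" comes up.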
- Get counsel to rewrite the AI section of your employee handbook and your customer agreements. Both need to reflect how AI is actually being used in 2026. Your 2019 confidentiality provisions do not. Your 2019 acceptable-use policy does not. Your 2019 customer MSA does not.2
Get counsel before the next renewal cycle
This is not an IT problem. It is a legal and contractual exposure problem that happens to involve software licensing. The same way you would not let procurement sign an MSA with a data processor without a lawyer reviewing it, you should not let procurement size an AI rollout without legal input.
Your customer contracts, your privacy policy, your vendor DPAs, your employee agreements, your trade secret protections, and your cyber insurance policies all intersect here. Fix them together, once, correctly, before something material leaks and you are reading about it in a complaint.
A closing thought
AI data is leaking out of your company right now. You know it is.
Thirty dollars per user per month is not the expense. Not paying it is.
- Every company has a 10:47 p.m. story. The PM who used personal Claude. The senior counsel who pasted the contract into ChatGPT to summarize. The salesperson who fed last quarter’s pipeline into Gemini on her iPad. The companies without a story are the ones that haven’t asked. ↩
- The fix is contracts and policy, not just a SaaS purchase. The customer MSA, the privacy policy, the vendor DPA, the employee handbook, the acceptable-use policy, and the cyber policy all intersect at this leak. ↩