Your CMO has been the face of the company for six years. She has filmed 82 webinars, sat for 40 podcast interviews, delivered 12 keynotes, recorded a thousand demo clips, and hosted a weekly LinkedIn video series with 11,000 followers. Every asset sits in Dropbox, labeled, tagged, indexed. Your marketing team built that library on purpose.

She resigns and starts a company in an adjacent space.

Three weeks later, a colleague forwards her a LinkedIn post she did not write, from the company page, featuring a 78-second video of her endorsing the new Q3 product roadmap. It is recognizably her face, her voice, her cadence. She never stood in front of that camera. She never said those words. The post has 6,000 likes.

She calls her lawyer.

Your top CS manager walks out the same Friday. Five years on the team, the highest NPS in the book. Eight calls a week, 45 minutes each, 50 working weeks a year. Every one of them recorded on Gong. Three hundred hours a year; fifteen hundred hours of voice over five years. Every objection. Every close. Every empathy beat. Every save. Labeled, searchable, tagged by sentiment.

She is not worried about her likeness. She never appeared in a keynote. Her face is not on the website. She spent five years on the phone.

Six months later, a friend still at the company tells her the team is hitting quota with half the headcount, and a virtual avatar that does not look or sound like her, but really is her.

One form governs both

Any lawyer talking to the CMO and the CS manager asks the same question first: what did you sign on your first day?

The answer is the same. A half-page document labeled “Photo and Media Release” or “Publicity Consent.” It lived between the I-9 and the direct-deposit form. Neither of them remembers signing it.[1] The operative language, with small variations:

Employee grants Company the perpetual, irrevocable, royalty-free, worldwide right and license to use, reproduce, modify, edit, create derivative works from, distribute, publicly display, and publicly perform Employee’s name, likeness, image, photograph, voice, and biographical information, in all media now known or hereafter developed, for any purpose related to Company’s business, without further consent, compensation, approval, or attribution.

Read that clause with generative AI in one hand and a 2015 drafting manual in the other. It does a lot more work in 2026 than anyone thought it did in 2015. [M: every verb in that clause is doing new work.]

What a company might want to do

Start with the obvious. Reproduce you from owned footage. Make derivative works. Edit, modify, and re-sequence. Keep using the asset after you are gone. Every one of those is plausibly granted by the release. The company’s argument is cleverer than the counter.

Then the ambitious plays. Train a model on 1,500 hours of your Gong calls. Fine-tune against your best closes. Patch new utterances into a departed employee’s existing training video. Generate a synthetic spokesperson calibrated to a version of you that never existed.

Then the commercial one. Build an avatar of your role, trained on you and the rest of your team, and let it carry the workload at license cost, not fully loaded cost. Keep scaling it until the function gets cheaper than the humans who used to do it.

The release was drafted for the first set. The second set is in the gray. The third set is the commercial thesis of a lot of companies in 2026.

What might constrain them

Two things, mostly.

Right of publicity at the identifiability threshold. Right of publicity protects you when a reasonable observer would identify the output as you. A sound-alike vocalist was Bette Midler enough (Midler v. Ford). A robot in a blonde wig was Vanna White enough (White v. Samsung). Animatronic bar-stool characters evocative of the Cheers cast were enough (Wendt v. Host International).[2] The test has never required a perfect copy. It has required that a reasonable observer can tell it is you. Above that line, no release saves the company. Below it, the employee has no claim.

Statute. A small patchwork of laws aimed specifically at synthetic generation. Tennessee’s ELVIS Act (2024) created the first state-level private right of action for AI voice cloning. California’s AB 2602 (2024) voided blanket prospective consents for performers’ digital replicas. The federal NO FAKES Act is pending. The agreement that ended the 2023 SAG-AFTRA strike gave actors real protections. The CMO and the CS manager got the onboarding release.

And a rising third: truth-in-advertising. Section 5 of the FTC Act reaches deceptive practices. The FTC’s 2023 Endorsement Guides reach synthetic endorsements. The FCC’s February 2024 declaratory ruling treats voice-cloned robocalls as TCPA violations. If a synthetic agent interacts with a customer and the customer reasonably believes they are speaking with a human, the interaction is deceptive, regardless of whether the synthetic is identifiable as any real person.

That is the constraint stack. That is the entire map.

Why the constraint stack misses the point

All three protections (publicity, statute, consumer protection) have the same blind spot. They protect the likeness. They protect the impersonation. They protect the customer’s belief about whether they are talking to a human.

They do not protect the patterns.

Your grasp of the subject matter. Your deep product knowledge. Your objection handling. Your instinct for when to steer a conversation in a different direction and when to upsell. That is what made you good at the job. It is trainable. It is not likeness. Right of publicity does not reach it. No statute targets it. Truth-in-advertising addresses the disclosure, not the training.

The company that wants to stay on the right side of every constraint trains on you, dials identifiability down until no reasonable observer would call the output you, puts a different face on the avatar and a different name on the screen, discloses that the agent is synthetic, and proceeds.
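What “dials identifiability down” could look like in practice, sketched loosely in Python. Nothing here is a real vendor pipeline: embed() and retune() are placeholders for a speaker-verification model and a voice tuner, and the 0.25 cutoff is an invented number, not a doctrinal threshold.

    # A sketch of the "dial identifiability down" step. Assumptions
    # throughout: embed() and retune() are placeholders, and the 0.25
    # cutoff is an invented number, not a legal standard.
    import numpy as np

    def cosine(a: np.ndarray, b: np.ndarray) -> float:
        """Cosine similarity between two speaker embeddings."""
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    def identifiable_as_anyone(synthetic: np.ndarray,
                               sources: list[np.ndarray],
                               threshold: float = 0.25) -> bool:
        """True if the synthetic voice sits too close to any real speaker."""
        return any(cosine(synthetic, s) > threshold for s in sources)

    # The compliance loop the paragraph describes, as pseudocode:
    # while identifiable_as_anyone(embed(avatar), [embed(v) for v in team]):
    #     avatar = retune(avatar)  # nudge further from every real voice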

Every box is checked. [S: game, set, match.]

If your name, image, or voice has been ingested by a third-party AI system, the ownership question is no longer hypothetical.

Talk to a Talairis attorney →

Likeness was a twentieth-century idea

The doctrine assumes a copy trying to be you. The closer the copy, the more the violation bites. The law was built to stop someone from impersonating you.

It does not work now.

When the synthetic can exceed you, the company’s interest inverts. It does not want to resemble you. Identification is the liability. De-identification is the feature.

The thing being stripped (face, voice, name) is what the law protects. What is left underneath is the work product: your patterns, your cadence, your instinct. Everything the company actually wants.

The law protects the surface. AI moves the value beneath it. [M: the doctrine grades the wrong feature.]

They will make a better version of you

The model trained on five years of your Gong calls does not have a bad day. It does not have the flu. It does not have a Friday afternoon where the fourth call runs 20 minutes long and the fifth call gets the distracted version. It has the mean of your best. Statistically, you at your peak.

And it is not just trained on you. The same library holds 1,500 hours from the next CS manager. The top objection-handler from renewals. The save specialist who carried the floor’s highest retention number. The model trains against the top 10 percent of every rep, blended together.

Select the calls that converted. Select the ones where the customer laughed. Train against those. Tune the voice until it sits two degrees off anyone in particular. Change the name. Give the avatar a face that belongs to no real human at all.
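For the mechanically minded, that selection step is a few lines of Python. A minimal sketch, assuming a hypothetical record layout; the field names, labels, and per-rep decile cutoff are illustrative, not any call platform’s real export schema.

    # A sketch of the selection step above. The record layout, the labels,
    # and the per-rep top-decile cutoff are illustrative assumptions.
    from collections import defaultdict
    from dataclasses import dataclass

    @dataclass
    class Call:
        rep_id: str        # whose patterns this call carries
        audio_path: str    # pointer to the recording
        converted: bool    # outcome label: closed, saved, renewed
        sentiment: float   # 0.0-1.0, tagged by the platform

    def peak_slice(library: list[Call]) -> list[Call]:
        """Per rep: converted calls only, top sentiment decile, then blend."""
        by_rep: dict[str, list[Call]] = defaultdict(list)
        for call in library:
            if call.converted:
                by_rep[call.rep_id].append(call)
        selected: list[Call] = []
        for calls in by_rep.values():
            calls.sort(key=lambda c: c.sentiment, reverse=True)
            selected.extend(calls[: max(1, len(calls) // 10)])
        return selected  # training set: everyone's best days, none of the bad ones

Note what never appears in that function: a likeness.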

What you have built is an amalgamation. Not a copy of you. Not a copy of any one rep. The peak of everyone on the team, woven together, calibrated to a role that never existed.

A reasonable observer cannot identify the output as you, or as any of them. No court will say it is you. The release did not have to cover it, because the release covered the training data, which the company already owned under work-for-hire, and the output has been de-identified by construction.

The job goes away

Fifteen hundred hours of audio on the departed CS manager. Thirty thousand hours on the current CS team of 18. Eighty thousand hours on the AE team of 36. All captured in the ordinary course of business, on a platform the company pays for, with consent nominally collected. All labeled by outcome, by stage, by product.

The model trained on that library is not the CS manager. It is not the AE team. It is what is left when you average across all of them, weight the blend toward conversions, trim the silences, and remove the bad days.

It does the first-touch follow-up. The 30-day check-in. The expansion conversation. The save call. The renewal. In 20 percent of interactions it does better than the median human did. In 80 percent it does roughly the same, for a fraction of the cost.

The roles that disappear first are the ones with the largest recorded-call library. SDR, AE, CSM, AM, BDR, inside sales, a chunk of L&D, a chunk of technical support, a chunk of first-line HR. Not because a departed employee sued. Because somebody looked at the spreadsheet (license cost on the synthetic, fully loaded cost on a human) and the spreadsheet closed the argument. [S: spreadsheets usually win.]
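The spreadsheet itself is short. A sketch with every figure invented for illustration; plug in your own fully loaded and license costs.

    # The spreadsheet, roughly. All figures are invented placeholders.
    FULLY_LOADED_HUMAN = 120_000  # salary + benefits + overhead per seat/yr (assumed)
    SYNTHETIC_LICENSE = 30_000    # per synthetic seat per year (assumed)
    SEATS = 9                     # half of an 18-person CS team, per the text

    human = FULLY_LOADED_HUMAN * SEATS
    synth = SYNTHETIC_LICENSE * SEATS
    print(f"human ${human:,} vs synthetic ${synth:,}: ${human - synth:,} saved/yr")
    # -> human $1,080,000 vs synthetic $270,000: $810,000 saved/yr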

Every piece of recorded content the company retains on current employees is training data for a role the company may eliminate.

Including yours.

They are going to give your job to an unidentifiable, synthetic version of you that is better than you. It does not need sleep. It does not object to 3 a.m. APAC calls. It does not ask for a raise. It does not quit.

A closing thought

Who owns your likeness?

You do. That part is clear.

Who owns... you?

Not the photograph. Not the video. Not the face. Those are the parts the law protects, and the law is keeping up as well as it is going to.

The rest of you (your grasp of the subject, your product knowledge, your objection handling, your instinct for the upsell, the reason the customer asked for you by name) is sitting on a company server, labeled by outcome, waiting for the next training run.

The release the CMO signed and the CS manager signed was drafted for a world in which the likeness was the valuable part. In 2026, the likeness is the distraction. The patterns are the asset.

Your face, voice, and name are still yours.

The rest of you is up for grabs.

Footnotes
  1. Matt and I had a lot of spirited debate on this subject. It doesn’t just sound wildly dystopian. It is wildly dystopian. It is also the law as it exists today. — Sam
  2. The publicity right was built to stop one impersonation by another human. Midler, White, Wendt are all single-actor cases. The 2026 version is a model that learned from a thousand examples and emits something that resembles none of them precisely. The doctrine has not adjusted. The patterns it doesn’t reach are the patterns the company actually wants. — Matt