Open your company’s privacy policy in one tab. Open a network inspector on your homepage in another. Let the page fully load and count the third-party domains your site called. Count the cookies set. Count the pixels that fired before anyone consented to anything.
Now search your privacy policy for the words “processor,” “service provider,” “vendor,” “partner,” and “third party.” Your policy, last materially edited in September 2022 by an associate at your old firm, names four of them. Your network tab just logged 23. Among them: a session replay tool your CX team stood up during a 2023 onboarding push; a pixel your growth team added to A/B test a new landing page last April; an AI support summarizer whose Terms of Service your COO accepted over lunch in March; a chatbot vendor whose DPA was supposed to be countersigned in Q2 and never was; and an attribution tool quietly added to your Google Tag Manager by a marketing contractor whose email no longer resolves.
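The opening exercise can be run mechanically. A minimal Python sketch, assuming you export the network inspector's log as a HAR file and have the policy text on disk; the file names and the first-party domain are placeholders, not anything from your stack:

```python
# Sketch: compare the domains your homepage actually calls (from a HAR
# export of the network inspector) against the vendor vocabulary your
# privacy policy uses. FIRST_PARTY and the file names are assumptions.
import json
import re
from urllib.parse import urlparse

FIRST_PARTY = "example.com"  # your own registrable domain (placeholder)

def third_party_domains(har_path: str) -> set[str]:
    """Collect every third-party hostname the page called."""
    with open(har_path) as f:
        har = json.load(f)
    hosts = {urlparse(e["request"]["url"]).hostname or ""
             for e in har["log"]["entries"]}
    return {h for h in hosts if h and not h.endswith(FIRST_PARTY)}

def vendor_mentions(policy_text: str) -> dict[str, int]:
    """Count how often the policy uses its vendor vocabulary."""
    terms = ["processor", "service provider", "vendor", "partner", "third party"]
    return {t: len(re.findall(t, policy_text, re.IGNORECASE)) for t in terms}

# Usage (hypothetical files):
# print(len(third_party_domains("homepage.har")))   # your site's own "23"
# print(vendor_mentions(open("privacy_policy.txt").read()))
```

The delta between the two numbers is the rest of this article.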
Your policy says you do not sell personal information. Your growth team has been uploading customer email hashes to Meta for lookalike audiences since 2023.
Your policy says you retain data “for as long as necessary.” Your Snowflake account has a table called `events_raw_2019`.
Your policy says you use data “to improve our services.” Your engineering team routed customer support transcripts into a fine-tuning pipeline last quarter. Nobody asked your privacy team.
This is the story of every company in 2026. It is happening right now at yours.
What a privacy policy actually is
A privacy policy is not a product document. It is not a marketing asset. It is a legal representation made to the public and to regulators about what you do with personal information. Under Section 5 of the FTC Act, a material misrepresentation in that policy is an unfair or deceptive practice, enforceable by the FTC and by state attorneys general under their little-FTC acts and UDAP statutes. Under the CCPA/CPRA, it is a disclosure you are required to make accurately, with specified elements, updated at least annually. Under GDPR Articles 13 and 14, it is how you discharge your transparency obligations to every data subject in the EU and UK. Under the state comprehensive privacy laws (California, Colorado, Connecticut, Virginia, Utah, Texas, Oregon, and the others that have come online since), it is both a mandatory disclosure and a self-imposed set of commitments to your users.1
The policy is the claim. Every other document (your DPA inventory, your vendor list, your data map, your consent records) is either the proof or the disproof.
If you are not sure which, you already know.
What your policy was designed to say
Your policy was written, first, to satisfy CCPA’s original 2020 disclosure requirements. It listed categories of personal information collected, purposes of processing, categories of third parties, and consumer rights. Good. It was updated once for GDPR, once for CPRA, and maybe once more when California Privacy Protection Agency regulations took effect. Competent. It was probably structured as a layered notice, with a short summary and a long detail section. Professional.
None of that is the problem.
The problem is that the policy described the state of your data flows at the moment it was drafted, and your data flows have changed every week since. New pixels. New processors. New AI tools. New internal pipelines. New vendors your sales team bought without routing through procurement. New features your product team shipped that collect new fields. A policy that was accurate in September 2022 is a snapshot of a system that no longer exists. Pointing at the snapshot is not a defense.
It is the evidence.
What regulators have already done with this
Drizly (2022). The FTC alleged security failures and deceptive practices regarding data protection, and the remedy included personal liability for the CEO: a named individual, with orders that follow him to his next company. The enforcement signal is plain. The FTC will name executives when the gap between policy and practice is culpable.
GoodRx (2023). The FTC’s first Health Breach Notification Rule action alleged that GoodRx shared sensitive health information with advertising platforms while its privacy policy represented otherwise. Settlement: $1.5 million, and a multi-year order. The mechanism of the alleged violation was a Meta pixel. Not a sophisticated breach. A marketing tag the growth team added.
BetterHelp (2023). $7.8 million, FTC. Same fact pattern: a pixel that exceeded the policy’s representations.
Sephora (2022), DoorDash (2024), and a growing line from the California AG. The theme is constant. The state reads the policy, scrapes the site, compares. The gap is the case.
Kochava (ongoing) and the X-Mode / Outlogic line. Regulators scrutinizing location data sold, claimed not to be sold, or claimed to be anonymized. The theory is the same. The policy made a representation. The data flow exceeded it.
The cases are not complex. The evidence comes from your own website and your own contracts. Discovery is a browser and a subpoena.
The four gaps every company has
The processor drift gap. Your engineering, marketing, CX, and product teams adopt new tools continuously. Your privacy team does not. Every new tool is a new processor, or worse, a new recipient of a “sale” or “share” under CCPA/CPRA. Each one requires a DPA, a disclosure update, and often a consent-mechanism update. The disclosures in your policy are a frozen list against a moving system. The list was never right for longer than a month at a time, and it has not been updated in three years.
The pixel and SDK gap. Your marketing stack calls dozens of third-party domains your policy does not name. Some of those calls transmit hashed identifiers, IP addresses, page URLs, and event data to platforms that use the data for cross-site advertising. That is a “sale” or a “share” under the CPRA regardless of whether money changes hands, and it triggers an opt-out-of-sale-or-sharing obligation your site almost certainly is not honoring correctly. A single Meta Pixel, configured with default advanced matching, is enough to create the GoodRx fact pattern.
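That fact pattern is detectable from the same HAR export. A sketch that flags requests to the pixel's commonly observed `facebook.com/tr` endpoint carrying `ud[...]` advanced-matching parameters; the endpoint and parameter names reflect how the pixel is typically seen on the wire, and should be treated as assumptions to verify against your own traffic:

```python
# Sketch: flag network calls that look like a Meta Pixel firing with
# advanced matching (hashed identifiers in ud[...] query parameters).
# The HAR file name, the /tr endpoint, and the ud[...] parameter names
# are assumptions to verify against your own captured traffic.
import json
from urllib.parse import urlparse, parse_qs

def pixel_matching_params(har_path: str) -> list[dict]:
    """Return pixel calls that carry advanced-matching parameters."""
    with open(har_path) as f:
        entries = json.load(f)["log"]["entries"]
    findings = []
    for e in entries:
        url = e["request"]["url"]
        parsed = urlparse(url)
        if "facebook.com" in (parsed.hostname or "") and parsed.path.startswith("/tr"):
            qs = parse_qs(parsed.query)
            matched = sorted(k for k in qs if k.startswith("ud["))
            if matched:
                findings.append({"url": url, "advanced_matching": matched})
    return findings
```

If this function returns anything your policy does not disclose, you have the gap this section describes.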
The AI processor gap. Every AI tool your teams adopted in 2024 and 2025 (summarizers, transcribers, support assistants, call coaching platforms, lead scorers, email drafters, research agents) is a processor of personal data in the regulatory sense. Most of them are not in your disclosure list. Many of them lack a signed DPA. Some run on consumer-tier accounts, under terms that cannot make the vendor a processor at all. Your privacy policy does not tell users that their support ticket, their voice, their transcript, or their résumé is being processed by an AI vendor. When it comes out, and it will, the disclosure gap is the legal hook.
The purpose-drift gap. Your policy said the data was collected “to provide the service” and “to improve the service.” Your engineering team used it last quarter to train a model. “Training” is not “improving.” Under CPRA and most state regimes, training a model is a separate purpose requiring separate disclosure, and in some jurisdictions a separate opt-out. Under GDPR, it is a change in purpose that requires a fresh lawful basis. Your policy does not cover it. Your emails to users do not cover it. The fine-tuning run was not illegal, but the silence about it is actionable.
If you haven’t compared what your privacy policy promises against what your engineering team actually does with user data lately, it’s a worthwhile afternoon.
What this means for you
Three things, plainly.
The regulators are not waiting for breaches. They are scraping your site. They are reading your policy. They are running the same comparison any competent plaintiffs’ lawyer runs, and they are filing on the delta. Your worst week begins with an email from the California Privacy Protection Agency, not a ransomware note.
The plaintiffs’ bar has found the form. CIPA and wiretap claims against session-replay, chat, and pixel implementations are now a cottage industry. The theory is that your undisclosed interception of user communications violates the state wiretap statute. Settlements have been in the seven-to-low-eight-figure range. The defense costs have been worse. The key predicate: your policy did not disclose what your site did.
Your policy is evidence against you until you make it evidence for you. You cannot selectively rely on it. The jury gets the whole document. The regulator gets the whole document. The parts you do not like are not severable.
What to do
Four things, in order.
- Do a live data map, not a paper one. Load your site in a clean browser and log every third-party call. Open your production environment and enumerate every processor with access to personal data. Pull every SaaS invoice from finance and ask, of each line item, “does this touch personal data?” Consolidate into a single map that names every processor, every purpose, every data category, every legal basis, and every jurisdiction. Reconcile to your policy. The gaps are the work.
- Rewrite the policy to describe what you actually do, with the categories and purposes your regulators require. Not the aspirational version. Not the 2019 version. The current-state version, written in plain English, disclosing the AI processors, the advertising partners, the session tools, the analytics, the training uses, the retention realities, and the transfer mechanisms. Have counsel who does this weekly review it. The first accurate policy you publish is the one that resets your exposure clock.
- Stand up a change-control process for the policy the way you have one for production code. No new processor onboarded without a DPA and a policy-impact review. No new pixel or SDK added without a consent-mechanism review. No new data purpose introduced without a disclosure update and, where required, a new consent or opt-out. This is a program, not a one-time project. Running it costs less than a single enforcement action.2
- Fix the consent and opt-out mechanisms on the site. Global privacy control signals honored. “Do Not Sell or Share My Personal Information” link present, working, and honored upstream of every ad platform. Cookie banner that actually blocks loading until consent, not one that fires every pixel on page load and logs the consent afterward. Regulators can tell the difference. So can plaintiffs.
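The four steps above reduce to one recurring reconciliation: every live processor, checked against the paperwork. A sketch of that core check, with hypothetical names throughout:

```python
# Sketch: the reconciliation at the heart of step one. Each record pairs
# a processor observed in the wild (on the wire, in invoices, in prod)
# with what the paperwork says about it. All names are hypothetical.
from dataclasses import dataclass

@dataclass
class Processor:
    name: str
    purpose: str
    disclosed_in_policy: bool
    dpa_signed: bool

def gaps(live: list[Processor]) -> dict[str, list[str]]:
    """Group undisclosed and un-papered processors: the gaps are the work."""
    return {
        "undisclosed": [p.name for p in live if not p.disclosed_in_policy],
        "no_dpa": [p.name for p in live if not p.dpa_signed],
    }

# Usage (hypothetical inventory):
# live = [Processor("ai-summarizer", "support triage", False, False),
#         Processor("analytics", "product analytics", True, True)]
# gaps(live)  # both lists name "ai-summarizer"
```

The change-control process in step three is just this check, run on every new vendor before it goes live instead of on all of them after the fact.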
Get counsel before the next state AG letter
Privacy is not a legal review of a finished document. It is an operational program with a legal output. The output is the policy, and the policy has to be continuously reconciled against the system.
Before you renew the cookie banner vendor, before you onboard the next AI tool, before you approve the next pixel, before you accept the next SaaS contract, route it through counsel who does this daily and will actually update the disclosure, not just save the contract in the folder. The cost of the review is a small fraction of the cost of being the next named respondent in a consent order.
The FTC and the state AGs have made the doctrine clear. The gap is the case.
A closing thought
Your privacy policy lies.
Not because anyone at your company meant to lie. Because the policy was drafted to describe a system, and the system kept changing, and nobody updated the policy, and the result is a public legal representation that no longer matches the facts.
The regulators and the plaintiffs’ bar have already figured this out. They are running the comparison every week. The only question is whether you get to it first.
Rewrite the policy to match what you actually do. Build the program that keeps it that way. Do it before the letter arrives, not after.
1. Section 5 of the FTC Act is the doctrinal hub. CCPA, GDPR, and the state comprehensive privacy laws bolt on. Each adds elements. None subtracts the basic rule that the disclosure has to match the practice. The gap is the cause of action under all of them.
2. Privacy teams find out about new processors three ways. The vendor sends a renewal invoice. The user opt-out request lists a name nobody recognizes. The state AG letter cites the pixel by domain. The first two are inconvenient. The third one ends careers.