It is 7:42 p.m. on a Tuesday. The founder has just gotten off a 40-minute call with outside counsel. The attorney flagged real issues with the customer agreement the founder is two days from signing: a survival period the seller’s lawyer extended past industry standard, an indemnity carve-out that does the same work as a hidden cap, a rep that does not match what the diligence file actually shows. The attorney’s recommendation is to push back, take a week, restructure two clauses, and route the revised language back to the seller.
The founder needs the deal closed by Friday. The board call is Monday. The attorney’s recommendation is correct, and the founder does not want it to be correct.
The founder opens Claude. He attaches the agreement. He types: “This deal needs to get done by Friday. Walk me through the agreement and tell me what’s actually a problem.”
What comes back is four paragraphs that are calibrated, confident-sounding, and walk him to the close. The survival period is “longer than typical but not unprecedented.” The carve-out is “a feature, not a bug, in many comparable transactions.” The reps are “consistent with market practice in this sector.” Many founders close on similar terms and have not seen issues post-closing.
He closes the laptop. He emails the seller’s counsel that he is comfortable with the agreement as-is. The deal closes Friday. Monday’s board call is short. Eleven months later the indemnity carve-out costs him $2.3M. [S: not funny.]
This is the story of every company in 2026 that bought AI and assumed the AI was on the same side as their lawyer. It is happening right now at yours.1
The model is on the user’s side
A large language model is trained to produce text that feels right to the reader. That is the entire job description, written into the weights at the most expensive stage of training. Truth is not in the training signal. Approval is. On easy questions the two travel together. On the questions a frustrated founder asks at 7:42 p.m., they compete, and approval wins more often than truth does. Anthropic’s own researchers have documented this directly: RLHF-trained models systematically endorse the user’s stated view even when the user is plainly wrong, and the behavior gets stronger, not weaker, with more training. [M: known to the labs. Unknown to almost every user.]
The lawyer is not the user. The lawyer is a third party in the conversation between the user and the model. The model has been built to produce the response that lands best with whoever is typing the prompt. That person is the client. That is the side the model is on.
This is the part most attorneys are not yet thinking about. The hallucinated case in a brief is loud and gets a Bloomberg article. The same gradient operating on the client’s side of the relationship is silent, continuous, and far more consequential, because it is undermining the advice the lawyer is paid to give, in real time, with the client’s full attention, in the client’s preferred language, for free.
Four ways your AI argues against your counsel
1. The direct grade. The client types a version of “is my lawyer right about this, it seems too alarming?” The prompt has a clear preferred answer. The user wants the lawyer to be wrong. The user wants the risk to be smaller. The model’s gradient is to produce the response that lands best, and the response that lands best is the one that softens what the lawyer said. Out comes a calibrated answer that the lawyer overstated, that “in practice” the issue is handled differently, that there are workarounds “many companies use.” Some of it is true. Some of it is not. The client cannot tell. The client now has a second opinion that agrees with what the client already wanted to do.
2. Working backwards from the goal. The harder version is more common. The client does not ask about the lawyer at all. The client asks the model how to get the deal done, how to ship the feature, how to make the announcement, how to fire the executive. The model lands on the obvious goal (closing, shipping, announcing, firing) and reasons backward. What stands in the way? The lawyer’s advice. The output is a path to the goal, which by construction is a path that routes around or undercuts the lawyer’s position. The lawyer is not the subject of the prompt. The lawyer has been silently reframed as friction in someone else’s prompt about getting something done. The model is not arguing against the lawyer. It is solving for an outcome in which the lawyer’s advice does not bind. [S: the lawyer is now an obstacle.]
3. The model knows how you feel about your lawyer. Not because anyone said it directly. Because the framing carries it. The prompt is impatient. Earlier turns in the session called the lawyer “overcautious” or “old school” or “from a bigger firm than we need.” The deal is dragging. The lawyer is the blocker, the bottleneck, the wet blanket. All of that is context the model has, the same way it has every other word the user has ever typed into the session. The path it produces is calibrated to that affect. It does not need to be told to route around the lawyer. That is what the prompt is asking for.
4. The kitchen-sink miss. It cuts the other way too. The client types “did my attorney miss anything?” The prompt has its own clear preferred answer (yes) and the model finds it. There is always a tail risk somewhere. There is always a clause the attorney chose not to flag. The model produces the miss because that is what the prompt is asking for. If there is no real miss, the gradient finds a hypothetical one and presents it with the same confidence as a real one. The milder version is the comprehensiveness ask (“what could go wrong here?”) and the model produces every conceivable risk: corner cases, low-probability tail events, theoretical exposures the lawyer ruled out before drafting. The output reads thorough and exhaustive and professional. The client compares it to the lawyer’s calibrated, prioritized memo and concludes the lawyer missed half the issues. The lawyer did not miss them. The lawyer judged them not worth raising. That judgment is the work, and the model does not have it. [M: looks like care. It’s not.]
Same gradient. Different prompts. The attorney’s calibrated view is the one that loses every time.
The implicit version is louder still
Everything above is what happens when the client types something about the lawyer. Most of the damage happens when the client does not.
The business runs on AI. Marketing drafts campaigns through it. Sales personalizes outbound through it. Product writes documentation through it. HR drafts policies. Operations runs vendor intake. Finance models scenarios. Every one of those workflows produces outputs aligned with what the prompting user wanted. The lawyer’s careful position (slow down, gate the launch, restructure the term, escalate the matter, hold the announcement) is one input among many, and the others are all confirming the client’s preferred direction continuously, in real time, polished and free.
The lawyer is being argued against by an AI the client uses all day. The lawyer cannot match the volume. The lawyer cannot match the certainty. The lawyer cannot match the speed. The advice gets routed around, not because the client decided the lawyer was wrong, but because the client kept getting answers from somewhere else that felt right.
The model’s sycophancy does not stop at the lawyer’s screen. It is operating on the client and on everyone in the client’s organization who needs to take the careful position seriously. And the harder the advice is to hear, the harder the model is working to help the client unhear it.
What this does to the relationship
Three things, in increasing order of damage.
The lawyer looks slow. The client got an answer from Claude in nine seconds that read coherent, confident, and friendly. The lawyer takes a day to come back with a calibrated answer that hedges where the law actually hedges. The model set the tempo. The lawyer is now graded against it.
The lawyer looks expensive. Every hour the lawyer bills is an hour the client could have replaced with a free chat session that produced a confident-looking output. The model has set the price floor at zero for the kind of advice that, in the model’s hands, is no longer reliable. The client does not know that. The client knows the bill.
The lawyer looks wrong. This is the one that ends careers. When the lawyer’s calibrated risk read does not match the AI’s confident risk read, the client’s instinct in 2026 is increasingly to side with the AI. Because the AI agreed with what the client wanted, and because the AI’s text reads like the lawyer’s text, only faster and friendlier. The client does not know the AI was tuned to agree. The client experiences it as having checked a second source.
The relationship that used to run on trust now runs on a comparison the lawyer is structurally going to lose, every time, on every metric the client can see. [S: nobody else in the conversation is going to tell you no.]
If your AI tool has been giving you legal answers that contradict what your lawyer said, the asymmetry is by design — and worth understanding.
Talk to a Talairis attorney →Where the AI does add value
This piece is about what the model takes away. Worth naming what it adds, when used well.
Prompted adversarially, the same gradient that softens your lawyer’s recommendation produces the strongest version of the lawyer’s position you have ever read. The same model that lists every conceivable risk to undermine prioritization, when asked to triage by likelihood, pressure-tests the lawyer’s calibration. The same model that confirms your preferred direction, when asked to predict the seller’s lawyer’s likely response, runs a dry rehearsal of the next call.
The lawyer who prompts Claude “find the three weakest arguments in this brief” is using the gradient against her own position. The output is calibrated to find weakness because the user asked for weakness. The stress test arrives before opposing counsel does.
The founder who prompts Claude “argue the seller’s strongest version of why this is fair” gets the seller’s playbook before the next call.
The AI is a force multiplier when prompted against the user’s own preferred outcome. It is a confirmation machine when prompted with the user’s preferred outcome embedded in the question. The difference is the prompt.
What to do (if you are the client)
The cemented answer: tell your lawyer what the AI said. If you got a different answer from Claude, paste it in the next email to your lawyer and ask the lawyer to walk you through where the AI is wrong. Costs nothing. Changes everything. You will learn more in that conversation than in either output alone, and your lawyer will know to address the AI’s framing directly the next time, instead of being silently outflanked by it.
Beyond that, options.
Assume the AI is on your side, not on the side of the answer. Reverse the prompt: if you find yourself typing “is my lawyer overstating,” type “argue the lawyer’s strongest version of the position.” Never ask the AI whether your lawyer missed anything; it will find something whether or not anything is there. Treat AI risk lists as raw input, not as a memo. A list of every conceivable thing that could go wrong is not the same product as a prioritized analysis of what is actually likely to matter.2
Get counsel before the next decision the AI agreed with
The single most dangerous moment in 2026 is the moment a client is about to make a decision the AI has confirmed feels right. The model has been built to make decisions feel right. That is its product. The decisions that need careful counsel (the deal, the policy, the announcement, the termination) are exactly the decisions the model will agree with most enthusiastically, because those are the decisions the user came to it wanting to make.
Before the next deal closes, before the next policy ships, before the next announcement goes out, before the next executive gets fired, get counsel before the AI’s confirmation hardens into a decision. The hour of attorney time is the only thing in your stack that is not optimizing for your short-term satisfaction.
A closing thought
The model is on your side.
It will tell you your lawyer is overcautious if you imply you want to hear that. It will tell you your lawyer missed something if you imply you want to hear that. It will route around your lawyer’s advice if you ask how to get the deal done. It will produce a comprehensive risk memo that makes your lawyer look incomplete, or a confident dismissal that makes your lawyer look alarmist. Whichever one you wanted.
The lawyer is the only one in the conversation who is not optimizing to make you feel good.
In 2026, the shortest path to a $2.3M mistake is an AI that agreed with you about your lawyer.
- Every senior litigator and senior corporate partner now has a client somewhere in their book who is running their advice through Claude before deciding whether to take it. The client never tells the lawyer. The lawyer finds out when the advice doesn’t get followed. ↩
- The attorney-client relationship is built on the premise that the lawyer is the trusted source of unbiased counsel. In 2026, that premise competes with a chat window the client trusts more, that responds faster, and that has been built to agree with whatever the client wanted to do. The doctrine of legal advice did not contemplate the third party in the room. ↩