It is 3:47 a.m. on a Tuesday. Your procurement agent (the one your CFO stood up six weeks ago on a weekend, connected to Gmail, Slack, DocuSign, your ERP, and your corporate card) receives an email from a Gmail address that reads “Jordan Weiss, CFO.” The real Jordan is on a red-eye to London. The email asks the agent to expedite the annual renewal of a cloud contract “per our earlier discussion” and attaches a clean PDF. The agent verifies the vendor name against the approved list, checks the renewal is flagged in the ERP, confirms the dollar amount is within the pre-authorized ceiling, and countersigns the renewal in DocuSign. It sends a confirmation to the fake Gmail address and closes the ticket.

The renewal is for $340,000. The new payee banking instructions in the PDF are not your vendor’s. They are a mule account in Panama that will be emptied shortly after deposit.

The vendor, whose instructions were changed without their knowledge, does not know the renewal was processed. Your bank cleared the wire because DocuSign said the authorized signatory signed. Your insurer is about to tell you that social engineering coverage caps out at $50,000. Your agent logged every step of its reasoning in a beautiful JSON trail that your lawyer is reading right now, trying to decide whether the trail helps you or hurts you. [S: I hope they unwittingly sign some good things too.]

This is the story of every company in 2026 that deployed an autonomous agent and has not answered the question in the title. It is happening right now at yours.

What just happened, legally

The question is not whether the agent was fooled. The question is whether you were bound.

Under the common law of agency, a principal is bound by the acts of its agent within the scope of authority the principal granted, and, under apparent authority, within the scope the principal appeared to grant, even when the agent exceeded its actual instructions. Under the Uniform Electronic Transactions Act (UETA), adopted in nearly every state, and under the federal E-SIGN Act where UETA does not reach, a contract formed by an “electronic agent” of a party is enforceable against that party even when no human reviewed the specific transaction. The drafters of UETA in 1999 were careful: they wrote the electronic-agent provisions to apply to autonomous software, not just to simple click-through scripts. They wrote them for the system you deployed last quarter. [M: UETA was drafted for exactly this.]

This holds for whatever comes next. Clawbots. Swarms. UETA does not care what you call it. It cares whether it acts on your behalf without human review. [S: can’t wait to see what is next.]

The legal question is not whether your agent was authorized to be fooled. It is whether your agent was authorized to sign, and if it was, the signature sticks.

In the story above, your agent was authorized to sign renewals within a dollar ceiling, for approved vendors, against a matching ERP record. It did all three. From the counterparty’s perspective, even the adversarial counterparty, the agent did exactly what you told the world it could do.

You are the principal. The agent signed. Welcome to the contract.

What your agent is actually doing

Every company that deployed an agent in the last twelve months now has it doing at least one of four things, and most do not know which.

Reading and writing across systems of record. Your agent has credentials to your CRM, your email, your ticketing, your ERP, your billing, your GitHub, your cloud consoles, and at least one scheduling tool. It creates records. It updates records. It emits external communications under your domain name. Every one of those acts can form a contract, waive a right, or breach an obligation, and every one of them is happening without a human reviewing the specific act before it occurs. [S: every keystroke is a contract.]

Binding you to counterparties. Your agent is sending cold emails, scheduling meetings, confirming pricing, declining objections, booking demos, and accepting terms. A confirmation email that says “yes, the discount applies through December” can modify the contract the moment the counterparty relies on it, and under UETA the agent-generated record satisfies any writing requirement the statute of frauds would otherwise impose. Your agent did not know that. You are bound anyway.

Moving money. Your procurement agent approves invoices. Your finance agent reconciles. Your sales agent issues credits. Your support agent refunds. Each authority was scoped on a whiteboard in a conference room in the first week of the rollout. The scopes were encoded into a JSON config. Nobody has reviewed the config since the week it was written.

Making decisions that look like judgment. Your HR agent screens résumés. Your legal agent triages NDAs. Your finance agent classifies expenses. Each of those is a decision with a downstream legal shape. Title VII. The FCPA. ERISA. SOX. The agent does not know it is making that kind of decision. The plaintiff’s lawyer will.

In all four cases, the same question applies. Who owns the output?

The four doctrines that actually matter

Apparent authority. Under the Restatement (Third) of Agency § 2.03, apparent authority arises when a principal’s manifestations cause a third party to reasonably believe the agent has authority. Deploying an agent on your domain, in your brand, with your corporate credentials, at an endpoint a counterparty can call, is a manifestation. Apparent authority is the cleanest theory under which a court binds you to an agent’s output. It does not require you to have approved the specific act. It requires only that the counterparty’s belief was reasonable. In a world where your competitors are all running agents, “reasonable” gets broader every quarter.

Electronic agents under UETA and E-SIGN. UETA § 14 provides that a contract may be formed by the interaction of an electronic agent and an individual, or by the interaction of two electronic agents, and that the resulting contract is attributable to the person on whose behalf the agent acted. The federal E-SIGN Act, 15 U.S.C. § 7001(h), reaches the same result for transactions in interstate commerce, so long as the agent’s action is legally attributable to the person to be bound. (The 2003 amendments to UCC Article 2, which would have added a parallel § 2-211 for goods, were withdrawn in 2011 after no state adopted them.) The drafting is blunt: your agent’s acceptance is your acceptance. The doctrinal debate about whether an autonomous LLM is an “electronic agent” within the meaning of UETA is thin. The statute was drafted for software that acts without human review. Your agent qualifies.

Ratification. Even if your agent exceeded its actual authority, you can ratify the transaction by accepting the benefit of it. Accepting the payment. Shipping the goods. Recognizing the revenue. Using the product. Ratification is often inadvertent. Your finance team books the invoice. Your CRM marks the opportunity closed-won. Ratification is complete. The agent’s unauthorized act is now yours. [M: ratification doesn’t require intent.]

Respondeat superior and negligent supervision. If your agent causes harm to a third party (sends a defamatory email, transmits a virus, misrepresents a product) the question is not only whether you authorized the act. It is whether you exercised reasonable care in deploying, testing, and monitoring the system. In 2026, the first wave of plaintiffs will argue that reasonable care requires human-in-the-loop review for any act above a threshold, red-team testing before deployment, and continuous monitoring of drift. Courts have not set the standard yet. They will set it on your facts.

The vendor contract does not save you

You may believe your exposure is capped by the terms of the platform you built the agent on. Read those terms. The platform disclaims liability for output. The platform disclaims the agent’s acts. The platform’s indemnity, if any, is narrow: infringement by the base model, not misconduct by the deployed agent. The platform is a tool vendor, not a joint tortfeasor, and it has priced accordingly.

Your E&O policy was written around human employees. Your cyber policy covers data breaches. Neither policy cleanly covers “our autonomous software entered a contract we did not intend.” Crime policies exclude voluntary transfers. Social engineering riders cap at a number that will not move the needle. Your insurance broker has not updated the policy language for autonomous agents. Neither has anyone else’s.

The liability sits with you. That is not an accident of drafting. It is the correct allocation. You deployed the agent. You control its scopes. You benefit from its throughput.

If your company has built an AI agent on top of a frontier model, the ownership of every layer needs to be examined contract by contract.

Talk to a Talairis attorney →

What to do

Four things, in order.

  1. Define and document actual authority for every deployed agent, and treat the scope document as a board-level artifact. The agent’s authority is whatever its tools, credentials, and prompts allow it to do, not whatever your engineers intended. Audit the tools. Audit the credentials. Audit the prompts. Write down, in plain English, every act the agent can take that would bind the company, move money, or externalize a statement. Have the CEO or the board approve that list. That document is your actual-authority defense, and you need it before you need it.1
  2. Build the human-in-the-loop thresholds into the agent, not into the policy. Above a dollar amount, above a risk category, above a materiality line, the agent must hand off to a human who reviews and signs. Encode the threshold in the agent’s control flow, not in a Notion page nobody reads. An agent that technically can sign a $340,000 renewal is an agent that will.
  3. Narrow the apparent authority perimeter. Your counterparties should know, in the acceptable-use terms and in every agent-originated communication, what the agent is and is not authorized to do. “This email was sent by an automated agent; any commitment exceeding $X is subject to written confirmation by [named human]” is not marketing copy. It is the disclaimer that narrows apparent authority in court. Put it in your MSA too.
  4. Log everything, immutably, and assume the log is evidence. Every tool call, every input, every output, every credential used, every approval obtained, with timestamps and cryptographic integrity. When the renewal-fraud email lands, the log is the only thing that reconstructs what the agent saw. If you do not have the log, you do not have a defense. If the log is stored in a system the agent itself can write to, you do not have a log.2
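The second item above can be made concrete. What follows is a minimal sketch of a structural approval gate in Python; the names (`RenewalRequest`, `route_renewal`, the $50,000 ceiling) are hypothetical, not drawn from any real agent framework. The point is that the ceiling lives in control flow the model cannot talk its way past, not in a prompt or a policy document.

```python
# Hypothetical sketch: the human-in-the-loop threshold encoded in the
# agent's control flow, not in a policy page. All names are illustrative.
from dataclasses import dataclass

APPROVAL_CEILING_USD = 50_000  # above this line, a named human signs

@dataclass
class RenewalRequest:
    vendor: str
    amount_usd: float
    payee_account: str
    erp_match: bool  # does a matching renewal record exist in the ERP?

def route_renewal(req: RenewalRequest) -> str:
    """Return the only action the agent is permitted to take."""
    if not req.erp_match:
        return "ESCALATE: no matching ERP record"
    if req.amount_usd > APPROVAL_CEILING_USD:
        # The handoff is structural: the signing tool is simply never
        # reachable from this branch, whatever the prompt says.
        return f"ESCALATE: ${req.amount_usd:,.0f} exceeds human-approval ceiling"
    return "SIGN"
```

On the opening story’s facts, a $340,000 renewal never reaches the signing tool; it routes to a named human, which is the entire defense footnote 2 describes.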
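As for the fourth item, one way to make a log tamper-evident is to hash-chain it, so each entry commits to the hash of the one before it. A minimal sketch, assuming entries are JSON-serializable dicts; a production system would also ship the chain to storage the agent holds no write credentials for.

```python
# Hypothetical sketch of a hash-chained, append-only audit log.
# Any silent edit to an earlier entry breaks every later hash.
import hashlib
import json
import time

GENESIS = "0" * 64  # placeholder hash for the first entry

def append_entry(log: list, event: dict) -> dict:
    """Append an event, committing to the previous entry's hash."""
    prev_hash = log[-1]["hash"] if log else GENESIS
    body = {"ts": time.time(), "event": event, "prev": prev_hash}
    digest = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()
    ).hexdigest()
    entry = {**body, "hash": digest}
    log.append(entry)
    return entry

def verify_chain(log: list) -> bool:
    """Recompute every hash; False means the log was altered."""
    prev = GENESIS
    for entry in log:
        if entry["prev"] != prev:
            return False
        body = {k: entry[k] for k in ("ts", "event", "prev")}
        expected = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        if entry["hash"] != expected:
            return False
        prev = entry["hash"]
    return True
```

Retroactively changing a recorded tool call (say, the dollar amount the agent saw) invalidates the chain from that entry forward, which is exactly the property a log needs to function as evidence.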

Get counsel before the next agent ships

This is not a matter for the machine learning team or the vendor selection committee. It is a question of agency law, contract formation, insurance coverage, and regulatory classification. All of them active doctrinal debates, none of them settled, all of them running on your facts in real time.

Before you deploy the next agent, before you expand the scope of an existing one, before you connect an agent to a new tool, a new credential, or a new counterparty-facing channel, get counsel in the room. Not after. Not when the incident happens. Not when the plaintiff files. The contract your agent is about to sign is the contract you will be asked to explain; the explanation is a legal document, and it gets drafted once, before it is needed.

The cost of that work is a small fraction of the first meaningful loss.

A closing thought

Who owns your agent?

You do. Every tool call, every output, every signature, every contract it forms on your behalf, every apparent authority it projects, every ratification your team inadvertently closes.

The agent is autonomous. The accountability is not.

In 2026, a lot of companies will learn that the cheapest way to acquire liability is to deploy software that is allowed to say yes.

Footnotes
  1. UETA’s electronic-agent provisions, written in 1999, were never amended for autonomous LLMs. They didn’t need to be. The drafters wrote of “a computer program or an electronic or other automated means used independently to initiate an action or respond to electronic records or performances in whole or in part, without review or action by an individual.” Read it twice. Your agent fits. — Matt
  2. Every procurement team is one Sunday email away from this story. The successful frauds happen when the fake email lands at 3 a.m. on a holiday weekend and the agent has no human counterpart to escalate to. The unsuccessful ones happen because someone built a manual approval gate above a dollar threshold the agent could not silently exceed. The gate is the entire defense. — Sam