Your AI Agent Will Have a Budget. Who Controls It?

Adrian Bortignon

In the 1600s, the East India Company had a problem that should feel uncomfortably familiar if you’re deploying AI agents today.

The Company sent commercial agents (literally called “factors”) to run trading posts across Asia. These factors negotiated prices, picked suppliers, committed company capital, and made thousands of purchasing decisions on behalf of a boardroom sitting months away by ship. They were autonomous. They were commercially empowered. And for a long time, nobody back in London had any real idea what they were spending or why.

The Company learned the hard way what happens when you give an autonomous agent commercial authority without a governance framework. Factors optimised for their own logic. They bought from unapproved suppliers because the price looked better. They committed to volumes nobody had sanctioned. They made decisions that were perfectly rational from where they were standing and strategically disastrous from where the board was sitting.

So the Company built controls. Approved supplier lists. Spending authorities at different levels. Audit requirements for every significant transaction. Escalation rules when a purchase exceeded a certain threshold. They didn’t stop using factors. They governed them.

Four centuries later, businesses are about to make the same mistake. Only this time, the factors are software.

Your agent is a purchasing department

Every AI agent that connects to external services is making commercial decisions. When it needs to enrich a contact record, it chooses between computing an answer from its own reasoning or calling a third party data service. When it needs to pull information out of a document, it weighs the cost of doing it internally against the cost of handing it off to a specialist API. When it needs to verify an address, validate an email, or score a lead, it looks at the options and picks one.

These are purchasing decisions. They happen at machine speed, they happen thousands of times a week, and right now, almost nobody is setting the rules for how they get made.

The economics push hard toward delegation. Specialist services that do one thing really well can often deliver faster and cheaper results than an agent trying to reason its way through a general purpose approach. As more of these services pop up with transparent per request pricing, your agent will have an expanding marketplace of options every time it hits a subtask.

That’s genuinely useful. It means your agent gets better outcomes at lower cost. But it also means your agent is spending your money, constantly, with whatever logic it was deployed with.

Your agent has a wallet. Where’s the policy?

Here’s what should worry every business leader deploying AI agents: most organisations are thinking about AI governance as a safety and compliance exercise. What the agent is allowed to do. When it should escalate to a human. Which data it can access. All essential stuff, but all defensive.

What’s missing is the commercial dimension. AI governance isn’t just about preventing bad outcomes. It’s about controlling how autonomous software spends your money.

Without explicit commercial controls, an AI agent will optimise on whatever objective it’s been given with no regard for cost efficiency, supplier quality, or data sovereignty. Your agent might route sensitive customer data through the cheapest available service regardless of where that service is hosted or what it does with the data afterwards. It might call an unverified endpoint because it returned results marginally faster than the approved alternative. It might spend thousands on API calls chasing incremental improvements that nobody asked for.

The East India Company’s factors did the same thing. They weren’t malicious. They were optimising locally without strategic oversight. The result was the same: money spent in ways the organisation never intended.

Governance is your agent’s financial controller

Think about how any well run business manages spending. There are procurement policies. Approved supplier lists. Spending authorities at different levels. Audit trails for every purchase above a certain threshold. Nobody would hand a new employee a corporate card with no limits and say “buy whatever you think we need.”

But that’s exactly what most businesses are doing with their AI agents. They’re deploying autonomous software that can call external services, consume API credits, and make purchasing decisions with no commercial controls in place.

The governance framework your AI agent needs looks a lot like what you’d give a trusted but new team member.

Spending authority. Maximum cost per request. Daily and monthly caps. Escalation triggers when spending exceeds thresholds. These aren’t optional guardrails. They’re the equivalent of a corporate card limit, and without them your agent’s operating costs are unbounded.
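A spending authority can be expressed in a few lines of code. This is a minimal sketch, not any particular framework's API; the class name, limits, and three-way approve/escalate/deny outcome are all illustrative assumptions:

```python
from dataclasses import dataclass

@dataclass
class SpendingPolicy:
    max_cost_per_request: float = 5.00   # hard cap on any single call
    daily_cap: float = 25.00             # total the agent may spend per day
    escalation_threshold: float = 1.00   # above this, a human signs off
    _spent_today: float = 0.0

    def authorise(self, estimated_cost: float) -> str:
        """Return 'approve', 'escalate', or 'deny' for a proposed spend."""
        if estimated_cost > self.max_cost_per_request:
            return "deny"
        if self._spent_today + estimated_cost > self.daily_cap:
            return "deny"
        if estimated_cost > self.escalation_threshold:
            return "escalate"
        self._spent_today += estimated_cost
        return "approve"
```

The point is less the specific numbers than the shape: every external call passes through a policy check before any money moves, exactly as a purchase order passes through a spending limit.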

An approved supplier list. Which external services is the agent allowed to buy from? This is vendor management for autonomous software. Just as you wouldn’t let a junior employee sign contracts with random suppliers, you shouldn’t let your agent call unverified APIs with your data and your money.
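An approved supplier list is just an allowlist the agent consults before calling anything. The hostnames, the metadata fields, and the rule that personal data only goes to EU-hosted suppliers are all hypothetical examples, but the mechanism is the familiar one:

```python
# Illustrative allowlist: only vetted endpoints may be called.
APPROVED_SUPPLIERS = {
    "enrich.example.com":   {"data_residency": "EU", "max_cost": 0.05},
    "validate.example.com": {"data_residency": "US", "max_cost": 0.01},
}

def supplier_allowed(host: str, handles_personal_data: bool) -> bool:
    """Check a candidate service against the approved supplier list."""
    entry = APPROVED_SUPPLIERS.get(host)
    if entry is None:
        return False  # unverified endpoint: never call it
    if handles_personal_data and entry["data_residency"] != "EU":
        return False  # sensitive data stays inside approved regions
    return True
```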

Cost effectiveness rules. When should the agent compute something itself versus paying for an external service? The answer isn’t always “whichever is cheapest.” Sometimes the internal option is slower but keeps sensitive data inside your perimeter. Sometimes the external service is faster but introduces a dependency you’d rather avoid. These are strategic decisions that need human judgement baked into policy.
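That build-versus-buy judgement can be encoded as a routing rule. This sketch is deliberately crude, and the 500ms latency bar and the sensitivity override are assumptions standing in for whatever your own policy says:

```python
def choose_route(external_cost: float, external_latency_ms: int,
                 data_is_sensitive: bool, internal_cost: float) -> str:
    """Decide whether to compute in-house or pay an external service.
    Cheapest is not automatically the answer."""
    if data_is_sensitive:
        return "internal"  # regulated data never leaves the perimeter
    if external_cost < internal_cost and external_latency_ms < 500:
        return "external"  # cheaper AND fast enough: delegate
    return "internal"
```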

A complete audit trail. Every purchasing decision your agent makes should be logged, traceable, and reviewable. Not just for compliance (though that matters too), but because you need to understand where your money is going. If your agent is spending $3,000 a month on data enrichment services, you need to know whether that’s delivering value or whether it’s optimising for something nobody actually needs.
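An audit trail needs very little machinery: one structured record per purchasing decision, written to an append-only log. The field names below are illustrative, not a standard:

```python
import json
import time

def audit_record(service: str, cost: float, outcome: str, reason: str) -> str:
    """Serialise one purchasing decision as a JSON line for an append-only log."""
    entry = {
        "ts": time.time(),    # when the decision was made
        "service": service,   # which supplier was called (or 'internal')
        "cost": cost,         # what it cost
        "outcome": outcome,   # approve / escalate / deny
        "reason": reason,     # why the agent chose this route
    }
    return json.dumps(entry)
```

Because each line is self-describing JSON, the $3,000-a-month question ("what are we actually buying?") becomes a one-line query rather than an archaeology project.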

Why this matters right now

The shift from agents that follow instructions to agents that make commercial decisions is happening faster than most businesses realise. Every major AI platform is building agent capabilities. Every enterprise software vendor is adding AI agent features. The tools are arriving well before the governance frameworks.

And the reliability of the services your agent will be choosing from is all over the place. Agent-accessible service marketplaces are emerging fast, but the quality varies enormously. Some services deliver exactly what they promise. Others return incomplete results, go offline without notice, or can’t meet basic compliance requirements. Your agent, left to its own logic, will discover this through trial and expensive error.

Businesses that deploy agents without commercial governance are going to learn costly lessons. Agents will call unreliable services. They’ll overspend on tasks that don’t warrant the investment. They’ll route data through providers that can’t meet compliance requirements. And because agents operate at machine speed, these mistakes compound before anyone notices.

Businesses that deploy agents with proper commercial controls will have a completely different experience. Their agents will operate within defined spending parameters. They’ll use evaluated and approved services. They’ll log every decision for review. And critically, they’ll be auditable, which is fast becoming a requirement rather than a nice to have.

The quarterly review your AI agent needs

Here’s a practical framework. When you deploy an AI agent with commercial capabilities, you need a governance structure that covers four things.

Initial configuration. Before the agent goes live, define its spending authority, approved service list, cost effectiveness thresholds, and escalation rules. This is the commercial equivalent of the safety boundaries everyone talks about. It’s equally important and nowhere near as widely discussed.

Ongoing monitoring. Track what your agent is spending, which services it’s calling, how often it’s hitting cost thresholds, and whether the services it relies on are maintaining their quality. This data tells you whether your governance framework is working or whether it needs a tune up.
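If the audit trail exists, the monitoring step is a simple roll-up over it. This sketch assumes records shaped like the hypothetical audit entries above, with `service` and `cost` fields:

```python
from collections import defaultdict

def spend_by_service(records: list[dict]) -> dict[str, float]:
    """Roll up logged purchasing decisions into per-supplier totals,
    the raw material for a spend review."""
    totals: defaultdict = defaultdict(float)
    for record in records:
        totals[record["service"]] += record["cost"]
    return dict(totals)
```

A report like this, run weekly, is usually enough to spot the expensive pattern before it compounds: one supplier quietly absorbing most of the budget, or costs trending up with no matching change in output.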

Periodic review. At least quarterly, audit the agent’s commercial behaviour. Are there new services that should be added to the approved list? Are existing services degrading? Is the agent spending efficiently, or has it found an expensive pattern that technically meets its objectives but wastes money? This is the same discipline as a quarterly financial review, applied to autonomous software.

Policy evolution. As the agent services landscape matures and your own data on agent behaviour grows, your governance framework should evolve with it. Tighter controls where you’ve seen problems. Looser controls where the agent has proven it makes good decisions. New categories as new types of services emerge.

The uncomfortable truth

Most businesses are not ready for this. They’re still thinking about AI as a tool that humans operate, not as autonomous software that makes decisions on its own. Including commercial ones.

The gap between “we use AI” and “we govern AI agents that spend money” is enormous. It’s a gap in policy, in technical architecture, in organisational capability, and in leadership understanding.

But it’s also an opportunity. The businesses that build commercial governance into their AI agent deployments from day one will operate with confidence while their competitors are still trying to figure out why their AI costs are spiralling.

The East India Company eventually worked this out. They built the governance structures that turned autonomous agents from a liability into a competitive advantage. The agents didn’t stop trading. They traded better, within rules that protected the organisation.

Your AI agent will have a budget. The only question is whether you set the rules, or whether you find out what they were on your next invoice.