Why Enterprise AI Budgets Keep Blowing Up (And It's Rarely the Technology)


Last month, I sat in a boardroom watching a CFO’s face turn an alarming shade of red. Their AI project—originally scoped at $2.3 million—had just hit $4.8 million with no end in sight. The CIO was scrambling to explain. The vendor was defensive. And I was thinking: here we go again.

I’ve reviewed 47 enterprise AI implementations over the past three years, mostly for ASX-listed companies and large government departments. Want to know how many came in on budget? Six. That’s a 13% success rate. And here’s the thing that keeps me up at night: in almost every case, the technology worked exactly as promised.

The real culprits

The budget blowouts I see aren’t caused by GPUs catching fire or algorithms failing. They’re caused by humans doing what humans do: underestimating complexity, avoiding difficult conversations, and assuming everyone’s on the same page when they’re not even reading the same book.

Change management gets treated like an afterthought. I can’t count how many times I’ve seen a $5 million AI project with a $50,000 change management line item. That’s like buying a Ferrari and skimping on the tyres. A major retail client recently spent $3.2 million on a demand forecasting system that their category managers refused to trust. Why? Because nobody asked them what they needed, showed them how it worked, or addressed their very reasonable fear that AI was coming for their jobs. The system sits there, technically perfect, commercially useless.

Scope creep is baked into the process. Australian enterprises love a good business case, but I’ve yet to see one that accurately captures what happens when you start pulling on AI threads. You think you’re building a chatbot for customer service. Then marketing wants in. Then sales. Then HR realises they could use the same platform. Suddenly your $800K proof of concept is a $3.5M enterprise-wide deployment with 14 stakeholders who all have “non-negotiable” requirements. McKinsey’s research shows that scaling pilot projects to production is where most AI initiatives stumble—not because the tech doesn’t scale, but because the organisations don’t.

Data quality is someone else’s problem. Until it isn’t. I watched a financial services firm spend six months building a beautiful AI model before discovering their customer data had 23% duplication and inconsistent formatting across seven legacy systems. They’d budgeted two weeks for data prep. They needed six months. The teams that actually succeed are the ones who start with data audits, not algorithms.
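To make that concrete, here is a minimal sketch of the kind of audit that would have surfaced a duplication figure in week one rather than month six. The field names and normalisation rules are illustrative assumptions, not that firm’s actual schema; a real audit would cover every key field across every source system.

```python
# A minimal data-audit sketch: measure the duplicate rate in a customer
# list before any model work begins. Field names and normalisation rules
# here are illustrative assumptions only.

def normalise(record):
    """Canonicalise fields that legacy systems format inconsistently."""
    name = " ".join(record["name"].lower().split())   # collapse case and spacing
    email = record["email"].strip().lower()           # trim and lowercase
    return (name, email)

def duplicate_rate(records):
    """Fraction of records that are duplicates after normalisation."""
    seen = set()
    dupes = 0
    for r in records:
        key = normalise(r)
        if key in seen:
            dupes += 1
        else:
            seen.add(key)
    return dupes / len(records) if records else 0.0

# Hypothetical sample: the same customer entered by two legacy systems.
customers = [
    {"name": "Jane Citizen",   "email": "jane@example.com"},
    {"name": "jane  citizen",  "email": "JANE@EXAMPLE.COM "},
    {"name": "John Smith",     "email": "john@example.com"},
    {"name": "Mei Wong",       "email": "mei@example.com"},
]

print(f"duplicate rate: {duplicate_rate(customers):.0%}")  # prints "duplicate rate: 25%"
```

Running something this simple against each source system before scoping is what turns "two weeks for data prep" into an evidence-based estimate instead of a guess.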

The uncomfortable truth about estimates

Here’s what I tell clients, even though they don’t want to hear it: your vendor’s estimate is probably wrong, but not because they’re dishonest. It’s wrong because you don’t know what you need yet. You think you do. You’ve written a 40-page requirements document. But until your people actually start using AI in their workflows, you won’t know what the real requirements are.

The best implementations I’ve seen treat the initial budget as phase one, with explicit assumptions about what’s included and what’s not. They build in discovery sprints. They accept that requirements will evolve. They create governance structures that can make fast decisions when (not if) things change.

A manufacturing client did this brilliantly last year. They budgeted $1.8M for an AI-powered quality control system, but structured it in three phases with clear decision points. Phase one came in $200K over budget because they discovered integration requirements nobody anticipated. But because they’d built in flexibility, they could adjust phase two and still deliver the core value. Final cost? $2.1M—17% over, but delivered on time with strong user adoption.

What actually works

The projects that succeed financially do three things differently:

They start with the people problem, not the technology solution. What are your staff spending time on that they shouldn’t be? What decisions are bottlenecked? What information takes too long to find? If you can’t articulate the people problem in one sentence, you’re not ready for AI.

They include operational staff in scoping. The best requirements don’t come from executives or consultants—they come from the people doing the work. Yes, it’s slower. Yes, it’s messier. But it’s cheaper than rebuilding everything six months in.

They budget for the messy middle. That’s the phase between “it works in the demo” and “people actually use it daily.” It includes training, troubleshooting, refinement, and handling the inevitable surprises. Smart organisations budget 30-40% of the total project cost for this phase.

The path forward

Australian enterprises aren’t bad at AI. They’re bad at organisational change that happens to involve AI. The technology is remarkably reliable now—it’s our institutions and processes that need upgrading.

Next time you’re reviewing an AI business case, don’t ask “will the technology work?” Ask “have we talked to the people who’ll use this daily?” Ask “what happens when requirements change?” Ask “who’s responsible for adoption, not just delivery?”

Your budget will thank you. And the CFO’s blood pressure will remain at healthy levels.