We Reviewed 50 AI Vendor Proposals. Most Were Terrible
Over the past eighteen months, I’ve helped procurement teams at mid-to-large Australian companies evaluate AI vendor proposals. We’ve now reviewed more than fifty. The honest summary: about 70% of them weren’t worth the PDF they were delivered in.

That’s not hyperbole. It’s a pattern. And it’s costing Australian businesses real money—not just in failed projects, but in the months of delayed decisions while teams wade through vague, template-driven proposals that all sound identical.

The Copy-Paste Problem

The most common issue? Proposals that could have been written for literally anyone. Swap out the company name and the industry keyword, and you’ve got the same pitch sent to a Perth mining company, a Sydney insurer, and a Melbourne retailer.

We saw one proposal—from a firm charging $380,000 for a “discovery phase”—that still had another client’s name in the executive summary. That’s not a typo. That’s a vendor who doesn’t care enough to proofread a document they’re expecting you to sign.

A McKinsey study on AI implementation found that fewer than 25% of enterprise AI projects deliver the expected value. Bad vendor selection is a major contributor. When the proposal itself is lazy, what does that tell you about the actual delivery?

Five Red Flags We Saw Repeatedly

After fifty reviews, certain patterns predict trouble with striking reliability.

1. No mention of your existing systems

If a vendor’s proposal doesn’t reference your current tech stack by name—your ERP, your CRM, your data warehouse—they haven’t done their homework. Integration is where most AI projects get expensive and slow. A proposal that glosses over integration complexity is either naive or deliberately hiding the true cost.

2. ROI projections with no methodology

We saw proposals promising “$4.2 million in annual savings” with zero explanation of how they calculated that number. When we pressed one vendor, they admitted it was based on “industry benchmarks from a US market study.” For an Australian aged care provider. The economics aren’t even comparable.

3. Timelines that ignore change management

A six-week implementation timeline for an AI system that will change how 200 people do their daily work? That’s not ambitious. That’s fiction. The technology might be configured in six weeks. Getting your staff to actually use it properly takes three to six months minimum—and that’s if you invest in training.

4. The team is TBD

“Our delivery team will be confirmed upon engagement commencement.” Translation: they haven’t staffed the project yet, and whoever’s available will get assigned. In consulting, the team is the product. If you can’t meet the people who’ll actually do the work before you sign, walk away.

5. Vague IP and data ownership clauses

This one’s genuinely dangerous. Several proposals we reviewed contained clauses that gave the vendor rights to use your data to train their models for other clients. Buried on page 47 of the terms and conditions. If your legal team isn’t reading every line of the data provisions, you’re exposed.

What Good Proposals Actually Look Like

The top 15% of proposals we reviewed shared some common traits. They were shorter, for one thing. The best proposal in our entire review was 14 pages. The worst was 93.

Good proposals demonstrate understanding of the specific problem before proposing a solution. They name the constraints and trade-offs honestly. They include a phased approach with clear decision points where you can pause, redirect, or stop entirely without losing everything you’ve invested.

They also talk about what won’t work. Any vendor willing to tell you “this approach probably isn’t right for your situation” has earned more credibility than ten glossy case studies.

It's worth having an independent specialist advisory firm review vendor proposals before you shortlist. A few hours of expert scrutiny can save months of pain. We've seen procurement teams catch major issues—unrealistic architecture assumptions, missing security requirements, hidden licensing costs—simply by having a technically literate third party read the fine print.

A Simple Scoring Framework

Here’s a lightweight framework we’ve been using. Score each proposal from 0 to 3 on these six dimensions:

  • Problem specificity: Does the proposal address your actual problem, or a generic version of it?
  • Technical credibility: Are the proposed architectures realistic given your infrastructure?
  • Team transparency: Can you see who’ll do the work, and are their credentials verifiable?
  • Commercial clarity: Is the pricing complete, or will there be “additional costs” later?
  • Risk honesty: Does the vendor acknowledge what could go wrong?
  • Exit provisions: Can you walk away at defined points without losing your investment?

Maximum score is 18. In our review, the average was 7. Only three proposals scored above 14.
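For teams that want to track scores across a round of proposals, the framework above can be expressed as a small script. This is a minimal sketch: the six dimension names come from the checklist above, but the validation logic and the example scores are illustrative, not real vendor data.

```python
# Minimal sketch of the six-dimension proposal scoring framework.
# Dimension names come from the checklist above; the example scores
# below are illustrative, not taken from a real vendor.

DIMENSIONS = [
    "problem_specificity",
    "technical_credibility",
    "team_transparency",
    "commercial_clarity",
    "risk_honesty",
    "exit_provisions",
]

MAX_SCORE = len(DIMENSIONS) * 3  # each dimension is scored 0-3, so max is 18


def score_proposal(scores: dict) -> int:
    """Sum the six dimension scores, checking range and coverage."""
    missing = set(DIMENSIONS) - scores.keys()
    if missing:
        raise ValueError(f"unscored dimensions: {sorted(missing)}")
    for dim in DIMENSIONS:
        if not 0 <= scores[dim] <= 3:
            raise ValueError(f"{dim} must be 0-3, got {scores[dim]}")
    return sum(scores[d] for d in DIMENSIONS)


# Illustrative example: a middling proposal.
example = {
    "problem_specificity": 2,
    "technical_credibility": 1,
    "team_transparency": 0,
    "commercial_clarity": 2,
    "risk_honesty": 1,
    "exit_provisions": 1,
}
print(f"{score_proposal(example)}/{MAX_SCORE}")  # prints "7/18"
```

A spreadsheet does the same job, of course; the point of scoring at all is that it forces each reviewer to commit to a number per dimension, which surfaces disagreements before the shortlist meeting rather than after.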

The Bigger Picture

Gartner estimates that by 2027, enterprises will spend more on AI consulting than on the AI technology itself. That’s an extraordinary amount of money flowing into an industry where proposal quality is, frankly, poor.

Australian CIOs are under real pressure to show AI progress. The temptation to pick a vendor quickly is understandable. But rushing vendor selection is how you end up twelve months later with a $600,000 invoice and a proof of concept that never made it to production.

Demand specificity. Insist on meeting the actual delivery team. Read the data clauses. And don’t be afraid to reject every proposal in a round and start again. The right vendor is worth waiting for.