AI Transformation: Lessons from Two Years In


It’s been about two years since generative AI went mainstream and enterprises began serious AI transformation efforts. That’s enough time for some perspective on what we’ve learned.

These reflections come from watching dozens of organisations navigate this period, combined with conversations across the Australian enterprise community.

What We Got Right

AI Is Genuinely Useful

The sceptics who predicted AI would be a passing fad were wrong. AI, particularly large language models, delivers real value for real tasks:

  • Document processing and analysis
  • Code generation and review
  • Customer service augmentation
  • Content drafting and editing
  • Data analysis and interpretation

These aren’t theoretical benefits. They’re happening in production, at scale, with measurable impact.

Platform Approaches Won

Building custom AI from scratch turned out to be mostly unnecessary. Platform capabilities (Azure AI, Bedrock, Vertex) matured faster than expected.

The organisations that bet on platforms over custom development generally got to value faster with lower cost and complexity.

Governance Matters

Early AI deployments without governance created problems. Data leakage, inappropriate outputs, compliance issues – the risks were real.

Organisations that invested in governance early avoided these problems and built sustainable foundations for scaled AI adoption.

Skills Are the Constraint

The limiting factor wasn’t technology – it was people. Organisations with AI-capable teams moved faster. Those without struggled regardless of their technology investments.

The skills gap is real and remains the primary constraint on enterprise AI value.

What We Got Wrong

Transformation Timelines Were Optimistic

Most AI transformation roadmaps I saw in 2023 predicted dramatic changes by 2025. Those predictions haven’t materialised.

AI adoption is following normal enterprise technology adoption curves, not revolutionary transformation curves. Change takes longer than hoped.

Productivity Gains Were Overstated

The “30-40% productivity improvement” claims haven’t been validated at scale. Real productivity gains are meaningful but more modest – typically 10-20% for well-implemented use cases.

The gap between demonstrated and claimed productivity impact created credibility problems for AI initiatives.

Autonomous Agents Aren’t Ready

The vision of AI agents independently handling complex work remains mostly unrealised. Current agents can automate simple, well-defined tasks. Complex, judgment-requiring workflows still need human involvement.

This isn’t a failure – it’s a recalibration of expectations to match current capability.

ROI Measurement Remains Difficult

Demonstrating AI value is still hard. Most organisations can point to qualitative improvements but struggle with rigorous ROI quantification.

This measurement difficulty makes budget justification challenging, particularly as financial pressure increases.

Surprises

Adoption Was Faster Than Expected

Despite change management challenges, basic AI tool adoption happened faster than historical patterns for enterprise software.

People wanted to use AI tools. The adoption challenge was capability development and governance, not willingness.

Vendor Consolidation Happened Quickly

The AI startup landscape consolidated faster than anticipated. Platform providers absorbed or marginalised many standalone AI tools.

The “many small vendors” future some predicted hasn’t materialised. Platform dominance is increasing.

Resistance Came from Unexpected Places

Resistance from workers worried about job loss was weaker than anticipated. The unexpected resistance came from middle management, uncertain about their role in AI-enabled organisations.

The change management challenge was different from what most organisations prepared for.

What We’re Still Figuring Out

How to Measure Value

ROI measurement for AI remains unsolved. Productivity gains, quality improvements, and enablement benefits are real but difficult to quantify rigorously.

Better measurement frameworks are needed.
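Even a simple framework helps anchor the conversation. The sketch below is a hypothetical ROI calculation, not a framework from any specific organisation: all figures (adoption rate, hours saved, loaded cost, AI spend) are illustrative assumptions that a team would replace with its own measured data.

```python
# Illustrative AI ROI sketch. Every number here is a hypothetical
# assumption for demonstration, not a benchmark.

def annual_roi(users: int,
               hours_saved_per_user_per_week: float,
               loaded_hourly_cost: float,
               annual_ai_spend: float,
               adoption_rate: float = 0.6,
               weeks_per_year: int = 46) -> float:
    """Return ROI as a ratio: (annual benefit - cost) / cost.

    Benefit is modelled purely as labour time saved; quality and
    enablement benefits (which the article notes are real but hard
    to quantify) are deliberately left out.
    """
    annual_benefit = (users
                      * adoption_rate
                      * hours_saved_per_user_per_week
                      * weeks_per_year
                      * loaded_hourly_cost)
    return (annual_benefit - annual_ai_spend) / annual_ai_spend

# Example: 500 licensed users, 2 hours saved per active user per week,
# $80/hour loaded labour cost, $1.5M annual AI platform spend.
roi = annual_roi(500, 2.0, 80.0, 1_500_000)
print(f"Estimated ROI: {roi:.1%}")  # prints "Estimated ROI: 47.2%"
```

The point of writing it down is that every contested assumption (how much time is actually saved, how many people actually use the tools) becomes an explicit parameter that can be measured and challenged, rather than buried in a headline percentage.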

How to Govern at Scale

Governance approaches that work for a few AI projects strain under hundreds of AI applications. Scaled governance models are still evolving.

Where Humans Fit

The human-AI collaboration model is still being defined. Which tasks belong to AI, and which should stay with humans? How do we organise work around AI capability?

Most organisations are still experimenting rather than settling on answers.

What’s Coming Next

AI capability continues improving. What’s possible in 2027 may be quite different from today. Planning for that uncertainty is challenging.

Advice Based on Experience

For Organisations Early in AI Adoption

  1. Start with proven use cases. Document processing, code assistance, customer service – begin where others have demonstrated success.

  2. Invest in platforms, not custom builds. Platform capabilities will improve faster than you can build.

  3. Establish governance early. It’s easier to build in from the start than retrofit later.

  4. Budget realistically. Plan for 12-18 month journeys to meaningful value, not 3-6 month transformations.

  5. Focus on skills. Your limiting factor will be people, not technology.

For Organisations Further Along

  1. Consolidate and optimise. Rather than more pilots, get value from what you have.

  2. Build measurement capability. Without rigorous ROI evidence, future investment is at risk.

  3. Develop internal expertise. Reduce external dependency while capability is available.

  4. Plan for governance scale. Current approaches may not work as AI proliferates.

  5. Maintain perspective. AI is useful technology, not magic transformation. Calibrate expectations.

Looking Forward

The next two years will likely involve:

  • Continued capability improvement in AI models
  • Platform consolidation and maturation
  • Increasing regulatory requirements
  • Better measurement and governance frameworks
  • More nuanced understanding of human-AI collaboration

The enterprises that will thrive are those that treat AI as serious technology adoption requiring sustained investment, not those seeking quick transformations or waiting for certainty.

Final Thought

Two years into enterprise AI transformation, the picture is neither the revolution that hype promised nor the failure that sceptics predicted.

AI is useful, valuable, and increasingly important. It’s also harder to implement well than anticipated, slower to deliver value than hoped, and more complex to govern than expected.

That’s a realistic assessment. Building on realistic foundations is how lasting capability gets built.

The work continues.