AI Governance Update: What Changed in Early 2025
The first weeks of 2025 brought a flurry of AI governance developments. If you’ve been heads-down on delivery and haven’t tracked the regulatory landscape, here’s what you’ve missed and why it matters.
The EU AI Act Timeline Accelerates
The EU AI Act entered into force in August 2024, with compliance deadlines approaching faster than many anticipated. Key dates:
- February 2025: Prohibited AI practices become enforceable
- August 2025: Rules for general-purpose AI models apply
- August 2026: Requirements for most high-risk AI systems apply (high-risk AI embedded in regulated products has until August 2027)
For Australian enterprises selling into Europe or processing the data of people in the EU, these aren't distant concerns. They're operational requirements, with penalties for the most serious breaches reaching €35 million or 7% of global annual turnover.
Australia’s Voluntary Framework Gets Teeth
The Australian government has moved from voluntary AI principles to something with more substance. The Department of Industry, Science and Resources announced in January that government procurement will require adherence to AI safety standards from mid-2025.
This matters because government procurement requirements often become private sector expectations. If you're selling to government, or expect to, your AI governance needs to meet these emerging standards.
More significantly, the mandatory guardrails consultation is progressing. While the specifics aren't finalised, the direction is clear: Australia is moving toward enforceable AI rules, not just guidelines.
Implications of the US Executive Order
President Biden’s executive order on AI safety is producing practical requirements. NIST’s AI Risk Management Framework is becoming the de facto standard, and major US enterprises are requiring supplier compliance.
Australian companies in global supply chains are being asked to demonstrate AI governance alignment with these frameworks. It’s no longer optional for companies with US enterprise customers.
What This Means Practically
Three practical implications:
Documentation requirements are increasing. You need to be able to explain what AI you’re using, how it works, and what safeguards are in place. “We use ChatGPT for some things” isn’t sufficient anymore.
Risk assessment is becoming mandatory. Understanding which of your AI applications are high-risk (affecting employment, credit, or safety) and treating them accordingly is no longer best practice; it's the baseline expectation.
Third-party AI needs governance too. Using Microsoft Copilot or Salesforce Einstein doesn’t exempt you from governance requirements. You’re responsible for AI in your environment regardless of who built it.
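To make the documentation point concrete, here's a minimal sketch (in Python) of what a structured per-system record might capture. Every field name and tier here is an illustrative assumption, not a term from any regulation; the point is that named fields force you to answer the questions regulators and customers are starting to ask.

```python
from dataclasses import dataclass, field
from enum import Enum


class RiskTier(Enum):
    """Illustrative tiers, loosely inspired by the EU AI Act's categories."""
    PROHIBITED = "prohibited"
    HIGH = "high"        # consequential decisions: employment, credit, safety
    LIMITED = "limited"  # personal data or customer exposure
    MINIMAL = "minimal"


@dataclass
class AISystemRecord:
    """One entry in an organisation-wide AI inventory (fields are illustrative)."""
    name: str                              # e.g. "Resume screening assistant"
    vendor: str                            # "internal" or the third-party supplier
    purpose: str                           # what it's used for, in plain language
    is_third_party: bool                   # embedded AI (Copilot, Einstein) still counts
    handles_personal_data: bool
    affects_consequential_decisions: bool  # employment, credit, safety outcomes
    customer_facing: bool
    safeguards: list[str] = field(default_factory=list)  # human review, logging, opt-outs
    risk_tier: RiskTier = RiskTier.MINIMAL  # revisited during risk-tiering
    last_reviewed: str = ""                 # ISO date of the last risk assessment
```

Whether this lives in a spreadsheet, a GRC tool, or code matters less than the discipline of keeping it current.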
The Compliance Gap
Here’s what I’m seeing: most Australian enterprises have AI governance that was designed for the 2023 landscape. That’s not adequate for 2025 requirements.
Specific gaps I’m encountering:
- No inventory of AI systems in use across the organisation
- Risk assessments that haven’t been updated since initial deployment
- No process for evaluating third-party embedded AI
- Documentation that exists but isn’t maintained
- Governance that covers internal projects but not purchased tools
If any of these sound familiar, you’ve got work to do.
Getting Current
A practical approach to updating your AI governance:
Weeks 1-2: Inventory all AI in use. Include obvious tools (ChatGPT, Copilot) and embedded AI in existing software. This is harder than it sounds: AI is increasingly invisible.
Weeks 3-4: Risk-tier your inventory. Which systems affect consequential decisions? Which handle personal data? Which are customer-facing? These need more attention (a rough tiering sketch follows this plan).
Weeks 5-6: Gap assessment against emerging requirements. Compare your current governance to the EU AI Act, Australia's emerging standards, and the NIST AI Risk Management Framework. Where are you falling short?
Weeks 7-8: Remediation planning. Prioritise gaps by risk and effort. Some will be quick fixes; others will require significant work.
Ongoing: Build monitoring and update processes. Governance isn't a project; it's an operating capability.
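As a rough illustration of the weeks 3-4 tiering step, the sketch below sorts inventory records (reusing the AISystemRecord from the earlier sketch) on the three screening questions above. The thresholds are assumptions for illustration; your actual criteria should come from the frameworks you're assessing against.

```python
# Uses AISystemRecord and RiskTier from the earlier inventory sketch.

def assign_tier(record: AISystemRecord) -> RiskTier:
    """Illustrative tiering based on the three screening questions."""
    # Consequential decisions (employment, credit, safety) get the most scrutiny
    if record.affects_consequential_decisions:
        return RiskTier.HIGH
    # Personal data or customer exposure warrants more than minimal oversight
    if record.handles_personal_data or record.customer_facing:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL


def overdue_reviews(inventory: list[AISystemRecord], cutoff: str) -> list[AISystemRecord]:
    """Flag records whose last assessment predates the cutoff (ISO dates sort lexically)."""
    return [r for r in inventory if r.last_reviewed < cutoff]


# Example: a third-party tool that touches personal data but makes no
# consequential decisions lands in the limited tier.
copilot = AISystemRecord(
    name="Microsoft Copilot",
    vendor="Microsoft",
    purpose="Drafting documents and email",
    is_third_party=True,
    handles_personal_data=True,
    affects_consequential_decisions=False,
    customer_facing=False,
)
copilot.risk_tier = assign_tier(copilot)  # -> RiskTier.LIMITED
```

The design choice worth noting: tiering is a function of the inventory, not a one-off judgment, so re-running it after any change to a record keeps the tiers honest.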
The Cost of Waiting
Some enterprises are treating AI governance as a 2026 problem. That’s risky for several reasons:
- Retroactive compliance is harder and more expensive than building governance into deployments
- Customer and partner requirements are advancing faster than regulation
- Incidents that would have been embarrassing become regulatory violations
- The governance talent market is tightening; waiting means competing harder for resources
The organisations that get ahead of this will have a competitive advantage. The ones that wait will be playing catch-up under pressure.
Final Thought
AI governance used to be something you did because it was responsible. Now it's something you do because it's required. That's a significant shift, and the practical implications are still working their way through most organisations.
The good news: if you’ve been doing sensible governance, you’re probably closer to compliance than you think. The adjustments are mostly about documentation, process formalisation, and extending coverage to embedded AI.
The concerning news: if you’ve been treating governance as optional, the gap you need to close is substantial and widening. The time to act is now.