The EU AI Act: What Australian Businesses Need to Know


The EU AI Act entered into force in August 2024, with provisions rolling out through 2025 and 2026. If you’re an Australian business thinking this doesn’t apply to you, think again.

Here’s what you need to know.

Why Australian Companies Should Care

The Act applies to:

  • AI systems placed on the EU market
  • AI systems used in the EU
  • AI systems whose outputs are used in the EU

That means if you:

  • Have customers in the EU
  • Use AI to process data about EU residents
  • Provide AI-powered services accessible from the EU
  • Operate subsidiaries in the EU

you’re likely in scope.

If this extraterritorial reach sounds familiar, it’s because the GDPR works the same way. This is the “Brussels Effect” – the EU’s regulatory approach effectively becomes global because it’s easier to comply universally than to maintain different versions for different markets.

The Risk-Based Framework

The Act categorises AI systems by risk level:

Unacceptable Risk (Banned)

These AI applications are prohibited:

  • Social scoring by governments
  • Real-time remote biometric identification in public spaces (with limited exceptions)
  • Manipulation that exploits vulnerabilities
  • Emotion recognition in workplaces and schools

Most Australian businesses won’t encounter these, but if you’re using AI for workplace monitoring or customer manipulation, take notice.

High Risk (Heavy Regulation)

AI used in these areas faces strict requirements:

  • Critical infrastructure (energy, transport, water)
  • Education and vocational training
  • Employment, worker management, access to self-employment
  • Access to essential services (credit, insurance, public benefits)
  • Law enforcement
  • Migration and border control
  • Justice system

If your AI influences hiring decisions, credit scoring, or access to services, you’re likely in high-risk territory.

Limited Risk (Transparency Requirements)

AI systems that interact with people must disclose they’re AI:

  • Chatbots must identify themselves as AI
  • AI-generated content must be labelled
  • Emotion recognition must be disclosed

Minimal Risk (No Specific Requirements)

Most AI applications – spam filters, recommendation systems, etc. – face no specific regulation under the Act.

High-Risk Requirements in Detail

If your AI is classified as high-risk, you must:

Risk management: Implement a risk management system covering the entire lifecycle.

Data governance: Ensure training data is relevant, representative, and, so far as possible, free of errors.

Technical documentation: Maintain detailed documentation of how the system works.

Record keeping: Log operations for traceability.

Transparency: Provide information to users about the system’s capabilities and limitations.

Human oversight: Ensure appropriate human oversight of AI decisions.

Accuracy and robustness: Meet appropriate accuracy levels and be resilient to errors.

Cybersecurity: Implement appropriate security measures.

This isn’t a checkbox exercise. It requires genuine capability for documentation, testing, and governance.
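To give a flavour of what the record-keeping requirement can mean in practice, here is a minimal, illustrative sketch of structured decision logging. The field names and the example system are our own assumptions, not a schema prescribed by the Act:

```python
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("ai_audit")

def log_ai_decision(system_id, input_summary, output, human_reviewer=None):
    """Build and log one traceable record of an AI-influenced decision."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system_id": system_id,            # which AI system produced the output
        "input_summary": input_summary,    # what was assessed (avoid raw personal data here)
        "output": output,                  # the decision or score produced
        "human_reviewer": human_reviewer,  # who exercised oversight, if anyone
    }
    logger.info(json.dumps(record))
    return record

# Hypothetical example: a CV-screening tool shortlisting an application
log_ai_decision("cv-screener-v2", "application ref 1042", "shortlisted", "hiring manager")
```

Real implementations need tamper-evident storage and retention policies; the point is that traceability starts with capturing who, what, and when for every AI-influenced decision.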

Timeline

Key dates:

February 2025: Prohibitions on banned AI practices take effect.

August 2025: Governance rules, obligations for general-purpose AI models, and penalties take effect.

August 2026: Most remaining requirements apply, including the rules for high-risk systems (high-risk AI embedded in regulated products has until August 2027).

The clock is ticking. If you have high-risk AI applications, 18 months isn’t long to achieve compliance.

Penalties

Non-compliance penalties scale with company size:

  • Banned AI practices: Up to €35 million or 7% of global annual turnover, whichever is higher
  • High-risk non-compliance: Up to €15 million or 3% of global annual turnover, whichever is higher
  • Incorrect information to regulators: Up to €7.5 million or 1% of global annual turnover, whichever is higher

For SMEs and startups, fines are capped at whichever of the two amounts is lower. For large enterprises, these are substantial numbers.

Practical Steps for Australian Businesses

If you might be in scope:

1. Inventory Your AI

Document all AI systems in use:

  • What AI do you use?
  • What does it do?
  • What data does it process?
  • Does it affect EU residents or markets?

Many organisations discover AI in unexpected places during this exercise.
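An inventory can start as something as simple as one structured record per system, mirroring the questions above. A hypothetical sketch – the field names and example entry are ours, not mandated by the Act:

```python
from dataclasses import dataclass

@dataclass
class AISystemRecord:
    name: str                  # what AI do you use?
    purpose: str               # what does it do?
    data_processed: list       # what data does it process?
    eu_exposure: bool          # does it affect EU residents or markets?
    vendor: str = ""           # third-party supplier, if any

# Hypothetical inventory entry
inventory = [
    AISystemRecord(
        name="resume-screening-tool",
        purpose="Ranks job applications for recruiters",
        data_processed=["CVs", "cover letters"],
        eu_exposure=True,
        vendor="ExampleVendor",
    ),
]
```

A spreadsheet does the same job; what matters is that every system, its purpose, its data, and its EU exposure are written down somewhere authoritative.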

2. Classify by Risk

For each AI system, determine:

  • Is it in a prohibited category?
  • Is it high-risk?
  • Does it require transparency disclosures?

Get legal input here. Classification isn’t always obvious.

3. Gap Assessment

For high-risk systems, assess current state against requirements:

  • Do you have adequate documentation?
  • Is training data appropriate and documented?
  • Do you have risk management processes?
  • Is human oversight adequate?

Most organisations will find gaps.

4. Remediation Planning

For each gap, develop a remediation plan:

  • What needs to change?
  • Who’s responsible?
  • What’s the timeline?
  • What’s the cost?

Build this into your 2025 planning and budget.

5. Vendor Assessment

If you use third-party AI:

  • Are your vendors aware of EU AI Act requirements?
  • Will they provide compliance documentation?
  • What’s their timeline for compliance?
  • Do contracts address compliance responsibilities?

You can’t outsource compliance responsibility entirely.

The Opportunity

Compliance isn’t just cost. The EU AI Act requirements – documentation, testing, governance – are things well-run AI programs should do anyway.

Organisations that build these capabilities gain:

  • Better understanding of their AI systems
  • Reduced operational risk
  • Improved AI quality through better governance
  • Competitive advantage in regulated markets
  • Preparation for likely similar regulations elsewhere

Australia doesn’t have equivalent AI regulation yet, but it’s coming. The Department of Industry, Science and Resources has been consulting on safe and responsible AI frameworks. Getting ahead of EU requirements positions you for whatever follows.

What to Do Now

Immediate actions:

  1. Assign ownership: Someone needs to be responsible for AI Act compliance.

  2. Begin inventory: Start documenting AI use across the organisation.

  3. Assess EU exposure: Understand your EU market and data connections.

  4. Engage legal: Get qualified legal advice on your specific situation.

  5. Communicate: Ensure leadership understands the requirements and implications.

The Act is complex, and this overview doesn’t cover everything. Professional advice is essential for compliance.

Final Thought

The EU AI Act represents the first comprehensive AI regulation from a major jurisdiction. Whether you think it’s good policy or bureaucratic overreach, it’s reality.

For Australian businesses with any EU touchpoint, compliance isn’t optional. The 2025-2026 timeline means work needs to start now.

Treat this as an opportunity to build AI governance capabilities that would be valuable regardless of regulation. The organisations that do this well will be better positioned for the AI-regulated future that’s coming.