AI Governance Boards: Making Them Actually Work


Every organisation with serious AI ambitions now has an AI governance board, council, or committee. Most of them are dysfunctional. They either block everything, approve everything, or meet endlessly without deciding anything.

Here’s how to make AI governance actually work.

What AI Governance Should Do

An effective AI governance board has three core functions:

Risk assessment. Evaluating AI initiatives for potential harms – to customers, employees, the organisation, and society. This includes bias, privacy, security, and reputational risks.

Policy development. Establishing standards for responsible AI use. What’s acceptable? What requires review? What’s prohibited?

Decision-making. Approving, modifying, or rejecting AI initiatives based on risk assessment and policy alignment.

Governance should also enable good AI work, not just prevent bad AI work. This is where most boards fail.

The Composition Question

Who should sit on an AI governance board?

Essential members:

  • Senior business leadership (accountable for outcomes)
  • Technology leadership (understands capabilities and limitations)
  • Risk/compliance (regulatory and policy perspective)
  • Legal (liability and regulatory interpretation)
  • Data/privacy officer (data protection expertise)

Important additions based on context:

  • Ethics expertise (academic or professional ethicist)
  • Customer/user representation (often neglected)
  • HR (employee impact considerations)
  • Industry-specific regulatory expertise

Avoid:

  • Boards without technical members (make uninformed decisions)
  • Boards without business members (disconnect from reality)
  • Boards that are too large (slow, diffuse accountability)
  • Boards dominated by single perspectives (legal-heavy boards tend to block everything)

Ideal size: 7-9 members. Enough perspectives, not so many that consensus becomes impossible.

The Process Design

Governance process determines governance outcomes. Design matters.

Tiered Review

Not every AI use needs board review. Implement tiers:

Tier 1 (Self-certified): Low-risk AI use within established guidelines. Examples: using Copilot for email drafting, standard analytics tools. User certifies compliance with policy. No board review.

Tier 2 (Expedited review): Moderate-risk AI with established patterns. Examples: customer-facing chatbots within defined domains, internal automation. Streamlined review process, delegated approval possible.

Tier 3 (Full review): High-risk or novel AI applications. Examples: AI affecting significant decisions, new data types, novel use cases. Full board review required.

Clear criteria for tier assignment prevent arguments about what needs review.
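
Writing the criteria down as decision logic is one way to keep tier assignment mechanical rather than debatable. A minimal Python sketch, assuming hypothetical intake questions about customer impact, new data types, novelty, and coverage by published guidelines:

  from dataclasses import dataclass

  @dataclass
  class Initiative:
      """Hypothetical intake answers used to assign a review tier."""
      affects_significant_decisions: bool   # e.g. credit, hiring, pricing for customers
      uses_new_data_types: bool             # data not covered by existing approvals
      novel_use_case: bool                  # no established pattern to follow
      within_published_guidelines: bool     # fully covered by self-certification policy

  def assign_tier(initiative: Initiative) -> int:
      """Map intake answers to a tier: 1 self-certified, 2 expedited, 3 full review."""
      if (initiative.affects_significant_decisions
              or initiative.uses_new_data_types
              or initiative.novel_use_case):
          return 3  # high-risk or novel: full board review
      if not initiative.within_published_guidelines:
          return 2  # moderate risk, established pattern: expedited review
      return 1      # low risk, covered by policy: self-certified

  # Example: internal automation on an established pattern, but outside the
  # self-certification guidelines -> expedited (Tier 2) review.
  print(assign_tier(Initiative(False, False, False, False)))

The specific questions matter less than the property that the same answers always produce the same tier, so nobody argues about which queue a project belongs in.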

Review Timelines

Governance that takes months to approve anything is governance that gets bypassed. Set service levels:

  • Tier 2 reviews: 2 weeks maximum
  • Tier 3 reviews: 4-6 weeks maximum

If governance can’t meet these timelines, increase capacity or delegation.

Decision Rights

Clarify who can decide what:

  • Full board decides: New policy, Tier 3 initiatives, appeals
  • Chair or delegate decides: Tier 2 initiatives, policy clarification
  • Individual members decide: Guidance questions, tier assignment

Unclear decision rights create bottlenecks.
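
A simple way to keep these rights visible is to encode them somewhere that intake tooling and meeting agendas can both read. The category names below are illustrative, not a prescribed taxonomy:

  # Lowest level authorised to decide each type of question (illustrative names).
  DECISION_RIGHTS = {
      "new_policy":           "full_board",
      "tier_3_initiative":    "full_board",
      "appeal":               "full_board",
      "tier_2_initiative":    "chair_or_delegate",
      "policy_clarification": "chair_or_delegate",
      "guidance_question":    "individual_member",
      "tier_assignment":      "individual_member",
  }

  def who_decides(decision_type: str) -> str:
      """Unknown decision types escalate to the full board by default."""
      return DECISION_RIGHTS.get(decision_type, "full_board")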

Common Dysfunction Patterns

Governance boards fail in predictable ways:

The Approval Factory

Symptoms: Everything gets approved. Review is superficial. No initiative has ever been rejected or significantly modified.

Root cause: Board lacks expertise, courage, or incentive to push back. Governance is performative – checking a box rather than managing risk.

Fix: Add members with real authority and expertise. Create accountability for governance failures. Review past approvals for problems that should have been caught.

The Rejection Machine

Symptoms: Nothing gets approved without extensive modification. Projects die in review. Teams stop submitting innovative work.

Root cause: Risk-averse composition. No business outcome accountability. Fear-driven culture.

Fix: Balance composition. Hold governance accountable for enabling good work, not just preventing bad work. Track cycle time and approval rates as governance metrics.

The Debating Society

Symptoms: Meetings discuss endlessly without deciding. Same issues resurface repeatedly. Clear decisions are rare.

Root cause: Unclear decision rights. Too many members. Consensus-seeking culture that can’t handle disagreement.

Fix: Clarify decision authority. Reduce board size. Implement voting if consensus fails. Set decision deadlines.

The Bottleneck

Symptoms: Review backlog grows continuously. Teams wait months for decisions. Board can’t process volume.

Root cause: Under-resourced. Too much requires board attention. Inadequate tiering.

Fix: Expand delegation. Implement stricter tiering. Add resources. Accept that not everything can be governed equally.

Making Governance Enabling

The shift from “governance as obstacle” to “governance as enabler”:

Pre-engagement

Don’t wait for formal review. Offer informal guidance early in project development. Help teams design AI that will pass review rather than reviewing failures.

Clear guidelines

Publish specific, actionable guidelines. “AI must be fair” isn’t actionable. “AI decisions affecting customers must include bias testing using these methods” is actionable.
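
A guideline becomes actionable once it names a check a team can actually run. As one illustration (not a prescribed method), a simple approval-rate disparity check over hypothetical decision records might look like this:

  from collections import defaultdict

  def approval_rate_by_group(decisions: list[dict]) -> dict[str, float]:
      """Approval rate per demographic group.

      Each record is assumed to look like {"group": "A", "approved": True}.
      """
      totals, approvals = defaultdict(int), defaultdict(int)
      for record in decisions:
          totals[record["group"]] += 1
          approvals[record["group"]] += int(record["approved"])
      return {group: approvals[group] / totals[group] for group in totals}

  def demographic_parity_gap(decisions: list[dict]) -> float:
      """Largest gap in approval rates between any two groups."""
      rates = approval_rate_by_group(decisions).values()
      return max(rates) - min(rates)

  # A concrete guideline can then set a threshold, e.g. the gap must stay below 0.05.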

Worked examples

Provide examples of approved AI applications with annotations explaining why they passed. Teams learn faster from examples than from abstract policy.

Fast feedback loops

When governance identifies issues, provide specific, actionable feedback quickly. Vague concerns that take weeks to articulate are unhelpful.

Celebrate successes

When good AI work passes governance efficiently, highlight it. Governance shouldn’t only be associated with rejection and delay.

Metrics That Matter

Measure governance effectiveness:

Cycle time: How long from submission to decision? Trending up is bad.

Approval rate: Too high suggests a rubber stamp; too low suggests obstruction. 60-80% is a reasonable range for Tier 3 reviews.

Post-deployment issues: How often do approved AI systems cause problems? This is the ultimate governance effectiveness metric.

Business perception: Do business leaders see governance as partner or obstacle? Survey regularly.

Coverage: What percentage of AI activity goes through governance? Shadow AI indicates governance failure.
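
None of these metrics are hard to compute if every submission is logged. A minimal sketch, assuming a hypothetical review log with submission date, decision date, tier, and outcome:

  from datetime import date

  # Hypothetical review log: one record per submission to governance.
  reviews = [
      {"submitted": date(2024, 3, 1), "decided": date(2024, 3, 18), "tier": 3, "approved": True},
      {"submitted": date(2024, 3, 5), "decided": date(2024, 4, 2),  "tier": 3, "approved": False},
      {"submitted": date(2024, 3, 9), "decided": date(2024, 3, 16), "tier": 2, "approved": True},
  ]

  def cycle_time_days(reviews: list[dict]) -> float:
      """Average days from submission to decision."""
      return sum((r["decided"] - r["submitted"]).days for r in reviews) / len(reviews)

  def approval_rate(reviews: list[dict], tier: int) -> float:
      """Share of initiatives at a given tier that were approved."""
      tiered = [r for r in reviews if r["tier"] == tier]
      return sum(r["approved"] for r in tiered) / len(tiered)

  def coverage(reviewed: int, total_known_ai_initiatives: int) -> float:
      """Share of known AI activity that went through governance at all."""
      return reviewed / total_known_ai_initiatives

  print(f"Cycle time: {cycle_time_days(reviews):.1f} days; "
        f"Tier 3 approval rate: {approval_rate(reviews, 3):.0%}")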

The Role of External Support

When internal governance capability is limited, external support helps:

Advisory members: External experts who serve on governance boards or review cases.

Audit and review: Periodic external assessment of governance effectiveness.

Specialist consultation: Deep expertise for complex cases (bias assessment, regulatory interpretation).

Working with AI consultants in Melbourne who understand governance can accelerate maturity, particularly for organisations still building governance capability.

Evolution Over Time

Governance should mature:

Year 1: Establish board, basic policies, review processes. Focus on learning.

Year 2: Refine tiering, develop detailed guidelines, build track record.

Year 3+: Governance becomes embedded, efficient, enabling. Focus shifts to continuous improvement.

Don’t expect perfection immediately. Governance is a capability that develops over time.

Final Thought

AI governance boards can be enablers of responsible innovation or bureaucratic obstacles that drive AI underground. The difference is design and culture.

Design governance to enable good work while managing real risks. Staff it with people who understand both technology and business. Measure it on outcomes, not just process compliance.

Good governance makes AI better. Bad governance drives AI into the shadows. The choice is yours.