The Embedded AI Governance Challenge

When organisations think about AI governance, they typically focus on AI projects – the custom models, the ChatGPT deployments, the proof-of-concept builds. But increasingly, AI is showing up in software you didn’t build and may not even realise contains AI.

Microsoft Copilot. Salesforce Einstein. ServiceNow AI. Adobe Firefly. Zoom AI Companion. The list grows weekly.

This embedded AI creates governance challenges that most organisations haven’t addressed.

Why Embedded AI Is Different

Custom AI projects have clear ownership. Someone decided to build or deploy them. Someone is accountable for outcomes.

Embedded AI often arrives as a feature toggle in existing software. Nobody made a deliberate AI decision – it was just in the update. Responsibility is diffuse or absent.

This creates problems:

No risk assessment. Custom AI projects typically go through some evaluation. Embedded AI features may activate without review.

Unknown data flows. Where does data go when AI features are enabled? What’s retained? Who has access? Answers aren’t always clear.

Inconsistent governance. One team’s Salesforce instance has Einstein enabled; another doesn’t. No consistent policy exists.

Vendor dependency for controls. Your ability to govern embedded AI depends on what controls the vendor provides. Those controls vary widely.

The Visibility Problem

Before you can govern embedded AI, you need to know it exists.

Steps to gain visibility:

  1. Audit major software vendors. For every enterprise application, identify whether AI features exist and their current status (enabled, disabled, or available but not yet activated).

  2. Review feature announcements. Major vendors announce AI features in release notes. Establish a process to review these and assess implications.

  3. Check configurations. AI features may be on by default. Verify actual state, not assumed state.

  4. Survey users. People may be using AI features you’re unaware of. Ask department heads what AI capabilities their teams use.

This audit is tedious but necessary. You can’t govern what you don’t know about.
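
A structured inventory makes the audit repeatable and gives every later governance step something to work from. Below is a minimal sketch in Python; the status values mirror the audit above, but the field names and example entries are illustrative assumptions rather than any standard schema.

```python
from dataclasses import dataclass
from datetime import date
from enum import Enum


class FeatureStatus(Enum):
    """Current state of an embedded AI feature in your estate."""
    ENABLED = "enabled"
    DISABLED = "disabled"
    AVAILABLE = "available"   # shipped by the vendor but not yet activated


@dataclass
class EmbeddedAIFeature:
    vendor: str
    product: str
    feature: str
    status: FeatureStatus
    processes_customer_data: bool
    last_reviewed: date | None = None   # None means never reviewed


def unreviewed_enabled(inventory: list[EmbeddedAIFeature]) -> list[EmbeddedAIFeature]:
    """Features that are live but have never been through review."""
    return [f for f in inventory
            if f.status is FeatureStatus.ENABLED and f.last_reviewed is None]


# Example entries: the vendors come from this article, the statuses are invented.
inventory = [
    EmbeddedAIFeature("Microsoft", "365", "Copilot",
                      FeatureStatus.ENABLED, processes_customer_data=True),
    EmbeddedAIFeature("Salesforce", "Sales Cloud", "Einstein",
                      FeatureStatus.AVAILABLE, processes_customer_data=True),
    EmbeddedAIFeature("Zoom", "Meetings", "AI Companion",
                      FeatureStatus.DISABLED, processes_customer_data=False),
]

for f in unreviewed_enabled(inventory):
    print(f"Review needed: {f.vendor} {f.product} / {f.feature}")
```

Even a list this simple answers the two questions that matter most: what's enabled, and what's enabled without review.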

Governance Framework for Embedded AI

Once you have visibility, apply governance:

Tier 1: Full Review Required

For embedded AI that:

  • Processes customer data
  • Affects regulated decisions
  • Has significant data residency implications
  • Could impact your brand reputation

Requirements:

  • Document the AI capability and its behaviour
  • Assess data flows and retention
  • Evaluate vendor controls
  • Determine whether to enable, disable, or configure
  • Establish monitoring for ongoing compliance

Tier 2: Light Touch Review

For embedded AI that:

  • Operates on internal data only
  • Provides individual productivity features
  • Has limited data exposure
  • Poses low risk if it misbehaves

Requirements:

  • Document existence of the capability
  • Review default configurations
  • Communicate availability and appropriate use
  • Monitor for issues

Tier 3: Awareness Only

For embedded AI that:

  • Has no meaningful data or decision implications
  • Operates entirely locally
  • Can’t cause significant harm

Requirements:

  • Note existence in inventory
  • No active governance required
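
Expressed as code, the tiering above is a short cascade of risk questions, with the Tier 1 triggers checked first. Here is a sketch, assuming the boolean attributes are captured during the visibility audit; the attribute names are illustrative, not a standard taxonomy.

```python
from dataclasses import dataclass


@dataclass
class RiskProfile:
    """Answers gathered during the visibility audit; field names are illustrative."""
    processes_customer_data: bool
    affects_regulated_decisions: bool
    data_residency_implications: bool
    brand_reputation_impact: bool
    operates_entirely_locally: bool


def assign_tier(p: RiskProfile) -> int:
    """Map a feature's risk profile onto the three tiers above."""
    # Any Tier 1 trigger forces a full review, regardless of other answers.
    if (p.processes_customer_data
            or p.affects_regulated_decisions
            or p.data_residency_implications
            or p.brand_reputation_impact):
        return 1
    # Fully local features with no data or decision implications: awareness only.
    if p.operates_entirely_locally:
        return 3
    # Everything else (internal-only productivity features): light touch review.
    return 2


# A meeting-summary feature that touches customer conversations lands in Tier 1.
profile = RiskProfile(
    processes_customer_data=True,
    affects_regulated_decisions=False,
    data_residency_implications=False,
    brand_reputation_impact=False,
    operates_entirely_locally=False,
)
print(assign_tier(profile))  # 1
```

The order matters: anything that trips a Tier 1 condition gets a full review, however benign it looks otherwise.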

Practical Challenges

Vendors Don’t Make This Easy

AI features are often poorly documented. Data handling disclosures are vague. Configuration options are limited.

What you can do:

  • Demand clarity in vendor conversations
  • Include AI governance requirements in contracts
  • Escalate when vendors are unhelpful
  • Consider AI governance capability in vendor selection

The Pace Exceeds Governance Capacity

New AI features arrive faster than governance processes can review them.

What you can do:

  • Establish rapid-review processes for low-risk features
  • Default to “off” for AI features until reviewed
  • Focus governance attention on high-risk embedded AI
  • Accept that some low-risk features will proceed without formal review

Users Enable Features Unilaterally

Admin access lets people enable AI features without central awareness.

What you can do:

  • Monitor configuration changes in enterprise applications (one approach is sketched after this list)
  • Educate administrators about AI governance requirements
  • Build AI review into change management processes
  • Use vendor admin controls to restrict AI feature enablement
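
For the first of these, a workable pattern is configuration drift detection: keep a reviewed baseline of AI-related settings per application and diff periodic exports against it. The sketch below assumes each application can export its settings as a flat dictionary; the export mechanism is vendor-specific and not shown, and the setting names are invented for illustration.

```python
def detect_ai_config_drift(baseline: dict[str, bool],
                           current: dict[str, bool]) -> list[str]:
    """Compare a fresh settings export against the last reviewed baseline.

    `baseline` holds AI-related settings as they stood at review time;
    `current` is a fresh export from the application's admin console or API
    (the export mechanism is vendor-specific and not shown here).
    """
    alerts = []
    for setting, approved in baseline.items():
        live = current.get(setting)
        if live != approved:
            alerts.append(f"{setting}: reviewed as {approved}, now {live}")
    # Settings present only in the current export are AI features that arrived
    # in an update since the last review; exactly the case described above.
    for setting in current.keys() - baseline.keys():
        alerts.append(f"{setting}: new setting, never reviewed "
                      f"(currently {current[setting]})")
    return alerts


# Setting names are invented for illustration; real keys depend on the vendor.
baseline = {"ai_meeting_summaries": False, "ai_email_drafting": False}
current = {
    "ai_meeting_summaries": True,    # someone flipped it on
    "ai_email_drafting": False,
    "ai_sentiment_analysis": True,   # arrived in an update
}

for alert in detect_ai_config_drift(baseline, current):
    print("ALERT:", alert)
```

Run on a schedule, the same diff catches both unilateral enablement and features that arrive silently in vendor updates.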

The Vendor Conversation

When engaging vendors about embedded AI:

Questions to ask:

  • What AI features are in your product, and how do they work?
  • What data do they process, and where?
  • What retention and deletion policies apply?
  • What configuration options exist?
  • How are we notified of new AI features?
  • What’s your roadmap for AI governance controls?

Contract terms to consider:

  • Notification requirements for new AI features
  • Data handling commitments specific to AI
  • Configuration control requirements
  • Audit rights for AI data handling
  • Right to disable AI features without penalty

Most vendors aren’t prepared for these conversations. That will change as more enterprises ask.

Integration with Broader Governance

Embedded AI governance shouldn’t be separate from your overall AI governance framework.

Integration points:

  • Same principles apply (human oversight, transparency, etc.)
  • Same risk tiering framework
  • Same documentation standards
  • Same monitoring approaches

The difference is process: how embedded AI enters review rather than what the review entails.

A Realistic Goal

Perfect governance of all embedded AI isn’t achievable. The goal is:

  1. Visibility into what embedded AI exists
  2. Risk-based attention focused on high-risk features
  3. Consistent approach across embedded and custom AI
  4. Improving vendor engagement over time

This gets you to “good enough” – embedded AI is known, significant risks are managed, and you’re improving.

Final Thought

Embedded AI is the governance challenge most organisations haven’t fully grasped. It’s increasing rapidly as vendors add AI to everything.

Get ahead of this now. Audit your software estate. Establish governance processes for embedded AI. Engage vendors proactively.

The alternative is discovering problems after embedded AI has already caused them. That’s not a position you want to be in.