AI Security Checklist for Enterprise Deployments


Security teams are increasingly asked to assess AI deployments. But AI security has unique characteristics that traditional security frameworks don’t fully address.

This checklist provides a practical starting point for enterprise AI security assessment.

Data Security

Data Input Controls

[ ] Sensitive data identification

  • What categories of sensitive data will the AI system process?
  • Is there classification in place for AI inputs?
  • Are there controls preventing inappropriate data from entering AI systems?

[ ] Data minimisation

  • Is the AI receiving only the data it needs?
  • Are there filters removing unnecessary sensitive information before AI processing?
  • Is data anonymised or pseudonymised where possible?
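
A minimal sketch of a pre-processing redaction filter, assuming regex-based matching is acceptable as a first pass (the pattern names and expressions here are illustrative; a production deployment would use a proper PII/DLP library):

```python
import re

# Illustrative patterns only; real coverage needs a dedicated PII detection tool.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact(text: str) -> str:
    """Replace matched sensitive values with placeholders before AI processing."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text
```

Running inputs through a filter like this before they reach the AI system enforces minimisation mechanically rather than relying on user discipline.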

[ ] Input validation

  • Are inputs validated before AI processing?
  • Are there protections against prompt injection attacks?
  • Is there filtering for malicious content in inputs?
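
A sketch of basic structural validation before AI processing, assuming a character limit tuned to the model's context window (the limit below is an assumption):

```python
MAX_INPUT_CHARS = 8_000  # assumed limit; tune to your model's context window

def validate_input(text: str) -> str:
    """Reject malformed inputs and strip control characters that can
    hide instructions or break downstream parsers."""
    if not isinstance(text, str):
        raise TypeError("AI input must be a string")
    if len(text) > MAX_INPUT_CHARS:
        raise ValueError("input exceeds maximum length")
    return "".join(ch for ch in text if ch.isprintable() or ch in "\n\t")
```

This covers structural validation only; prompt injection screening (covered under Threat-Specific Considerations below) needs additional measures.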

Data Processing Controls

[ ] Processing location

  • Where is AI processing physically occurring?
  • Does processing location meet data residency requirements?
  • Is data transmitted securely to processing locations?

[ ] Vendor data handling

  • How do AI vendors handle your data during processing?
  • Is your data used for model training? Can this be disabled?
  • What data retention policies apply?

[ ] Encryption

  • Is data encrypted in transit to AI systems?
  • Is data encrypted at rest where stored?
  • Are encryption keys appropriately managed?
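
For encryption in transit, a quick check in Python is that the TLS context used for AI API calls has not been downgraded. `ssl.create_default_context()` enforces certificate validation and hostname checking by default:

```python
import ssl

def checked_tls_context() -> ssl.SSLContext:
    """Build a TLS context and fail fast if verification has been disabled."""
    ctx = ssl.create_default_context()
    if ctx.verify_mode != ssl.CERT_REQUIRED or not ctx.check_hostname:
        raise RuntimeError("TLS context does not enforce certificate verification")
    return ctx
```

Auditing for code that sets `verify_mode = ssl.CERT_NONE` or passes `verify=False` to HTTP clients is a common quick win here.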

Data Output Controls

[ ] Output classification

  • Are AI outputs assessed for sensitivity before distribution?
  • Could AI outputs reveal sensitive input information?
  • Are there controls on who can access AI outputs?

[ ] Logging and audit

  • Are AI interactions logged appropriately?
  • Can you reconstruct what data the AI processed?
  • Are logs protected from tampering?
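
One way to make logs tamper-evident is hash chaining: each entry includes a digest of the previous entry, so any later modification breaks verification. A minimal sketch, assuming JSON-serialisable events:

```python
import hashlib
import json

def append_entry(log: list, event: dict) -> None:
    """Append an event whose hash is chained to the previous entry."""
    prev = log[-1]["hash"] if log else "0" * 64
    payload = json.dumps(event, sort_keys=True)
    digest = hashlib.sha256((prev + payload).encode()).hexdigest()
    log.append({"event": event, "hash": digest})

def verify(log: list) -> bool:
    """Recompute the chain; any edited or removed entry fails verification."""
    prev = "0" * 64
    for entry in log:
        payload = json.dumps(entry["event"], sort_keys=True)
        if hashlib.sha256((prev + payload).encode()).hexdigest() != entry["hash"]:
            return False
        prev = entry["hash"]
    return True
```

This does not prevent tampering (an attacker with write access can rewrite the whole chain), but combined with shipping digests to a separate system it makes silent tampering detectable.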

Access Security

Authentication

[ ] User authentication

  • Is strong authentication required for AI system access?
  • Is MFA enabled?
  • Are service accounts properly secured?

[ ] API authentication

  • Are AI APIs protected with appropriate authentication?
  • Are API keys rotated regularly?
  • Are there rate limits to prevent abuse?
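
Rate limiting is commonly implemented as a token bucket: each client accrues tokens at a steady rate up to a burst capacity, and each request spends one. A minimal per-client sketch:

```python
import time

class TokenBucket:
    """Allow `rate` requests/second with bursts up to `capacity`."""

    def __init__(self, rate: float, capacity: float):
        self.rate, self.capacity = rate, capacity
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```

In practice an API gateway usually provides this; the sketch shows the semantics you would configure (sustained rate plus burst allowance).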

Authorisation

[ ] Role-based access

  • Are AI capabilities assigned based on role requirements?
  • Is there granular control over who can use which AI features?
  • Are access rights regularly reviewed?
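
The granular-control question above reduces to an explicit mapping from roles to permitted AI features. A sketch with hypothetical role and feature names (these are illustrative, not a recommended taxonomy):

```python
# Hypothetical role-to-feature map; names are illustrative only.
ROLE_FEATURES = {
    "analyst": {"chat", "summarise"},
    "engineer": {"chat", "summarise", "code_generation"},
    "admin": {"chat", "summarise", "code_generation", "model_config"},
}

def can_use(role: str, feature: str) -> bool:
    """Deny by default: unknown roles and unlisted features are refused."""
    return feature in ROLE_FEATURES.get(role, set())
```

The deny-by-default behaviour for unknown roles is the important property; the map itself would normally live in your IAM system rather than in code.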

[ ] Privileged access

  • Who can modify AI system configurations?
  • Are administrative actions logged?
  • Is privileged access appropriately limited?

Model Security

Model Integrity

[ ] Model provenance

  • Where did the model come from?
  • Is the model source trustworthy?
  • Can you verify model integrity?
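
Where the model source publishes checksums, integrity verification can be as simple as comparing a SHA-256 digest. A sketch, assuming you obtain the expected digest from the model source over a trusted channel:

```python
import hashlib
from pathlib import Path

def verify_model(path: Path, expected_sha256: str) -> bool:
    """Compare a model file's SHA-256 against a published checksum,
    reading in chunks so large weight files don't exhaust memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest() == expected_sha256
```

Signature verification (where the vendor signs releases) is stronger than bare checksums, since a compromised download site can replace both the file and its hash.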

[ ] Model updates

  • How are model updates validated before deployment?
  • Is there rollback capability if updates cause problems?
  • Are update processes documented and controlled?

Model Behaviour

[ ] Output validation

  • Are AI outputs validated before taking action?
  • Are there guardrails preventing harmful outputs?
  • Is there human review for high-stakes decisions?
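
When AI outputs drive actions, a guardrail can require the output to parse cleanly and name an allow-listed action before anything executes. A sketch with a hypothetical action list:

```python
import json

ALLOWED_ACTIONS = {"create_ticket", "send_summary"}  # illustrative allow-list

def parse_action(ai_output: str) -> dict:
    """Validate an AI-proposed action before acting on it; reject anything
    malformed or outside the allow-list rather than trusting the model."""
    try:
        action = json.loads(ai_output)
    except json.JSONDecodeError as exc:
        raise ValueError("output is not valid JSON") from exc
    if action.get("name") not in ALLOWED_ACTIONS:
        raise ValueError(f"action {action.get('name')!r} is not permitted")
    return action
```

The key design choice is that the allow-list lives outside the model: the AI proposes, but deterministic code decides what is executable.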

[ ] Monitoring for drift

  • Is model behaviour monitored for unexpected changes?
  • Are there alerts for anomalous outputs?
  • Is there a process for investigating model behaviour issues?
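
A simple starting point for drift alerting is a rolling baseline over some numeric output metric (length, confidence score, refusal rate) with a z-score threshold. A sketch, with the window and threshold as assumed tuning parameters:

```python
from collections import deque
from statistics import mean, stdev

class DriftMonitor:
    """Flag outputs whose metric deviates more than `threshold` standard
    deviations from a rolling baseline of recent values."""

    def __init__(self, window: int = 100, threshold: float = 3.0):
        self.values = deque(maxlen=window)
        self.threshold = threshold

    def observe(self, value: float) -> bool:
        anomalous = False
        if len(self.values) >= 10:  # wait for a minimal baseline
            mu, sigma = mean(self.values), stdev(self.values)
            if sigma > 0 and abs(value - mu) / sigma > self.threshold:
                anomalous = True
        self.values.append(value)
        return anomalous
```

This catches abrupt shifts; gradual drift needs comparison against a fixed reference window rather than a rolling one, since a rolling baseline drifts along with the model.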

Infrastructure Security

Platform Security

[ ] Cloud security

  • Do AI cloud services meet your security requirements?
  • Are appropriate cloud security controls enabled?
  • Are cloud configurations reviewed regularly?

[ ] Network security

  • Is AI system network access appropriately restricted?
  • Are connections secured and monitored?
  • Are there protections against network-based attacks?

Integration Security

[ ] Integration points

  • What systems does the AI connect to?
  • Are integration points secured appropriately?
  • Can AI be used to access systems beyond intended scope?

[ ] Data flows

  • Are data flows between AI and other systems documented?
  • Are data flows secured?
  • Are unnecessary data flows prevented?

Governance and Compliance

Policies and Procedures

[ ] AI security policy

  • Is there a documented AI security policy?
  • Does it address AI-specific security considerations?
  • Is it communicated and enforced?

[ ] Incident response

  • Does incident response cover AI-specific incidents?
  • Are AI-related security events defined?
  • Is there a capability in place to respond to AI security incidents?

Compliance

[ ] Regulatory requirements

  • What regulations apply to your AI usage?
  • Are AI-specific compliance requirements understood?
  • Is compliance being monitored and verified?

[ ] Audit capability

  • Can you demonstrate AI security controls to auditors?
  • Is documentation sufficient for compliance evidence?
  • Are audit trails complete?

Threat-Specific Considerations

Prompt Injection

[ ] Input sanitisation

  • Are inputs checked for prompt injection patterns?
  • Is there separation between system instructions and user inputs?
  • Are there tests for prompt injection vulnerabilities?
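
A sketch of both checks: a keyword screen as a coarse first pass (keyword lists are easy to evade, so this is a signal, not a defence), and role separation so user text is never concatenated into system instructions:

```python
INJECTION_MARKERS = (
    "ignore previous instructions",
    "disregard your system prompt",
    "you are now",
)  # illustrative patterns; treat matches as a signal, not a verdict

def screen(user_input: str) -> bool:
    """Return True if the input matches known injection phrasing."""
    lowered = user_input.lower()
    return any(marker in lowered for marker in INJECTION_MARKERS)

def build_messages(system_prompt: str, user_input: str) -> list:
    """Keep system instructions in a separate message role rather than
    interpolating user text into the instruction string."""
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_input},
    ]
```

The separation in `build_messages` matters more than the keyword screen: templates that splice user input directly into instructions give injected text the same authority as your own.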

Data Poisoning

[ ] Training data controls (if training/fine-tuning)

  • Is training data validated for integrity?
  • Are there controls against malicious training data?
  • Is training data provenance tracked?

Model Extraction

[ ] API controls

  • Are there controls preventing model extraction through extensive querying?
  • Are unusual query patterns detected?
  • Are there rate limits that would prevent extraction attempts?
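
A coarse detection signal for extraction attempts is per-client query volume over a sliding window, since harvesting a model typically requires sustained high-volume querying. A sketch with assumed thresholds:

```python
import time
from collections import defaultdict, deque

class QueryMonitor:
    """Flag clients whose query count within a sliding window exceeds a cap,
    a rough signal for extraction-style harvesting."""

    def __init__(self, max_queries: int = 1000, window_seconds: float = 3600):
        self.max_queries = max_queries
        self.window = window_seconds
        self.history = defaultdict(deque)

    def record(self, client_id: str, now=None) -> bool:
        now = time.monotonic() if now is None else now
        q = self.history[client_id]
        q.append(now)
        while q and now - q[0] > self.window:  # drop entries outside the window
            q.popleft()
        return len(q) > self.max_queries
```

Sophisticated extraction distributes queries across clients and time, so volume caps should be paired with detection of systematic input coverage (e.g. near-grid sampling of the input space).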

Adversarial Inputs

[ ] Robustness testing

  • Has the AI been tested against adversarial inputs?
  • Are there known vulnerabilities to adversarial examples?
  • Are there mitigations in place?
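
Robustness testing can start with something as simple as measuring label stability under small input perturbations. A toy sketch (character-level case flips are a crude stand-in for real adversarial example generation, and `classify` is whatever prediction function you are assessing):

```python
import random

def perturb(text: str, n: int = 3, seed: int = 0) -> str:
    """Apply small character-level perturbations (case flips) to an input."""
    rng = random.Random(seed)
    chars = list(text)
    for _ in range(min(n, len(chars))):
        i = rng.randrange(len(chars))
        chars[i] = chars[i].swapcase()
    return "".join(chars)

def stability_rate(classify, texts, trials: int = 5) -> float:
    """Fraction of inputs whose label survives all perturbation trials."""
    stable = 0
    for text in texts:
        base = classify(text)
        if all(classify(perturb(text, seed=s)) == base for s in range(trials)):
            stable += 1
    return stable / len(texts)
```

Dedicated tooling generates far stronger perturbations than this; the point of the sketch is that robustness can be quantified and tracked as a metric rather than assessed anecdotally.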

Implementation Notes

Prioritisation: Not all checklist items carry equal weight. Focus first on:

  1. Sensitive data protection
  2. Access controls
  3. Output validation
  4. Audit logging

Risk-based approach: Apply controls proportional to risk. High-risk AI deployments (customer-facing, decision-making) need more rigorous security than low-risk internal tools.

Continuous assessment: Security isn’t one-time. Regular reassessment is necessary as AI systems evolve and threats change.

Collaboration: AI security requires collaboration between security, AI, and business teams. No single team has complete perspective.

Final Thought

AI security isn’t fundamentally different from other security domains, but it has unique considerations that generic frameworks miss. This checklist provides a starting point.

Adapt it to your specific context, integrate it with your broader security program, and keep updating as the AI threat landscape evolves.

Security done well enables AI adoption. Security done poorly creates incidents that set AI programs back. Take it seriously from the start.