AI Risk Self-Assessment Questionnaire

Assessment · Governance

Purpose

Periodic maturity and risk assessment questionnaire covering governance, data protection, model security, and operational monitoring of AI systems.

Related Controls

ISO A.4 · NIST GV-1 · NIST GV-3

1. Governance Maturity

Assess the maturity of AI governance structures, policies, and oversight mechanisms.

Assessment Information

| Field | Value |
| --- | --- |
| Assessment Period | [DATE] to [DATE] |
| Assessed By | [ROLE TITLE], [DEPARTMENT] |
| Business Unit | [DEPARTMENT] |
| Assessment Date | [DATE] |

Governance Questions

Rate each item on a scale of 1-5:

  • 1 — Ad Hoc: No formal process; reactive and inconsistent
  • 2 — Initial: Some awareness; informal processes beginning
  • 3 — Defined: Documented policies and procedures in place
  • 4 — Managed: Policies enforced, measured, and regularly reviewed
  • 5 — Optimized: Continuous improvement with proactive risk management
| # | Question | Score (1-5) | Evidence / Notes |
| --- | --- | --- | --- |
| G1 | Does [ORGANIZATION NAME] have a documented AI governance policy approved by senior leadership? | | |
| G2 | Is there a designated AI governance committee or equivalent body with defined authority and meeting cadence? | | |
| G3 | Are AI-related roles and responsibilities clearly defined (AI system owners, data owners, security leads)? | | |
| G4 | Is there a formal AI risk appetite statement approved by the board or executive leadership? | | |
| G5 | Does the organization maintain an inventory of all AI systems in use (including shadow AI)? | | |
| G6 | Is there a formal approval workflow for new AI use cases with risk-based tiering? | | |
| G7 | Are AI-related policies reviewed and updated at least annually? | | |
| G8 | Is AI governance integrated with the organization's overall enterprise risk management framework? | | |
| G9 | Does the organization track and report on AI governance KPIs to executive leadership? | | |
| G10 | Is there mandatory AI awareness training for all personnel with completion tracking? | | |

Section Score: ___ / 50

Maturity Level Interpretation

| Score Range | Maturity Level | Interpretation |
| --- | --- | --- |
| 10-20 | Ad Hoc | AI governance is largely absent; significant risk exposure |
| 21-30 | Initial | Basic awareness exists but governance is fragmented and inconsistent |
| 31-38 | Defined | Governance framework is documented but enforcement and measurement are developing |
| 39-45 | Managed | Governance is actively enforced, measured, and integrated with enterprise risk |
| 46-50 | Optimized | Governance is proactive, continuously improving, and industry-leading |
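
To make the scoring mechanics concrete, the minimal Python sketch below sums ten 1-5 ratings for a section and maps the total onto the bands in the table above. The band edges mirror the table; the function name and example ratings are illustrative, not part of the template.

```python
# Minimal sketch: map a section score (sum of ten 1-5 ratings) to the
# maturity bands defined in the table above. Adjust band edges if your
# organization uses different cut-offs.

SECTION_BANDS = [
    (10, 20, "Ad Hoc"),
    (21, 30, "Initial"),
    (31, 38, "Defined"),
    (39, 45, "Managed"),
    (46, 50, "Optimized"),
]

def section_maturity(ratings: list[int]) -> tuple[int, str]:
    """Return (section score, maturity level) for ten 1-5 ratings."""
    if len(ratings) != 10 or not all(1 <= r <= 5 for r in ratings):
        raise ValueError("Expected ten ratings, each between 1 and 5")
    total = sum(ratings)
    for low, high, level in SECTION_BANDS:
        if low <= total <= high:
            return total, level
    raise ValueError(f"Score {total} outside expected range 10-50")

# Example: a mostly 'Defined' governance section
print(section_maturity([3, 3, 4, 3, 3, 2, 4, 3, 3, 4]))  # (32, 'Defined')
```

The same helper applies unchanged to the data protection, model security, and monitoring sections, since each uses ten questions on the same scale.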

2. Data Protection Posture

Assess the maturity of data protection practices specific to AI system usage.

Data Protection Questions

| # | Question | Score (1-5) | Evidence / Notes |
| --- | --- | --- | --- |
| D1 | Is there a data classification guide specific to AI system usage that accounts for aggregation and inference risks? | | |
| D2 | Are data classification rules enforced technically (not just by policy) when data is submitted to AI systems? | | |
| D3 | Is there a documented data retention policy for AI prompts, responses, and interaction metadata? | | |
| D4 | Are automated deletion procedures in place for AI interaction data that has exceeded retention periods? | | |
| D5 | Is consent obtained (and recorded) before collecting AI interaction data from customers or external parties? | | |
| D6 | Are data subject rights (access, erasure, portability) fully exercisable for AI interaction data? | | |
| D7 | Are data processing agreements in place with all third-party AI vendors, including prohibitions on data retention and training use? | | |
| D8 | Is data minimization enforced — are personnel trained to strip unnecessary sensitive data before AI submission? | | |
| D9 | Are cross-border data transfer requirements addressed for AI systems processing data in multiple jurisdictions? | | |
| D10 | Is there regular auditing of AI data handling practices (random prompt reviews, access log analysis)? | | |

Section Score: ___ / 50

Key Risk Indicators

| KRI | Current Value | Threshold | Status |
| --- | --- | --- | --- |
| % of AI vendors with executed DPAs | ___% | 100% | ☐ Pass ☐ Fail |
| Data classification training completion rate | ___% | ≥ 95% | ☐ Pass ☐ Fail |
| AI interaction data deletion backlog (records past retention) | ___ | 0 | ☐ Pass ☐ Fail |
| Data subject requests fulfilled within SLA | ___% | 100% | ☐ Pass ☐ Fail |
| Shadow AI discovery findings (last quarter) | ___ | Trending down | ☐ Pass ☐ Fail |
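
If KRI values are collected programmatically, a small helper like the sketch below can evaluate each indicator against its threshold. The comparison directions follow the table; treating "trending down" as "lower than the previous quarter" is an assumption, and all sample values are placeholders.

```python
# Minimal sketch: evaluate KRIs against the thresholds in the table above.
# 'gte' means the current value must meet or exceed the threshold; 'lte'
# means it must not exceed it. All sample values are placeholders.

def kri_status(name: str, current: float, threshold: float, direction: str = "gte") -> str:
    ok = current >= threshold if direction == "gte" else current <= threshold
    return f"{name}: {'Pass' if ok else 'Fail'}"

def trend_status(name: str, current: int, previous: int) -> str:
    # 'Trending down' approximated as strictly lower than the prior quarter (assumption)
    return f"{name}: {'Pass' if current < previous else 'Fail'}"

print(kri_status("AI vendors with executed DPAs (%)", 92.0, 100.0))           # Fail
print(kri_status("Data classification training completion (%)", 97.5, 95.0))  # Pass
print(kri_status("Deletion backlog (records past retention)", 0, 0, "lte"))   # Pass
print(kri_status("Data subject requests within SLA (%)", 100.0, 100.0))       # Pass
print(trend_status("Shadow AI findings (last quarter)", 4, 7))                 # Pass
```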

3. Model Security

Assess the security posture of AI models in use across the organization.

Model Security Questions

| # | Question | Score (1-5) | Evidence / Notes |
| --- | --- | --- | --- |
| M1 | Are prompt injection defenses (input validation, output filtering, system prompt hardening) implemented on all AI systems accepting user input? | | |
| M2 | Is there a regular cadence of adversarial testing (red teaming) for AI systems? | | |
| M3 | Are AI model versions tracked, and is there a rollback capability for rapid reversion to a known-good model? | | |
| M4 | Are AI model supply chains secured (verified model provenance, signed weights, dependency scanning)? | | |
| M5 | Are authentication and authorization controls (RBAC, API key management) enforced on all AI endpoints? | | |
| M6 | Are AI system logs (prompts, responses, errors, access) centralized and monitored in the SIEM? | | |
| M7 | Is there a vulnerability management process specific to AI systems (tracking CVEs, model vulnerabilities, prompt injection techniques)? | | |
| M8 | Are AI agent tool permissions governed by least-privilege principles with explicit allowlists? | | |
| M9 | Is there a formal process for evaluating the security of new AI models before deployment? | | |
| M10 | Are bias and fairness assessments conducted before model deployment and on a recurring basis? | | |

Section Score: ___ / 50

Critical Security Controls Checklist

The following controls are considered non-negotiable for any AI system processing data classified Internal or higher (a recency-check sketch follows the list):

  • [ ] Prompt injection defense checklist completed and current
  • [ ] Red team exercise completed within the last 90 days
  • [ ] All critical and high red team findings remediated
  • [ ] Model rollback tested within the last 30 days
  • [ ] AI system access reviews completed within the last 90 days
  • [ ] Incident response plan tested (tabletop or functional exercise) within the last 6 months
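
Several of these controls carry explicit recency windows. The sketch below, assuming completion dates are tracked somewhere queryable, flags any control whose last completion falls outside its window; the six-month incident response window is approximated as 182 days, and all names and dates are illustrative.

```python
# Minimal sketch: flag critical controls whose last completion date falls
# outside the recency window stated in the checklist above. The 182-day
# value approximates "6 months"; control names and dates are illustrative.

from datetime import date, timedelta

MAX_AGE_DAYS = {
    "red_team_exercise": 90,
    "model_rollback_test": 30,
    "access_review": 90,
    "incident_response_exercise": 182,
}

def overdue_controls(last_completed: dict[str, date], today: date) -> list[str]:
    """Return controls that are missing or older than their maximum allowed age."""
    return [
        control
        for control, max_age in MAX_AGE_DAYS.items()
        if today - last_completed.get(control, date.min) > timedelta(days=max_age)
    ]

print(overdue_controls(
    {
        "red_team_exercise": date(2023, 11, 10),
        "model_rollback_test": date(2024, 3, 1),
        "access_review": date(2024, 2, 15),
        "incident_response_exercise": date(2023, 12, 1),
    },
    today=date(2024, 3, 20),
))  # ['red_team_exercise']
```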

4. Operational Monitoring

Assess the maturity of monitoring, detection, and response capabilities for AI systems.

Operational Monitoring Questions

| # | Question | Score (1-5) | Evidence / Notes |
| --- | --- | --- | --- |
| O1 | Are AI system health metrics (latency, error rate, throughput) monitored with defined SLAs and automated alerting? | | |
| O2 | Are AI outputs monitored for quality, accuracy, and safety (drift detection, hallucination detection, harmful content detection)? | | |
| O3 | Is there automated detection of anomalous AI usage patterns (unusual query volumes, off-hours access, atypical data patterns)? | | |
| O4 | Are AI-specific incident response procedures documented, tested, and practiced? | | |
| O5 | Is there a defined escalation path for AI-specific incidents that is distinct from general IT incidents? | | |
| O6 | Are AI system costs monitored with budget alerts and anomaly detection (cost spikes may indicate abuse)? | | |
| O7 | Is model performance (accuracy, fairness metrics) tracked over time with automated drift alerting? | | |
| O8 | Are AI system changes (model updates, configuration changes, prompt modifications) logged and auditable? | | |
| O9 | Is there a shadow AI detection program to identify unauthorized AI tool usage? | | |
| O10 | Are post-incident reviews conducted for all AI incidents with lessons learned integrated into processes? | | |

Section Score: ___ / 50

Monitoring Coverage Matrix

| AI System | Health Monitoring | Security Monitoring | Quality Monitoring | Cost Monitoring | Coverage Score |
| --- | --- | --- | --- | --- | --- |
| [System 1] | ☐ Yes ☐ No | ☐ Yes ☐ No | ☐ Yes ☐ No | ☐ Yes ☐ No | ___ / 4 |
| [System 2] | ☐ Yes ☐ No | ☐ Yes ☐ No | ☐ Yes ☐ No | ☐ Yes ☐ No | ___ / 4 |
| [System 3] | ☐ Yes ☐ No | ☐ Yes ☐ No | ☐ Yes ☐ No | ☐ Yes ☐ No | ___ / 4 |

Target: 100% of production AI systems have coverage across all four monitoring domains.
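
Where monitoring coverage is tracked in inventory data rather than filled in by hand, a sketch like the one below can compute each system's coverage score and flag gaps against the 4/4 target. System names and domain flags are placeholders, not part of the template.

```python
# Minimal sketch: compute the coverage score for each AI system across the
# four monitoring domains in the matrix above, flagging anything below 4/4.

MONITORING_DOMAINS = ("health", "security", "quality", "cost")

def coverage_report(systems: dict[str, dict[str, bool]]) -> None:
    for name, domains in systems.items():
        score = sum(domains.get(d, False) for d in MONITORING_DOMAINS)
        gap = "" if score == 4 else "  <-- below target"
        print(f"{name}: {score}/4{gap}")

coverage_report({
    "customer-support-bot": {"health": True, "security": True, "quality": True, "cost": True},
    "internal-code-assistant": {"health": True, "security": False, "quality": False, "cost": True},
})
```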

5. Scoring & Recommendations

Calculate the overall score and generate prioritized recommendations based on assessment results.

Overall Score

| Domain | Score | Max | Percentage | Maturity Level |
| --- | --- | --- | --- | --- |
| Governance Maturity | ___ | 50 | ___% | |
| Data Protection Posture | ___ | 50 | ___% | |
| Model Security | ___ | 50 | ___% | |
| Operational Monitoring | ___ | 50 | ___% | |
| Overall | ___ | 200 | ___% | |

Maturity Level Determination

| Overall Score | Maturity Level | Description |
| --- | --- | --- |
| 40-80 | Level 1 — Ad Hoc | AI risk management is largely reactive and unstructured. Immediate action required to establish foundational controls. |
| 81-120 | Level 2 — Initial | Basic AI risk awareness exists but practices are inconsistent. Priority: formalize governance and critical security controls. |
| 121-160 | Level 3 — Defined | AI risk management framework is documented and partially implemented. Priority: enforcement, measurement, and gap closure. |
| 161-185 | Level 4 — Managed | AI risk management is actively enforced and measured. Priority: optimization, automation, and proactive risk identification. |
| 186-200 | Level 5 — Optimized | AI risk management is industry-leading with continuous improvement. Priority: maintain leadership, share best practices, prepare for emerging risks. |
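
The roll-up from section scores to an overall maturity level can also be automated. The sketch below sums the four domain scores, computes the percentage, and looks up the Level 1-5 band from the table above; the example scores are illustrative.

```python
# Minimal sketch: combine the four section scores into the overall score and
# map it to the maturity bands defined in the table above.

OVERALL_BANDS = [
    (40, 80, "Level 1 — Ad Hoc"),
    (81, 120, "Level 2 — Initial"),
    (121, 160, "Level 3 — Defined"),
    (161, 185, "Level 4 — Managed"),
    (186, 200, "Level 5 — Optimized"),
]

def overall_maturity(section_scores: dict[str, int]) -> tuple[int, float, str]:
    """Return (overall score, percentage, maturity level) from per-domain scores."""
    total = sum(section_scores.values())
    pct = 100 * total / 200
    level = next(name for low, high, name in OVERALL_BANDS if low <= total <= high)
    return total, pct, level

print(overall_maturity({
    "Governance Maturity": 32,
    "Data Protection Posture": 28,
    "Model Security": 35,
    "Operational Monitoring": 30,
}))  # (125, 62.5, 'Level 3 — Defined')
```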

Prioritized Recommendations

Based on the assessment results, generate recommendations using this priority framework:

| Priority | Criteria | Timeline |
| --- | --- | --- |
| P1 — Critical | Score of 1 on any question involving data protection, model security, or incident response | 30 days |
| P2 — High | Score of 2 on any question, or score of 1 on governance/monitoring questions | 60 days |
| P3 — Medium | Score of 3 on any question | 90 days |
| P4 — Low | Scores of 4 with identified improvement opportunities | Next review cycle |
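
A helper along the lines of the sketch below can assign a priority and remediation window to each low-scoring question, following the P1-P4 criteria in the table. Treating data protection and model security as the "critical" domains, and flagging incident-response questions separately, are assumptions consistent with the criteria above; the identifiers shown are illustrative.

```python
# Minimal sketch: map a question's score and domain to the priority tiers
# defined in the table above. Domain identifiers and the incident_response
# flag are assumptions used only for illustration.

CRITICAL_DOMAINS = {"data_protection", "model_security"}

def priority_for(domain: str, score: int, incident_response: bool = False) -> tuple[str, str]:
    if score == 1 and (domain in CRITICAL_DOMAINS or incident_response):
        return "P1 — Critical", "30 days"
    if score <= 2:
        return "P2 — High", "60 days"
    if score == 3:
        return "P3 — Medium", "90 days"
    return "P4 — Low", "Next review cycle"

print(priority_for("model_security", 1))                                   # P1, 30 days
print(priority_for("governance", 1))                                       # P2, 60 days
print(priority_for("operational_monitoring", 3, incident_response=True))   # P3, 90 days
```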

Recommendation Register

| # | Domain | Question Ref | Current Score | Target Score | Recommendation | Priority | Owner | Due Date |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 1 | | | | | | | [ROLE TITLE] | [DATE] |
| 2 | | | | | | | [ROLE TITLE] | [DATE] |
| 3 | | | | | | | [ROLE TITLE] | [DATE] |

Sign-Off

| Role | Name | Signature | Date |
| --- | --- | --- | --- |
| Assessor | | | [DATE] |
| AI Governance Committee Chair | | | [DATE] |
| CISO | | | [DATE] |
| Business Unit Leader | | | [DATE] |

Next Assessment Due: [DATE] (12 months from this assessment, or sooner if triggered by significant AI incident or organizational change)
