AI Risk Self-Assessment Questionnaire
Purpose
Periodic maturity and risk assessment questionnaire covering governance, data protection, model security, and operational monitoring of AI systems.
Related Controls
1. Governance Maturity
Assess the maturity of AI governance structures, policies, and oversight mechanisms.
Assessment Information
| Field | Value |
|---|---|
| Assessment Period | [DATE] to [DATE] |
| Assessed By | [ROLE TITLE], [DEPARTMENT] |
| Business Unit | [DEPARTMENT] |
| Assessment Date | [DATE] |
Governance Questions
Rate each item on a scale of 1-5:
- 1 — Ad Hoc: No formal process; reactive and inconsistent
- 2 — Initial: Some awareness; informal processes beginning
- 3 — Defined: Documented policies and procedures in place
- 4 — Managed: Policies enforced, measured, and regularly reviewed
- 5 — Optimized: Continuous improvement with proactive risk management
| # | Question | Score (1-5) | Evidence / Notes |
|---|---|---|---|
| G1 | Does [ORGANIZATION NAME] have a documented AI governance policy approved by senior leadership? | | |
| G2 | Is there a designated AI governance committee or equivalent body with defined authority and meeting cadence? | | |
| G3 | Are AI-related roles and responsibilities clearly defined (AI system owners, data owners, security leads)? | | |
| G4 | Is there a formal AI risk appetite statement approved by the board or executive leadership? | | |
| G5 | Does the organization maintain an inventory of all AI systems in use (including shadow AI)? | | |
| G6 | Is there a formal approval workflow for new AI use cases with risk-based tiering? | | |
| G7 | Are AI-related policies reviewed and updated at least annually? | | |
| G8 | Is AI governance integrated with the organization's overall enterprise risk management framework? | | |
| G9 | Does the organization track and report on AI governance KPIs to executive leadership? | | |
| G10 | Is there mandatory AI awareness training for all personnel with completion tracking? | | |
Section Score: ___ / 50
Maturity Level Interpretation
| Score Range | Maturity Level | Interpretation |
|---|---|---|
| 10-20 | Ad Hoc | AI governance is largely absent; significant risk exposure |
| 21-30 | Initial | Basic awareness exists but governance is fragmented and inconsistent |
| 31-38 | Defined | Governance framework is documented but enforcement and measurement are developing |
| 39-45 | Managed | Governance is actively enforced, measured, and integrated with enterprise risk |
| 46-50 | Optimized | Governance is proactive, continuously improving, and industry-leading |
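The score-to-level lookup above is mechanical and can be automated when tallying results. A minimal sketch in Python; the function name is illustrative and not part of the questionnaire:

```python
def section_maturity(score: int) -> str:
    """Map a 10-question section score (valid range 10-50) to its maturity level."""
    if not 10 <= score <= 50:
        raise ValueError("section score must be between 10 and 50")
    if score <= 20:
        return "Ad Hoc"
    if score <= 30:
        return "Initial"
    if score <= 38:
        return "Defined"
    if score <= 45:
        return "Managed"
    return "Optimized"
```

The same band boundaries apply to each of the four domain sections, since all use ten questions scored 1-5.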
2. Data Protection Posture
Assess the maturity of data protection practices specific to AI system usage.
Data Protection Questions
| # | Question | Score (1-5) | Evidence / Notes |
|---|---|---|---|
| D1 | Is there a data classification guide specific to AI system usage that accounts for aggregation and inference risks? | | |
| D2 | Are data classification rules enforced technically (not just by policy) when data is submitted to AI systems? | | |
| D3 | Is there a documented data retention policy for AI prompts, responses, and interaction metadata? | | |
| D4 | Are automated deletion procedures in place for AI interaction data that has exceeded retention periods? | | |
| D5 | Is consent obtained (and recorded) before collecting AI interaction data from customers or external parties? | | |
| D6 | Are data subject rights (access, erasure, portability) fully exercisable for AI interaction data? | | |
| D7 | Are data processing agreements in place with all third-party AI vendors, including prohibitions on data retention and training use? | | |
| D8 | Is data minimization enforced — are personnel trained to strip unnecessary sensitive data before AI submission? | | |
| D9 | Are cross-border data transfer requirements addressed for AI systems processing data in multiple jurisdictions? | | |
| D10 | Is there regular auditing of AI data handling practices (random prompt reviews, access log analysis)? | | |
Section Score: ___ / 50
Key Risk Indicators
| KRI | Current Value | Threshold | Status |
|---|---|---|---|
| % of AI vendors with executed DPAs | ___% | 100% | ☐ Pass ☐ Fail |
| Data classification training completion rate | ___% | ≥ 95% | ☐ Pass ☐ Fail |
| AI interaction data deletion backlog (records past retention) | ___ | 0 | ☐ Pass ☐ Fail |
| Data subject requests fulfilled within SLA | ___% | 100% | ☐ Pass ☐ Fail |
| Shadow AI discovery findings (last quarter) | ___ | Trending down | ☐ Pass ☐ Fail |
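Because each KRI has a different threshold shape (an exact 100%, a minimum rate, a hard zero), evaluation is easiest to express as a per-KRI predicate. A minimal sketch, with illustrative sample values; the trend-based shadow AI KRI is omitted because it requires prior-quarter history rather than a single value:

```python
def kri_status(value, passes) -> str:
    """Return 'Pass' or 'Fail' for a single KRI given its threshold predicate."""
    return "Pass" if passes(value) else "Fail"

# Predicates mirror the thresholds in the table above; sample values are illustrative.
assert kri_status(100.0, lambda v: v >= 100.0) == "Pass"  # vendors with executed DPAs
assert kri_status(92.0, lambda v: v >= 95.0) == "Fail"    # training completion rate
assert kri_status(0, lambda v: v == 0) == "Pass"          # deletion backlog
```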
3. Model Security
Assess the security posture of AI models in use across the organization.
Model Security Questions
| # | Question | Score (1-5) | Evidence / Notes |
|---|---|---|---|
| M1 | Are prompt injection defenses (input validation, output filtering, system prompt hardening) implemented on all AI systems accepting user input? | | |
| M2 | Is there a regular cadence of adversarial testing (red teaming) for AI systems? | | |
| M3 | Are AI model versions tracked, and is there a rollback capability for rapid reversion to a known-good model? | | |
| M4 | Are AI model supply chains secured (verified model provenance, signed weights, dependency scanning)? | | |
| M5 | Are authentication and authorization controls (RBAC, API key management) enforced on all AI endpoints? | | |
| M6 | Are AI system logs (prompts, responses, errors, access) centralized and monitored in the SIEM? | | |
| M7 | Is there a vulnerability management process specific to AI systems (tracking CVEs, model vulnerabilities, prompt injection techniques)? | | |
| M8 | Are AI agent tool permissions governed by least-privilege principles with explicit allowlists? | | |
| M9 | Is there a formal process for evaluating the security of new AI models before deployment? | | |
| M10 | Are bias and fairness assessments conducted before model deployment and on a recurring basis? | | |
Section Score: ___ / 50
Critical Security Controls Checklist
The following controls are considered non-negotiable for any AI system processing data classified Internal or higher:
- [ ] Prompt injection defense checklist completed and current
- [ ] Red team exercise completed within the last 90 days
- [ ] All critical and high red team findings remediated
- [ ] Model rollback tested within the last 30 days
- [ ] AI system access reviews completed within the last 90 days
- [ ] Incident response plan tested (tabletop or functional exercise) within the last 6 months
4. Operational Monitoring
Assess the maturity of monitoring, detection, and response capabilities for AI systems.
Operational Monitoring Questions
| # | Question | Score (1-5) | Evidence / Notes |
|---|---|---|---|
| O1 | Are AI system health metrics (latency, error rate, throughput) monitored with defined SLAs and automated alerting? | | |
| O2 | Are AI outputs monitored for quality, accuracy, and safety (drift detection, hallucination detection, harmful content detection)? | | |
| O3 | Is there automated detection of anomalous AI usage patterns (unusual query volumes, off-hours access, atypical data patterns)? | | |
| O4 | Are AI-specific incident response procedures documented, tested, and practiced? | | |
| O5 | Is there a defined escalation path for AI-specific incidents that is distinct from general IT incidents? | | |
| O6 | Are AI system costs monitored with budget alerts and anomaly detection (cost spikes may indicate abuse)? | | |
| O7 | Is model performance (accuracy, fairness metrics) tracked over time with automated drift alerting? | | |
| O8 | Are AI system changes (model updates, configuration changes, prompt modifications) logged and auditable? | | |
| O9 | Is there a shadow AI detection program to identify unauthorized AI tool usage? | | |
| O10 | Are post-incident reviews conducted for all AI incidents with lessons learned integrated into processes? | | |
Section Score: ___ / 50
Monitoring Coverage Matrix
| AI System | Health Monitoring | Security Monitoring | Quality Monitoring | Cost Monitoring | Coverage Score |
|---|---|---|---|---|---|
| [System 1] | ☐ Yes ☐ No | ☐ Yes ☐ No | ☐ Yes ☐ No | ☐ Yes ☐ No | ___ / 4 |
| [System 2] | ☐ Yes ☐ No | ☐ Yes ☐ No | ☐ Yes ☐ No | ☐ Yes ☐ No | ___ / 4 |
| [System 3] | ☐ Yes ☐ No | ☐ Yes ☐ No | ☐ Yes ☐ No | ☐ Yes ☐ No | ___ / 4 |
Target: 100% of production AI systems have coverage across all four monitoring domains.
5. Scoring & Recommendations
Calculate the overall score and generate prioritized recommendations based on assessment results.
Overall Score
| Domain | Score | Max | Percentage | Maturity Level |
|---|---|---|---|---|
| Governance Maturity | ___ | 50 | ___% | |
| Data Protection Posture | ___ | 50 | ___% | |
| Model Security | ___ | 50 | ___% | |
| Operational Monitoring | ___ | 50 | ___% | |
| Overall | ___ | 200 | ___% | |
Maturity Level Determination
| Overall Score | Maturity Level | Description |
|---|---|---|
| 40-80 | Level 1 — Ad Hoc | AI risk management is largely reactive and unstructured. Immediate action required to establish foundational controls. |
| 81-120 | Level 2 — Initial | Basic AI risk awareness exists but practices are inconsistent. Priority: formalize governance and critical security controls. |
| 121-160 | Level 3 — Defined | AI risk management framework is documented and partially implemented. Priority: enforcement, measurement, and gap closure. |
| 161-185 | Level 4 — Managed | AI risk management is actively enforced and measured. Priority: optimization, automation, and proactive risk identification. |
| 186-200 | Level 5 — Optimized | AI risk management is industry-leading with continuous improvement. Priority: maintain leadership, share best practices, prepare for emerging risks. |
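The overall tally and level determination above can be sketched as a small helper (a minimal sketch in Python; the function name and example scores are illustrative):

```python
def overall_maturity(domain_scores: dict) -> tuple:
    """Sum the four domain scores (each 10-50) and map the 40-200 total
    to the maturity level from the determination table."""
    total = sum(domain_scores.values())
    if total <= 80:
        level = "Level 1 - Ad Hoc"
    elif total <= 120:
        level = "Level 2 - Initial"
    elif total <= 160:
        level = "Level 3 - Defined"
    elif total <= 185:
        level = "Level 4 - Managed"
    else:
        level = "Level 5 - Optimized"
    return total, level

# Illustrative scores: 38 + 34 + 31 + 29 = 132, which falls in the 121-160 band.
total, level = overall_maturity({
    "Governance Maturity": 38, "Data Protection Posture": 34,
    "Model Security": 31, "Operational Monitoring": 29,
})
```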
Prioritized Recommendations
Based on the assessment results, generate recommendations using this priority framework:
| Priority | Criteria | Timeline |
|---|---|---|
| P1 — Critical | Score of 1 on any question involving data protection, model security, or incident response | 30 days |
| P2 — High | Score of 2 on any question, or score of 1 on governance/monitoring questions | 60 days |
| P3 — Medium | Score of 3 on any question | 90 days |
| P4 — Low | Scores of 4 with identified improvement opportunities | Next review cycle |
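The priority framework above maps one question's score and domain to a priority and remediation timeline, which can be expressed directly as a lookup. A minimal sketch, assuming incident-response questions are flagged separately since they sit in the Operational Monitoring domain; names are illustrative:

```python
CRITICAL_DOMAINS = {"Data Protection Posture", "Model Security"}

def recommendation_priority(domain: str, score: int,
                            incident_response: bool = False):
    """Return (priority, timeline) for one question per the priority framework,
    or None when a score of 5 warrants no recommendation."""
    if score == 1 and (domain in CRITICAL_DOMAINS or incident_response):
        return "P1 - Critical", "30 days"
    if score <= 2:  # score of 2 anywhere, or 1 on governance/monitoring
        return "P2 - High", "60 days"
    if score == 3:
        return "P3 - Medium", "90 days"
    if score == 4:
        return "P4 - Low", "Next review cycle"
    return None
```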
Recommendation Register
| # | Domain | Question Ref | Current Score | Target Score | Recommendation | Priority | Owner | Due Date |
|---|---|---|---|---|---|---|---|---|
| 1 | | | | | | | [ROLE TITLE] | [DATE] |
| 2 | | | | | | | [ROLE TITLE] | [DATE] |
| 3 | | | | | | | [ROLE TITLE] | [DATE] |
Sign-Off
| Role | Name | Signature | Date |
|---|---|---|---|
| Assessor | | | [DATE] |
| AI Governance Committee Chair | | | [DATE] |
| CISO | | | [DATE] |
| Business Unit Leader | | | [DATE] |
Next Assessment Due: [DATE] (12 months from this assessment, or sooner if triggered by significant AI incident or organizational change)