AI Vendor Security Evaluation Rubric
Purpose
Scoring rubric for evaluating the security, transparency, and contractual posture of AI vendors and third-party AI services.
1. Data Handling Practices
Evaluate how the AI vendor handles, stores, and protects your organization's data.
Evaluation Information
| Field | Value |
|---|---|
| Vendor Name | ________________________ |
| Product / Service | ________________________ |
| Evaluator | [ROLE TITLE], [DEPARTMENT] |
| Evaluation Date | [DATE] |
| Vendor Contact | ________________________ |
Data Handling Assessment
Score each item from 0 to 5:
- 0 — Unacceptable: Vendor does not meet minimum requirements; disqualifying
- 1 — Poor: Significant gaps that require substantial remediation
- 2 — Below Average: Notable gaps that require vendor commitment to remediate
- 3 — Adequate: Meets minimum requirements but has room for improvement
- 4 — Good: Exceeds minimum requirements with minor improvement areas
- 5 — Excellent: Industry-leading practices with comprehensive controls
| # | Criterion | Score (0-5) | Evidence / Notes |
|---|---|---|---|
| DH1 | Does the vendor commit to zero data retention (prompts and responses are not stored beyond the session)? | | |
| DH2 | Does the vendor contractually guarantee that customer data is never used for model training or fine-tuning? | | |
| DH3 | Does the vendor encrypt data in transit (TLS 1.2+) and at rest (AES-256 or equivalent)? | | |
| DH4 | Does the vendor provide data residency options that meet [ORGANIZATION NAME]'s geographic requirements? | | |
| DH5 | Does the vendor support customer-managed encryption keys (CMEK/BYOK)? | | |
| DH6 | Does the vendor have documented data deletion procedures with verifiable deletion confirmation? | | |
| DH7 | Does the vendor's data processing agreement meet GDPR, CCPA, and other applicable requirements? | | |
| DH8 | Does the vendor limit sub-processor access to customer data with contractual flow-down provisions? | | |
Section Score: ___ / 40
Disqualifying Criteria: A score of 0 on DH1, DH2, or DH7 is disqualifying. The vendor must remediate before evaluation can proceed.
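Section scoring and the disqualifier check can be automated. The sketch below is illustrative (the function name and data shape are assumptions, not part of the rubric); it sums one section's 0-5 item scores and flags a disqualifying 0 on DH1, DH2, or DH7:

```python
# Illustrative helper for scoring one rubric section.
# `scores` maps item IDs (e.g. "DH1") to an integer 0-5.

DISQUALIFYING_ITEMS = {"DH1", "DH2", "DH7"}  # per the rubric's disqualifying criteria

def section_score(scores: dict[str, int]) -> tuple[int, bool]:
    """Return (section total, disqualified) for one section's item scores."""
    for item, score in scores.items():
        if not 0 <= score <= 5:
            raise ValueError(f"{item}: score must be 0-5, got {score}")
    total = sum(scores.values())
    disqualified = any(scores.get(item) == 0 for item in DISQUALIFYING_ITEMS)
    return total, disqualified

# Example: DH2 scored 0, so the vendor is disqualified regardless of total.
total, dq = section_score({"DH1": 4, "DH2": 0, "DH3": 5, "DH4": 3,
                           "DH5": 2, "DH6": 4, "DH7": 3, "DH8": 4})
```

Note that disqualification is checked per item, not against the section total: a vendor can score 25/40 overall and still be disqualified by a single 0.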
2. Model Training Transparency
Assess the vendor's transparency about model training data, methods, and potential biases.
Training Transparency Assessment
| # | Criterion | Score (0-5) | Evidence / Notes |
|---|---|---|---|
| MT1 | Does the vendor disclose the general categories of training data used (web data, licensed data, synthetic data)? | | |
| MT2 | Does the vendor document known biases in the model and provide guidance on mitigation? | | |
| MT3 | Does the vendor publish a model card or equivalent transparency document for each model version? | | |
| MT4 | Does the vendor provide information about the model's knowledge cutoff date and limitations? | | |
| MT5 | Does the vendor disclose whether the model has been trained on data from specific regulated domains (healthcare, finance, legal)? | | |
| MT6 | Does the vendor have a responsible AI program with published principles, practices, and governance structure? | | |
| MT7 | Does the vendor provide tools or APIs for customers to evaluate model performance on their specific use cases (benchmarking, evaluation frameworks)? | | |
| MT8 | Does the vendor commit to advance notice of material model changes (architecture changes, training data changes, capability modifications)? | | |
Section Score: ___ / 40
Red Flags
The following vendor behaviors should be flagged for additional scrutiny:
- Vendor refuses to disclose any information about training data sources
- Vendor's terms of service include broad rights to use customer interactions for any purpose
- Vendor has been subject to litigation related to training data or intellectual property
- Vendor does not differentiate between model versions or provide version pinning
- Vendor's responsible AI documentation is generic marketing content with no substantive commitments
Transparency Trend
If this is a recurring evaluation, note the trend:
| Assessment Period | Score | Trend |
|---|---|---|
| Previous: [DATE] | ___ / 40 | |
| Current: [DATE] | ___ / 40 | ☐ Improving ☐ Stable ☐ Declining |
3. Security Controls
Evaluate the vendor's security controls, certifications, and incident response capabilities.
Security Controls Assessment
| # | Criterion | Score (0-5) | Evidence / Notes |
|---|---|---|---|
| SC1 | Does the vendor hold relevant security certifications (SOC 2 Type II, ISO 27001, FedRAMP)? | ||
| SC2 | Does the vendor have documented prompt injection defenses (input validation, output filtering, system prompt protection)? | ||
| SC3 | Does the vendor support role-based access control and integration with enterprise identity providers (SAML, OIDC)? | ||
| SC4 | Does the vendor provide comprehensive API logging and audit trail capabilities? | ||
| SC5 | Does the vendor have a vulnerability disclosure and management program with defined SLAs for patching? | ||
| SC6 | Does the vendor conduct regular penetration testing and AI-specific red teaming, and share results or summaries with customers? | ||
| SC7 | Does the vendor have a documented incident response plan with customer notification commitments? | ||
| SC8 | Does the vendor support rate limiting, abuse detection, and usage anomaly alerting? | ||
| SC9 | Does the vendor provide network isolation options (VPC, private endpoints, IP allowlisting)? | ||
| SC10 | Does the vendor have a secure software development lifecycle (SDLC) for their AI platform? |
Section Score: ___ / 50
Certification Verification
| Certification | Claimed | Verified | Expiry Date | Report Reviewed |
|---|---|---|---|---|
| SOC 2 Type II | ☐ Yes ☐ No | ☐ Yes ☐ No | [DATE] | ☐ Yes ☐ No |
| ISO 27001 | ☐ Yes ☐ No | ☐ Yes ☐ No | [DATE] | ☐ Yes ☐ No |
| ISO 42001 | ☐ Yes ☐ No | ☐ Yes ☐ No | [DATE] | ☐ Yes ☐ No |
| FedRAMP | ☐ Yes ☐ No | ☐ Yes ☐ No | [DATE] | ☐ Yes ☐ No |
| HIPAA BAA Available | ☐ Yes ☐ No | ☐ Yes ☐ No | N/A | ☐ Yes ☐ No |
| PCI-DSS | ☐ Yes ☐ No | ☐ Yes ☐ No | [DATE] | ☐ Yes ☐ No |
4. Contractual Requirements
Evaluate the vendor's contractual commitments and terms of service.
Contractual Assessment
| # | Criterion | Score (0-5) | Evidence / Notes |
|---|---|---|---|
| CR1 | Does the vendor offer a Data Processing Agreement (DPA) that meets regulatory requirements and organizational standards? | | |
| CR2 | Does the contract explicitly prohibit the vendor from using customer data for model training or improvement without opt-in consent? | | |
| CR3 | Does the contract include service level agreements (SLAs) for availability, latency, and throughput? | | |
| CR4 | Does the contract include breach notification commitments with defined timelines (e.g., 72 hours or less)? | | |
| CR5 | Does the contract provide for data portability and return upon termination? | | |
| CR6 | Does the contract grant audit rights (or provide access to third-party audit reports)? | | |
| CR7 | Does the contract include liability provisions and indemnification for AI-related harms (e.g., IP infringement, data breaches)? | | |
| CR8 | Does the contract include provisions for subcontractor/sub-processor management with flow-down of security and privacy requirements? | | |
| CR9 | Does the contract allow for termination for cause if the vendor fails to meet security or privacy obligations? | | |
| CR10 | Does the contract include provisions for price predictability and protection against unexpected cost increases? | | |
Section Score: ___ / 50
Key Contractual Terms Review
| Term | Acceptable | Needs Negotiation | Unacceptable |
|---|---|---|---|
| Data retention policy | ☐ | ☐ | ☐ |
| Training data usage rights | ☐ | ☐ | ☐ |
| Breach notification timeline | ☐ | ☐ | ☐ |
| Limitation of liability | ☐ | ☐ | ☐ |
| IP ownership of outputs | ☐ | ☐ | ☐ |
| Termination provisions | ☐ | ☐ | ☐ |
| Price change provisions | ☐ | ☐ | ☐ |
Legal Review Completed: ☐ Yes ☐ No
Legal Reviewer: [ROLE TITLE]
Legal Review Date: [DATE]
5. Scoring Matrix
Calculate the overall vendor score and determine the evaluation outcome.
Score Summary
| Section | Score | Max | Percentage | Weight | Weighted Score |
|---|---|---|---|---|---|
| Data Handling Practices | ___ | 40 | ___% | 30% | ___ |
| Model Training Transparency | ___ | 40 | ___% | 15% | ___ |
| Security Controls | ___ | 50 | ___% | 30% | ___ |
| Contractual Requirements | ___ | 50 | ___% | 25% | ___ |
| Weighted Total | | | | 100% | ___ |
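The weighted total is each section's percentage (raw score over section maximum) multiplied by its weight, summed across sections. A minimal sketch, using the maxima and weights from the Score Summary table (the function name and dictionary layout are illustrative):

```python
# Section maxima and weights from the Score Summary table (weights sum to 100%).
SECTIONS = {
    "Data Handling Practices":     {"max": 40, "weight": 0.30},
    "Model Training Transparency": {"max": 40, "weight": 0.15},
    "Security Controls":           {"max": 50, "weight": 0.30},
    "Contractual Requirements":    {"max": 50, "weight": 0.25},
}

def weighted_total(raw_scores: dict[str, int]) -> float:
    """Return the weighted total as a percentage (0-100)."""
    assert abs(sum(s["weight"] for s in SECTIONS.values()) - 1.0) < 1e-9
    return sum(
        (raw_scores[name] / cfg["max"]) * cfg["weight"] * 100
        for name, cfg in SECTIONS.items()
    )

# Example: 32/40, 28/40, 41/50, 40/50 gives roughly 79.1%.
score = weighted_total({
    "Data Handling Practices": 32,
    "Model Training Transparency": 28,
    "Security Controls": 41,
    "Contractual Requirements": 40,
})
```

In the example, strong security and contractual scores are not enough to offset the weaker transparency score, landing the vendor in the 70-84% band.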
Evaluation Outcome
| Weighted Score | Outcome | Action |
|---|---|---|
| ≥ 85% | Approved | Vendor may be onboarded following standard procurement process |
| 70-84% | Conditionally Approved | Vendor may be onboarded with documented compensating controls and a remediation plan |
| 50-69% | Not Recommended | Vendor does not meet minimum standards; onboarding requires CISO exception approval with documented risk acceptance |
| < 50% | Rejected | Vendor is disqualified; no exception process available |
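The outcome bands above reduce to a simple threshold lookup (the function name is illustrative). Note that this lookup applies only after the disqualifying criteria check: a 0 on DH1, DH2, or DH7 overrides any weighted score.

```python
def evaluation_outcome(weighted_pct: float) -> str:
    """Map a weighted total (0-100) to the rubric's outcome band."""
    if weighted_pct >= 85:
        return "Approved"
    if weighted_pct >= 70:
        return "Conditionally Approved"
    if weighted_pct >= 50:
        return "Not Recommended"
    return "Rejected"
```

Using half-open thresholds (>= 85, >= 70, >= 50) ensures fractional scores such as 84.5% fall unambiguously into a band.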
Conditional Approval Requirements
If the vendor receives a "Conditionally Approved" outcome, document the conditions:
| Condition | Owner | Due Date | Status |
|---|---|---|---|
| | [ROLE TITLE] | [DATE] | ☐ Open ☐ Resolved |
| | [ROLE TITLE] | [DATE] | ☐ Open ☐ Resolved |
| | [ROLE TITLE] | [DATE] | ☐ Open ☐ Resolved |
Comparative Analysis
If multiple vendors are being evaluated for the same use case:
| Criterion | Vendor A | Vendor B | Vendor C |
|---|---|---|---|
| Data Handling | ___% | ___% | ___% |
| Transparency | ___% | ___% | ___% |
| Security | ___% | ___% | ___% |
| Contractual | ___% | ___% | ___% |
| Weighted Total | ___% | ___% | ___% |
| Outcome | | | |
Approval Record
| Field | Value |
|---|---|
| Evaluation Outcome | ☐ Approved ☐ Conditionally Approved ☐ Not Recommended ☐ Rejected |
| Evaluator | [ROLE TITLE], [DEPARTMENT] |
| Evaluation Date | [DATE] |
| Approved By (Security) | [ROLE TITLE] |
| Approved By (Legal) | [ROLE TITLE] |
| Approved By (Business) | [ROLE TITLE] |
| Next Re-evaluation Due | [DATE] (12 months from approval) |
Note: Vendor evaluations must be refreshed annually, or immediately upon notification of a material change in the vendor's security posture, data practices, or ownership.