AI Vendor Security Evaluation Rubric


Purpose

Scoring rubric for evaluating the security, transparency, and contractual posture of AI vendors and third-party AI services.

Related Controls

ISO A.5, NIST GV-5, OWASP LLM05, OWASP ASI04

1. Data Handling Practices

Evaluate how the AI vendor handles, stores, and protects your organization's data.

Evaluation Information

| Field | Value |
|---|---|
| Vendor Name | ________________________ |
| Product / Service | ________________________ |
| Evaluator | [ROLE TITLE], [DEPARTMENT] |
| Evaluation Date | [DATE] |
| Vendor Contact | ________________________ |

Data Handling Assessment

Score each item from 0 to 5:

  • 0 — Unacceptable: Vendor does not meet minimum requirements; disqualifying
  • 1 — Poor: Significant gaps that require substantial remediation
  • 2 — Below Average: Notable gaps that require vendor commitment to remediate
  • 3 — Adequate: Meets minimum requirements but has room for improvement
  • 4 — Good: Exceeds minimum requirements with minor improvement areas
  • 5 — Excellent: Industry-leading practices with comprehensive controls
| # | Criterion | Score (0-5) | Evidence / Notes |
|---|---|---|---|
| DH1 | Does the vendor commit to zero data retention (prompts and responses are not stored beyond the session)? | | |
| DH2 | Does the vendor contractually guarantee that customer data is never used for model training or fine-tuning? | | |
| DH3 | Does the vendor encrypt data in transit (TLS 1.2+) and at rest (AES-256 or equivalent)? | | |
| DH4 | Does the vendor provide data residency options that meet [ORGANIZATION NAME]'s geographic requirements? | | |
| DH5 | Does the vendor support customer-managed encryption keys (CMEK/BYOK)? | | |
| DH6 | Does the vendor have documented data deletion procedures with verifiable deletion confirmation? | | |
| DH7 | Does the vendor's data processing agreement meet GDPR, CCPA, and other applicable requirements? | | |
| DH8 | Does the vendor limit sub-processor access to customer data with contractual flow-down provisions? | | |

Section Score: ___ / 40

Disqualifying Criteria: A score of 0 on DH1, DH2, or DH7 is disqualifying. The vendor must remediate before evaluation can proceed.

2. Model Training Transparency

Assess the vendor's transparency about model training data, methods, and potential biases.

Training Transparency Assessment

| # | Criterion | Score (0-5) | Evidence / Notes |
|---|---|---|---|
| MT1 | Does the vendor disclose the general categories of training data used (web data, licensed data, synthetic data)? | | |
| MT2 | Does the vendor document known biases in the model and provide guidance on mitigation? | | |
| MT3 | Does the vendor publish a model card or equivalent transparency document for each model version? | | |
| MT4 | Does the vendor provide information about the model's knowledge cutoff date and limitations? | | |
| MT5 | Does the vendor disclose whether the model has been trained on data from specific regulated domains (healthcare, finance, legal)? | | |
| MT6 | Does the vendor have a responsible AI program with published principles, practices, and governance structure? | | |
| MT7 | Does the vendor provide tools or APIs for customers to evaluate model performance on their specific use cases (benchmarking, evaluation frameworks)? | | |
| MT8 | Does the vendor commit to advance notice of material model changes (architecture changes, training data changes, capability modifications)? | | |

Section Score: ___ / 40

Red Flags

The following vendor behaviors should be flagged for additional scrutiny:

  • Vendor refuses to disclose any information about training data sources
  • Vendor's terms of service include broad rights to use customer interactions for any purpose
  • Vendor has been subject to litigation related to training data or intellectual property
  • Vendor does not differentiate between model versions or provide version pinning
  • Vendor's responsible AI documentation is generic marketing content with no substantive commitments

Transparency Trend

If this is a recurring evaluation, note the trend:

| Assessment Period | Score | Trend |
|---|---|---|
| Previous: [DATE] | ___ / 40 | |
| Current: [DATE] | ___ / 40 | ☐ Improving ☐ Stable ☐ Declining |

3. Security Controls

Evaluate the vendor's security controls, certifications, and incident response capabilities.

Security Controls Assessment

| # | Criterion | Score (0-5) | Evidence / Notes |
|---|---|---|---|
| SC1 | Does the vendor hold relevant security certifications (SOC 2 Type II, ISO 27001, FedRAMP)? | | |
| SC2 | Does the vendor have documented prompt injection defenses (input validation, output filtering, system prompt protection)? | | |
| SC3 | Does the vendor support role-based access control and integration with enterprise identity providers (SAML, OIDC)? | | |
| SC4 | Does the vendor provide comprehensive API logging and audit trail capabilities? | | |
| SC5 | Does the vendor have a vulnerability disclosure and management program with defined SLAs for patching? | | |
| SC6 | Does the vendor conduct regular penetration testing and AI-specific red teaming, and share results or summaries with customers? | | |
| SC7 | Does the vendor have a documented incident response plan with customer notification commitments? | | |
| SC8 | Does the vendor support rate limiting, abuse detection, and usage anomaly alerting? | | |
| SC9 | Does the vendor provide network isolation options (VPC, private endpoints, IP allowlisting)? | | |
| SC10 | Does the vendor have a secure software development lifecycle (SDLC) for their AI platform? | | |

Section Score: ___ / 50

Certification Verification

| Certification | Claimed | Verified | Expiry Date | Report Reviewed |
|---|---|---|---|---|
| SOC 2 Type II | ☐ Yes ☐ No | ☐ Yes ☐ No | [DATE] | ☐ Yes ☐ No |
| ISO 27001 | ☐ Yes ☐ No | ☐ Yes ☐ No | [DATE] | ☐ Yes ☐ No |
| ISO 42001 | ☐ Yes ☐ No | ☐ Yes ☐ No | [DATE] | ☐ Yes ☐ No |
| FedRAMP | ☐ Yes ☐ No | ☐ Yes ☐ No | [DATE] | ☐ Yes ☐ No |
| HIPAA BAA Available | ☐ Yes ☐ No | ☐ Yes ☐ No | N/A | ☐ Yes ☐ No |
| PCI-DSS | ☐ Yes ☐ No | ☐ Yes ☐ No | [DATE] | ☐ Yes ☐ No |

4. Contractual Requirements

Evaluate the vendor's contractual commitments and terms of service.

Contractual Assessment

| # | Criterion | Score (0-5) | Evidence / Notes |
|---|---|---|---|
| CR1 | Does the vendor offer a Data Processing Agreement (DPA) that meets regulatory requirements and organizational standards? | | |
| CR2 | Does the contract explicitly prohibit the vendor from using customer data for model training or improvement without opt-in consent? | | |
| CR3 | Does the contract include service level agreements (SLAs) for availability, latency, and throughput? | | |
| CR4 | Does the contract include breach notification commitments with defined timelines (e.g., 72 hours or less)? | | |
| CR5 | Does the contract provide for data portability and return upon termination? | | |
| CR6 | Does the contract grant audit rights (or provide access to third-party audit reports)? | | |
| CR7 | Does the contract include liability provisions and indemnification for AI-related harms (e.g., IP infringement, data breaches)? | | |
| CR8 | Does the contract include provisions for subcontractor/sub-processor management with flow-down of security and privacy requirements? | | |
| CR9 | Does the contract allow for termination for cause if the vendor fails to meet security or privacy obligations? | | |
| CR10 | Does the contract include provisions for price predictability and protection against unexpected cost increases? | | |

Section Score: ___ / 50

Key Contractual Terms Review

| Term | Acceptable | Needs Negotiation | Unacceptable |
|---|---|---|---|
| Data retention policy | ☐ | ☐ | ☐ |
| Training data usage rights | ☐ | ☐ | ☐ |
| Breach notification timeline | ☐ | ☐ | ☐ |
| Limitation of liability | ☐ | ☐ | ☐ |
| IP ownership of outputs | ☐ | ☐ | ☐ |
| Termination provisions | ☐ | ☐ | ☐ |
| Price change provisions | ☐ | ☐ | ☐ |

Legal Review Completed: ☐ Yes ☐ No

Legal Reviewer: [ROLE TITLE]

Legal Review Date: [DATE]

5. Scoring Matrix

Calculate the overall vendor score and determine the evaluation outcome.

Score Summary

| Section | Score | Max | Percentage | Weight | Weighted Score |
|---|---|---|---|---|---|
| Data Handling Practices | ___ | 40 | ___% | 30% | ___ |
| Model Training Transparency | ___ | 40 | ___% | 15% | ___ |
| Security Controls | ___ | 50 | ___% | 30% | ___ |
| Contractual Requirements | ___ | 50 | ___% | 25% | ___ |
| Weighted Total | | | | 100% | ___ |

Evaluation Outcome

| Weighted Score | Outcome | Action |
|---|---|---|
| ≥ 85% | Approved | Vendor may be onboarded following standard procurement process |
| 70-84% | Conditionally Approved | Vendor may be onboarded with documented compensating controls and a remediation plan |
| 50-69% | Not Recommended | Vendor does not meet minimum standards; onboarding requires CISO exception approval with documented risk acceptance |
| < 50% | Rejected | Vendor is disqualified; no exception process available |
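The weighted-total arithmetic above can be automated. The sketch below is a minimal, hypothetical helper (not part of the rubric itself): it normalizes each section score against its maximum, applies the section weights, and maps the weighted percentage to the outcome bands defined in this section. Section names, maximums, weights, and thresholds mirror the tables above; the raw scores in the example are illustrative placeholders.

```python
# Section maximums and weights, as defined in the Score Summary table.
SECTIONS = {
    "Data Handling Practices": (40, 0.30),
    "Model Training Transparency": (40, 0.15),
    "Security Controls": (50, 0.30),
    "Contractual Requirements": (50, 0.25),
}

def weighted_total(raw_scores: dict) -> float:
    """Return the weighted total as a percentage (0-100)."""
    total = 0.0
    for name, (max_score, weight) in SECTIONS.items():
        # Normalize the section score, then apply its weight.
        total += (raw_scores[name] / max_score) * weight * 100
    return total

def outcome(weighted_pct: float) -> str:
    """Map a weighted percentage to the rubric's evaluation outcome."""
    if weighted_pct >= 85:
        return "Approved"
    if weighted_pct >= 70:
        return "Conditionally Approved"
    if weighted_pct >= 50:
        return "Not Recommended"
    return "Rejected"

# Illustrative example scores (placeholders, not a real evaluation):
scores = {
    "Data Handling Practices": 34,
    "Model Training Transparency": 28,
    "Security Controls": 41,
    "Contractual Requirements": 39,
}
pct = weighted_total(scores)
print(f"{pct:.1f}% -> {outcome(pct)}")  # 80.1% -> Conditionally Approved
```

Note that the disqualifying criteria (a score of 0 on DH1, DH2, or DH7) override the numeric outcome and should be checked before any weighted total is computed.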

Conditional Approval Requirements

If the vendor receives a "Conditionally Approved" outcome, document the conditions:

| Condition | Owner | Due Date | Status |
|---|---|---|---|
| | [ROLE TITLE] | [DATE] | ☐ Open ☐ Resolved |
| | [ROLE TITLE] | [DATE] | ☐ Open ☐ Resolved |
| | [ROLE TITLE] | [DATE] | ☐ Open ☐ Resolved |

Comparative Analysis

If multiple vendors are being evaluated for the same use case:

| Criterion | Vendor A | Vendor B | Vendor C |
|---|---|---|---|
| Data Handling | ___% | ___% | ___% |
| Transparency | ___% | ___% | ___% |
| Security | ___% | ___% | ___% |
| Contractual | ___% | ___% | ___% |
| Weighted Total | ___% | ___% | ___% |
| Outcome | | | |

Approval Record

| Field | Value |
|---|---|
| Evaluation Outcome | ☐ Approved ☐ Conditionally Approved ☐ Not Recommended ☐ Rejected |
| Evaluator | [ROLE TITLE], [DEPARTMENT] |
| Evaluation Date | [DATE] |
| Approved By (Security) | [ROLE TITLE] |
| Approved By (Legal) | [ROLE TITLE] |
| Approved By (Business) | [ROLE TITLE] |
| Next Re-evaluation Due | [DATE] (12 months from approval) |

Note: Vendor evaluations must be refreshed annually, or immediately upon notification of a material change in the vendor's security posture, data practices, or ownership.
