Quick Reference Guide

How to Safely Implement AI in Your Organization

A practical, step-by-step roadmap for deploying AI responsibly. Each phase maps to Fairfax International framework domains and links to specific controls you can implement today.

Phase 1: Establish Governance Phase 2: Protect Your Data Phase 3: Secure Your Models Phase 4: Control Your Applications Phase 5: Test and Monitor Phase 6: Manage Infrastructure

Where to Start

You do not need to implement all six phases simultaneously. Start where your risk is highest.

Week 1 Foundation
  • Publish your AI Acceptable Use Policy (AI-GOV-01)
  • Inventory all AI tools currently in use (AI-INF-01)
  • Classify data permitted for AI submission (AI-DAT-01)
Month 1 Critical Controls
  • Deploy prompt injection defenses (AI-MOD-01)
  • Implement output sanitization (AI-MOD-02)
  • Set agent permission boundaries (AI-APP-02)
  • Establish AI monitoring baselines (AI-ASR-02)
Quarter 1 Full Program
  • Conduct first AI red team exercise (AI-ASR-01)
  • Complete AI vendor evaluations (AI-INF-02)
  • Implement deployment validation gates (AI-GOV-02)
  • Deploy PII minimization controls (AI-DAT-03)
  • Establish AI incident response procedures (AI-ASR-03)
Ongoing Maturity
  • Conduct periodic risk self-assessments (AI-GOV-03)
  • Review and update acceptable use policy annually
  • Refresh AI asset inventory quarterly
  • Run red team exercises semi-annually
1

Establish Governance

governance

Before deploying any AI system, establish the policies, approval workflows, and risk boundaries that will guide every decision that follows.

Publish an AI Acceptable Use Policy

Define what AI tools are approved, what activities are prohibited, and what data handling rules apply. This is the single source of truth for AI usage.

AI-GOV-01: AI Acceptable Use Policy →

Define Deployment Validation Gates

Require a validation checklist and formal approval before any AI application reaches production. No exceptions.

AI-GOV-02: Deployment Validation →

Conduct AI Risk Self-Assessment

Periodically assess your organization's AI risk posture and maturity level. Identify gaps before regulators or adversaries find them.

AI-GOV-03: AI Risk Self-Assessment →
Mapped frameworks: ISO 42001 Cl.5-6, A.2-A.3 NIST GV-1 to GV-4
2

Protect Your Data

data

AI systems process, retain, and generate data in ways that differ fundamentally from traditional applications. Classify, restrict, and monitor data flows before they become liabilities.

Classify Data Before AI Submission

Establish clear rules for what data employees and systems can submit to AI tools. Distinguish between public, internal, and restricted data.

AI-DAT-01: Data Input Governance →

Manage AI Interaction Retention

Define retention periods and privacy controls for prompts, responses, and conversation logs. Ensure compliance with GDPR, CCPA, and sector-specific regulations.

AI-DAT-02: Interaction Data Privacy →

Minimize PII in AI Outputs

Monitor and mitigate the risk of AI tools inadvertently disclosing or inferring sensitive PII in their responses.

AI-DAT-03: PII Minimization →
Mapped frameworks: ISO 42001 A.8 NIST MP-2, MP-4 OWASP LLM06
3

Secure Your Models

model

AI models are attack surfaces. Prompt injection, output manipulation, cost exhaustion, and system prompt extraction require purpose-built defenses that traditional security controls do not cover.

Defend Against Adversarial Inputs

Implement input validation, filtering, and sanitization to defend against prompt injection and evasion attacks.

AI-MOD-01: Adversarial Input Defense →

Sanitize Model Outputs

Filter and sanitize all AI outputs before they reach users or downstream systems. Prevent execution of malicious content and data leakage.

AI-MOD-02: Output Sanitization →

Restrict Excessive Usage

Implement rate limiting, cost caps, and anomaly detection to prevent Denial of Wallet attacks and model abuse.

AI-MOD-03: Query Restriction & Cost →

Protect System Prompts

Harden system-level instructions against extraction. Never embed credentials or secrets in prompts.

AI-MOD-04: System Prompt Protection →

Mitigate Hallucinations

Implement grounding, retrieval augmentation, and confidence scoring to reduce misinformation in high-stakes use cases.

AI-MOD-05: Output Reliability →
Mapped frameworks: ISO 42001 A.9 NIST AI 600-1 OWASP LLM01, LLM07, LLM10
4

Control Your Applications

application

AI agents, plugins, and integrations extend your attack surface beyond the model itself. Apply transparency, least-privilege, observability, and supply chain governance to every AI-powered application.

Disclose AI Interactions

Inform users when they are interacting with AI. Disclose relevant limitations, confidence levels, and risks.

AI-APP-01: Transparency & Disclosure →

Restrict Agent Permissions

Apply least-privilege principles to AI agents. Restrict what tools, data, and systems they can access autonomously.

AI-APP-02: Agent Permissions →

Log All Agent Actions

Ensure all agent actions, reasoning chains, tool calls, and decisions are logged in an auditable format.

AI-APP-03: Agent Observability →

Govern the AI Supply Chain

Inventory and evaluate all third-party AI components — models, plugins, APIs, and embeddings — before adoption.

AI-APP-04: Supply Chain Security →
Mapped frameworks: ISO 42001 A.5, A.7 NIST GV-6, MG-1 OWASP Agentic Top 10
5

Test and Monitor

assurance

AI systems degrade silently and face evolving threats. Red teaming, continuous monitoring, and AI-specific incident response ensure you detect and respond to problems before they become breaches.

Red Team Your AI Systems

Conduct adversarial testing including jailbreaking, prompt injection, data extraction, and privilege escalation attempts.

AI-ASR-01: Red Teaming →

Monitor Continuously

Deploy monitoring for shadow AI, anomalous usage patterns, policy violations, and behavioral drift across all AI systems.

AI-ASR-02: Continuous Monitoring →

Prepare AI Incident Response

Update your IR plans with AI-specific scenarios: prompt injection attacks, RAG poisoning, model manipulation, and data exfiltration via AI outputs.

AI-ASR-03: Incident Response →
Mapped frameworks: ISO 42001 Cl.9-10 NIST MS-1 to MS-4 OWASP LLM01, LLM04
6

Manage Infrastructure

infrastructure

You cannot govern what you do not know exists. Maintain a complete inventory of AI assets and evaluate every vendor before trusting them with your data.

Inventory All AI Assets

Catalog every AI tool, API, model, and shadow AI asset in use across the organization. Include sanctioned and unsanctioned usage.

AI-INF-01: AI Asset Inventory →

Evaluate AI Vendors

Assess AI vendors for specific risks: data training practices, security posture, contractual protections, and compliance certifications.

AI-INF-02: Vendor Evaluation →
Mapped frameworks: ISO 42001 A.4, A.10 NIST GV-6, MG-4 OWASP LLM05

Dive Deeper

This guide provides the starting path. Each linked control contains full implementation guidance, code examples, evidence requirements, and audit checklists.

Cross-Reference Matrix → Search All Controls →