Quick Reference Guide
How to Safely Implement AI in Your Organization
A practical, step-by-step roadmap for deploying AI responsibly. Each phase maps to Fairfax International framework domains and links to specific controls you can implement today.
Where to Start
You do not need to implement all six phases simultaneously. Start where your risk is highest.
- Publish your AI Acceptable Use Policy (AI-GOV-01)
- Inventory all AI tools currently in use (AI-INF-01)
- Classify data permitted for AI submission (AI-DAT-01)
- Deploy prompt injection defenses (AI-MOD-01)
- Implement output sanitization (AI-MOD-02)
- Set agent permission boundaries (AI-APP-02)
- Establish AI monitoring baselines (AI-ASR-02)
- Conduct first AI red team exercise (AI-ASR-01)
- Complete AI vendor evaluations (AI-INF-02)
- Implement deployment validation gates (AI-GOV-02)
- Deploy PII minimization controls (AI-DAT-03)
- Establish AI incident response procedures (AI-ASR-03)
- Conduct periodic risk self-assessments (AI-GOV-03)
- Review and update acceptable use policy annually
- Refresh AI asset inventory quarterly
- Run red team exercises semi-annually
Establish Governance
governanceBefore deploying any AI system, establish the policies, approval workflows, and risk boundaries that will guide every decision that follows.
Publish an AI Acceptable Use Policy
Define what AI tools are approved, what activities are prohibited, and what data handling rules apply. This is the single source of truth for AI usage.
Define Deployment Validation Gates
Require a validation checklist and formal approval before any AI application reaches production. No exceptions.
Conduct AI Risk Self-Assessment
Periodically assess your organization's AI risk posture and maturity level. Identify gaps before regulators or adversaries find them.
Protect Your Data
dataAI systems process, retain, and generate data in ways that differ fundamentally from traditional applications. Classify, restrict, and monitor data flows before they become liabilities.
Classify Data Before AI Submission
Establish clear rules for what data employees and systems can submit to AI tools. Distinguish between public, internal, and restricted data.
Manage AI Interaction Retention
Define retention periods and privacy controls for prompts, responses, and conversation logs. Ensure compliance with GDPR, CCPA, and sector-specific regulations.
Minimize PII in AI Outputs
Monitor and mitigate the risk of AI tools inadvertently disclosing or inferring sensitive PII in their responses.
Secure Your Models
modelAI models are attack surfaces. Prompt injection, output manipulation, cost exhaustion, and system prompt extraction require purpose-built defenses that traditional security controls do not cover.
Defend Against Adversarial Inputs
Implement input validation, filtering, and sanitization to defend against prompt injection and evasion attacks.
Sanitize Model Outputs
Filter and sanitize all AI outputs before they reach users or downstream systems. Prevent execution of malicious content and data leakage.
Restrict Excessive Usage
Implement rate limiting, cost caps, and anomaly detection to prevent Denial of Wallet attacks and model abuse.
Protect System Prompts
Harden system-level instructions against extraction. Never embed credentials or secrets in prompts.
Mitigate Hallucinations
Implement grounding, retrieval augmentation, and confidence scoring to reduce misinformation in high-stakes use cases.
Control Your Applications
applicationAI agents, plugins, and integrations extend your attack surface beyond the model itself. Apply transparency, least-privilege, observability, and supply chain governance to every AI-powered application.
Disclose AI Interactions
Inform users when they are interacting with AI. Disclose relevant limitations, confidence levels, and risks.
Restrict Agent Permissions
Apply least-privilege principles to AI agents. Restrict what tools, data, and systems they can access autonomously.
Log All Agent Actions
Ensure all agent actions, reasoning chains, tool calls, and decisions are logged in an auditable format.
Govern the AI Supply Chain
Inventory and evaluate all third-party AI components — models, plugins, APIs, and embeddings — before adoption.
Test and Monitor
assuranceAI systems degrade silently and face evolving threats. Red teaming, continuous monitoring, and AI-specific incident response ensure you detect and respond to problems before they become breaches.
Red Team Your AI Systems
Conduct adversarial testing including jailbreaking, prompt injection, data extraction, and privilege escalation attempts.
Monitor Continuously
Deploy monitoring for shadow AI, anomalous usage patterns, policy violations, and behavioral drift across all AI systems.
Prepare AI Incident Response
Update your IR plans with AI-specific scenarios: prompt injection attacks, RAG poisoning, model manipulation, and data exfiltration via AI outputs.
Manage Infrastructure
infrastructureYou cannot govern what you do not know exists. Maintain a complete inventory of AI assets and evaluate every vendor before trusting them with your data.
Inventory All AI Assets
Catalog every AI tool, API, model, and shadow AI asset in use across the organization. Include sanctioned and unsanctioned usage.
Evaluate AI Vendors
Assess AI vendors for specific risks: data training practices, security posture, contractual protections, and compliance certifications.
Dive Deeper
This guide provides the starting path. Each linked control contains full implementation guidance, code examples, evidence requirements, and audit checklists.