AI Acceptable Use Policy

Policy · Governance

Purpose

Defines approved AI tools, prohibited activities, and data handling rules for all personnel interacting with AI systems.

Related Controls

ISO A.2, NIST GV-1, OWASP LLM09

1. Purpose & Scope

State the policy objective, who it applies to, and which AI systems are covered.

Purpose

This policy establishes binding requirements for the acceptable use of artificial intelligence systems and tools within [ORGANIZATION NAME]. It exists to protect organizational assets, maintain regulatory compliance, and ensure responsible AI adoption.

Scope

This policy applies to:

  • All employees, contractors, consultants, and third-party personnel
  • All AI technologies including large language models (LLMs), machine learning models, AI-powered code generation tools, automated decision systems, and agentic AI architectures
  • Systems developed internally, procured from vendors, or accessed via API
  • Both production and non-production environments
| Field | Value |
| --- | --- |
| Effective Date | [DATE] |
| Policy Owner | [ROLE TITLE], [DEPARTMENT] |
| Classification | Internal |
| Review Cycle | Annual, or triggered by significant AI incident |
| Approved By | [ROLE TITLE] |

Applicability

This policy applies regardless of whether the AI system is accessed through a web interface, API, embedded integration, browser extension, or command-line tool. Shadow AI — any AI tool adopted without IT or security approval — is explicitly within scope and subject to all provisions herein.

Departments with sector-specific regulatory obligations (e.g., healthcare, financial services, legal) must layer additional controls on top of this policy as required by their regulatory frameworks.

2. Permitted Use Cases

List the AI use cases approved for your organization, categorized by risk level with required approvals.

Pre-Approved Use Cases (Low Risk)

The following AI activities are approved for all personnel who have completed AI awareness training:

  • Code generation assistance with mandatory human review before merge
  • Drafting internal communications, meeting summaries, and documentation
  • Data analysis and visualization on non-regulated, non-confidential datasets
  • Internal knowledge base search and summarization
  • Test case generation and code documentation
  • Translation of internal documents between supported languages

Manager Approval Required (Medium Risk)

The following require documented approval from the individual's direct manager and a completed mini risk assessment:

  • Customer-facing content generation with editorial review prior to publication
  • AI-assisted data analysis on datasets containing personally identifiable information (PII)
  • Automated report generation for compliance or regulatory purposes
  • AI-powered monitoring, alerting, or triage systems
  • Integration of AI capabilities into internal tools or workflows

Executive and Security Approval Required (High Risk)

The following require approval from both the CISO (or delegate) and a business unit executive:

  • Automated decision-making affecting individuals (hiring, credit, access control, benefits)
  • Processing of regulated data (HIPAA, PCI-DSS, GDPR-covered data)
  • Customer-facing AI chatbots, virtual agents, or autonomous responders
  • AI systems integrated into critical infrastructure or safety-critical processes
  • Fine-tuning or training models on proprietary organizational data

Approval Record

All medium and high-risk approvals must be documented using the AI Use Case Approval Form and retained for a minimum of three years. The AI Governance Committee maintains a register of all approved use cases.

3. Prohibited Activities

List activities that are forbidden under all circumstances, with no exception process.

Absolute Prohibitions

The following activities are strictly prohibited and constitute a policy violation regardless of intent, business justification, or outcome:

  1. Data Exfiltration via AI: Submitting proprietary source code, trade secrets, customer data, credentials, API keys, or confidential business information to any AI service not explicitly approved by the security team
  2. Deceptive Content: Using AI to generate, modify, or distribute misleading, fraudulent, or deceptive content — including deepfakes, fabricated evidence, or impersonation
  3. Security Bypass: Attempting to bypass security controls, content filters, access restrictions, or rate limits on AI systems — whether internal or third-party
  4. Unsupervised Decisions: Using AI outputs as final decisions without human review in any regulated domain, including employment, lending, insurance, healthcare, or law enforcement
  5. Credential Exposure: Sharing API keys, service account credentials, authentication tokens, or session identifiers with AI systems not approved by the security team
  6. Unauthorized Training: Training or fine-tuning models on customer data, employee data, or any regulated dataset without a signed data processing agreement and legal review
  7. Unvalidated Deployment: Deploying AI models to production without completing the AI Deployment Validation Checklist and obtaining all required sign-offs
  8. Surveillance: Using AI for employee surveillance, behavioral profiling, or customer monitoring without explicit legal authorization and privacy impact assessment
  9. Autonomous Weapons or Harm: Using AI to develop, enhance, or deploy any system designed to cause physical harm, enable weapons systems, or facilitate illegal activities

Reporting Violations

Any personnel who become aware of a violation of these prohibitions must report it to [ROLE TITLE] within 24 hours. Reports may be made through the organization's ethics hotline, direct communication with the AI Governance Committee, or the incident reporting system.

Retaliation against individuals who report violations in good faith is itself a policy violation subject to disciplinary action.

4. Data Handling Requirements

Define how data classification levels interact with AI system permissions.

Data Classification Matrix for AI

| Classification | Public AI Tools | Enterprise AI (Approved) | On-Premises AI | Requirements |
| --- | --- | --- | --- | --- |
| Public | Permitted | Permitted | Permitted | None |
| Internal | Prohibited | Permitted | Permitted | Audit logging required |
| Confidential | Prohibited | Conditional | Permitted | DPA required, no vendor retention, logging mandatory |
| Restricted | Prohibited | Prohibited | Case-by-case | CISO + Legal approval, full audit trail |
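Teams automating access checks can express the matrix as a fail-closed lookup. This is a minimal sketch, not part of the policy; the tool-category labels (`public`, `enterprise`, `on_prem`) and function name are illustrative assumptions:

```python
# Illustrative encoding of the data classification matrix.
# Unknown pairings deliberately fail closed to "Prohibited".
MATRIX = {
    ("Public",       "public"):     "Permitted",
    ("Public",       "enterprise"): "Permitted",
    ("Public",       "on_prem"):    "Permitted",
    ("Internal",     "public"):     "Prohibited",
    ("Internal",     "enterprise"): "Permitted",
    ("Internal",     "on_prem"):    "Permitted",
    ("Confidential", "public"):     "Prohibited",
    ("Confidential", "enterprise"): "Conditional",
    ("Confidential", "on_prem"):    "Permitted",
    ("Restricted",   "public"):     "Prohibited",
    ("Restricted",   "enterprise"): "Prohibited",
    ("Restricted",   "on_prem"):    "Case-by-case",
}

def ai_permission(classification: str, tool_category: str) -> str:
    """Return the policy verdict for a classification/tool pairing."""
    return MATRIX.get((classification, tool_category), "Prohibited")
```

Defaulting to "Prohibited" mirrors the policy's posture: anything not explicitly permitted requires review.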

Data Minimization Requirements

All AI interactions must follow the principle of data minimization:

  • Include only the minimum data necessary to accomplish the task
  • Strip PII, credentials, and sensitive identifiers before submitting prompts
  • Use anonymization or pseudonymization techniques when working with datasets
  • Never include full database records when a summary or schema would suffice

Prompt Hygiene

Before submitting any prompt to an AI system, personnel must verify that the prompt does not contain:

  • Social Security numbers, national ID numbers, or government identifiers
  • Financial account numbers, credit card numbers, or banking details
  • Protected health information (PHI) or medical records
  • Authentication credentials, tokens, or cryptographic keys
  • Internal IP addresses, network diagrams, or security configurations
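The checklist above can be partially automated with a pre-submission scanner. The sketch below is an illustrative assumption, not a sanctioned tool: the patterns are simplified examples covering a few of the listed identifiers, and a clean result never replaces human review:

```python
import re

# Simplified example patterns; real deployments would need broader,
# tuned detection (e.g., a dedicated DLP library).
SENSITIVE_PATTERNS = {
    "ssn":        re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),        # US SSN format
    "card":       re.compile(r"\b(?:\d[ -]?){13,16}\b"),       # payment card digits
    "aws_key":    re.compile(r"\bAKIA[0-9A-Z]{16}\b"),         # AWS access key ID
    "private_ip": re.compile(r"\b10\.\d{1,3}\.\d{1,3}\.\d{1,3}\b"),
}

def scan_prompt(prompt: str) -> list[str]:
    """Return names of sensitive patterns found in a prompt.

    An empty list means no known pattern matched -- it does not
    guarantee the prompt is clean.
    """
    return [name for name, pat in SENSITIVE_PATTERNS.items()
            if pat.search(prompt)]
```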

Logging and Retention

All AI interactions involving Internal or higher classification data must be logged in the organization's AI interaction log. Logs must include: timestamp, user identity, AI system used, data classification level, and purpose. Logs must be retained for a minimum of [12/24/36] months per [ORGANIZATION NAME]'s data retention schedule.
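One possible shape for a log record carrying the required fields is sketched below. The field names and helper are illustrative assumptions, not a mandated schema:

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class AIInteractionLog:
    """One AI interaction record; fields mirror the policy's minimum set."""
    timestamp: str            # ISO 8601, UTC
    user_id: str              # user identity
    ai_system: str            # AI system used
    data_classification: str  # e.g. "Internal", "Confidential"
    purpose: str              # stated purpose of the interaction

def make_log_entry(user_id: str, ai_system: str,
                   classification: str, purpose: str) -> dict:
    """Build a log record stamped with the current UTC time."""
    return asdict(AIInteractionLog(
        timestamp=datetime.now(timezone.utc).isoformat(),
        user_id=user_id,
        ai_system=ai_system,
        data_classification=classification,
        purpose=purpose,
    ))
```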

5. Approval Workflow

Document the step-by-step process for requesting and obtaining approval for new AI use cases.

Workflow Overview

All new AI use cases — and material changes to approved use cases — must follow this approval workflow before operational deployment.

Step 1: Use Case Registration

The requesting individual or team completes the AI Use Case Registration Form, which captures:

  • Description of the intended AI use case and business justification
  • AI system or tool to be used (vendor, model, version)
  • Data classification of inputs and expected outputs
  • Intended users and access scope
  • Risk level self-assessment (Low / Medium / High)

Step 2: Risk Assessment

The AI Governance Committee (or designated reviewer) evaluates the submission against:

  • Data sensitivity and regulatory requirements
  • Potential for bias, discrimination, or unfair outcomes
  • Reputational risk and customer impact
  • Technical security posture of the AI system
  • Vendor risk profile (for third-party systems)

Step 3: Approval Decision

| Risk Level | Approver | SLA |
| --- | --- | --- |
| Low | Auto-approved (if training complete) | Immediate |
| Medium | Direct Manager + IT Security Review | 5 business days |
| High | CISO + Business Unit Executive + Legal | 15 business days |
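A workflow system could encode these rules as a simple table. This is a sketch under illustrative assumptions (role identifiers and the fail-closed default are not specified by the policy):

```python
# Illustrative mapping of risk level to required approvers and SLA.
APPROVAL_RULES = {
    "Low":    {"approvers": [],                                "sla_days": 0},
    "Medium": {"approvers": ["direct_manager", "it_security"], "sla_days": 5},
    "High":   {"approvers": ["ciso", "bu_executive", "legal"], "sla_days": 15},
}

def required_approvers(risk_level: str) -> list[str]:
    """Return approver roles for a risk level.

    Unrecognized levels escalate to the High-risk chain (fail closed).
    """
    return APPROVAL_RULES.get(risk_level, APPROVAL_RULES["High"])["approvers"]
```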

Step 4: Conditional Approval

Approvals may include conditions such as: mandatory logging, periodic review, usage limits, restricted user groups, or additional training requirements. All conditions are documented in the approval record and tracked by the AI Governance Committee.

Step 5: Periodic Revalidation

All approved use cases are subject to annual revalidation. Use cases that have not been revalidated within 12 months of approval are automatically suspended until review is completed.
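The auto-suspension rule reduces to a date comparison. A minimal sketch, assuming a 365-day window and that validation dates live in the use-case register:

```python
from datetime import date, timedelta

REVALIDATION_WINDOW = timedelta(days=365)  # annual revalidation per policy

def is_auto_suspended(last_validated: date, today: date) -> bool:
    """True once more than 12 months pass without revalidation."""
    return today - last_validated > REVALIDATION_WINDOW
```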

6. Training & Acknowledgment

Specify the training requirements and acknowledgment process for all personnel.

Mandatory Training Requirements

All personnel within scope of this policy must complete the following training before using any AI system:

AI Awareness Training (All Personnel)

  • Duration: 60 minutes (online, self-paced)
  • Frequency: Upon hire and annually thereafter
  • Content: AI basics, organizational policy overview, data handling rules, prohibited activities, incident reporting
  • Assessment: 80% passing score required on post-training quiz

AI Security Training (Technical Staff)

  • Duration: 4 hours (instructor-led or online)
  • Frequency: Upon assignment to AI projects and annually thereafter
  • Content: Prompt injection attacks, data leakage risks, model security, secure deployment practices, red team fundamentals
  • Assessment: Hands-on lab exercise with 90% passing score

AI Governance Training (Managers and Executives)

  • Duration: 2 hours (instructor-led)
  • Frequency: Annually
  • Content: Regulatory landscape, risk management, approval workflows, incident escalation, vendor oversight

Acknowledgment Process

Upon completing training, each individual must sign the AI Acceptable Use Acknowledgment, confirming they have:

  1. Read and understood the AI Acceptable Use Policy in its entirety
  2. Completed the required training for their role
  3. Agreed to comply with all provisions of this policy
  4. Acknowledged that violations may result in disciplinary action up to and including termination

Signed acknowledgments are retained by [DEPARTMENT] for the duration of the individual's engagement with [ORGANIZATION NAME] plus [2/5/7] years.

Training Records

The [DEPARTMENT] maintains a training completion register. Managers receive monthly reports on training compliance for their direct reports. Personnel who have not completed required training within 30 days of their due date will have AI system access suspended.

7. Enforcement & Violations

Describe the consequences for policy violations and the investigation process.

Violation Categories

| Category | Description | Examples |
| --- | --- | --- |
| Minor | Unintentional policy deviation with no data exposure | Using unapproved AI tool for public data, failing to log interaction |
| Moderate | Policy violation with potential data exposure or risk | Submitting internal data to public AI, bypassing approval workflow |
| Major | Deliberate violation or confirmed data exposure | Submitting restricted data to AI, disabling security controls, repeated violations |
| Critical | Violation resulting in regulatory breach or significant harm | Unauthorized automated decisions in regulated domain, mass data exfiltration via AI |

Consequences

| Category | First Offense | Repeat Offense |
| --- | --- | --- |
| Minor | Verbal warning + remedial training | Written warning |
| Moderate | Written warning + AI access suspension (30 days) | Final written warning + extended suspension |
| Major | Final written warning + AI access revocation + formal investigation | Termination |
| Critical | Immediate AI access revocation + formal investigation + potential termination | Termination + legal referral |

Investigation Process

  1. Detection: Violation identified through monitoring, self-report, or third-party notification
  2. Triage: AI Governance Committee assesses severity within 24 hours
  3. Investigation: [ROLE TITLE] conducts the investigation within 10 business days, including interviews, log analysis, and impact assessment
  4. Determination: Findings documented and violation category assigned
  5. Action: Appropriate consequence applied per the matrix above
  6. Remediation: Root cause addressed — may include additional training, process changes, or technical controls
  7. Record: All investigation records retained for [5/7] years

Continuous Monitoring

[ORGANIZATION NAME] reserves the right to monitor AI system usage, review interaction logs, and audit compliance with this policy. Monitoring will be conducted in accordance with applicable privacy laws and employment regulations.
