AI Asset Inventory Register
Purpose
Template for cataloging all AI assets across the organization, including models, tools, and integrations, and for defining the shadow AI discovery process.
Related Controls
1. Asset Categories
Define the categories of AI assets that must be inventoried.
AI Asset Taxonomy
[ORGANIZATION NAME] classifies AI assets into the following categories. Every AI asset must be assigned to exactly one primary category.
Category 1: AI Models
- Definition: Machine learning models, large language models, and other AI algorithms deployed or used by the organization
- Subcategories:
  - Proprietary models (trained or fine-tuned by [ORGANIZATION NAME])
  - Commercial models (licensed from vendors, e.g., GPT-4, Claude, Gemini)
  - Open-source models (downloaded and self-hosted)
  - Embedded models (bundled within vendor products, not directly managed)
Category 2: AI Platforms and Tools
- Definition: Software platforms, services, and tools that provide AI capabilities
- Subcategories:
  - Enterprise AI platforms (Azure OpenAI, AWS Bedrock, Google Vertex AI)
  - AI development tools (code assistants, testing tools, IDE plugins)
  - AI-powered SaaS applications (AI features within business applications)
  - AI infrastructure (GPU clusters, model serving platforms, vector databases)
Category 3: AI Integrations
- Definition: Connections between AI systems and other organizational systems
- Subcategories:
  - API integrations (AI model APIs consumed by applications)
  - Data pipeline integrations (data feeds into AI systems, RAG pipelines)
  - Agent tool integrations (tools and functions callable by AI agents)
  - Webhook and event integrations (AI systems triggered by external events)
Category 4: AI Data Assets
- Definition: Datasets used for training, fine-tuning, evaluation, or retrieval by AI systems
- Subcategories:
  - Training datasets
  - Fine-tuning datasets
  - Evaluation and benchmark datasets
  - RAG knowledge bases and vector stores
  - Prompt libraries and template repositories
2. Required Fields
Specify the mandatory fields that must be completed for each AI asset in the inventory.
Inventory Record Schema
Every AI asset record must include the following fields. Fields marked with * are mandatory.
Core Fields
| Field | Description | Example | Required |
|---|---|---|---|
| Asset ID* | Unique identifier following naming convention: AI-[CAT]-[SEQ] | AI-MOD-001 | Yes |
| Asset Name* | Human-readable name | "Azure OpenAI GPT-4 — Customer Support" | Yes |
| Category* | Primary category from the taxonomy | AI Model | Yes |
| Subcategory* | Subcategory from the taxonomy | Commercial Model | Yes |
| Description* | Brief description of purpose and function | "GPT-4 model powering the customer support chatbot" | Yes |
| Status* | Current lifecycle status | Active / Staging / Decommissioned / Suspended | Yes |
| Owner* | Individual accountable for the asset | [ROLE TITLE], [DEPARTMENT] | Yes |
| Department* | Business unit that owns or primarily uses the asset | [DEPARTMENT] | Yes |
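The `AI-[CAT]-[SEQ]` naming convention above can be enforced programmatically. The following is a minimal sketch; the three-letter category codes (`MOD`, `PLT`, `INT`, `DAT`) are illustrative assumptions mapping to the four taxonomy categories, not codes fixed by this template.

```python
import re

# Hypothetical category codes -- the template does not fix these;
# MOD/PLT/INT/DAT are illustrative mappings to the four taxonomy categories.
CATEGORY_CODES = {"MOD", "PLT", "INT", "DAT"}

ASSET_ID_PATTERN = re.compile(r"^AI-(?P<cat>[A-Z]{3})-(?P<seq>\d{3})$")

def is_valid_asset_id(asset_id: str) -> bool:
    """Check an Asset ID against the AI-[CAT]-[SEQ] convention."""
    match = ASSET_ID_PATTERN.match(asset_id)
    return bool(match) and match.group("cat") in CATEGORY_CODES
```

A check like this can run as part of new asset registration so malformed IDs (e.g., `AI-MOD-1`) are rejected before they enter the register.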
Technical Fields
| Field | Description | Example | Required |
|---|---|---|---|
| Vendor / Provider | AI vendor or model provider | Microsoft / OpenAI | Yes (for external) |
| Model Version | Specific model version or release | gpt-4-turbo-2024-04-09 | Yes (for models) |
| Deployment Type* | How the asset is deployed | Cloud (Vendor-Hosted) / Cloud (Self-Hosted) / On-Premises / Hybrid | Yes |
| Hosting Location* | Where the asset is physically/logically hosted | US East (Azure), EU West (AWS) | Yes |
| Data Classification* | Highest classification of data processed | Tier 1-4 per Data Classification Guide | Yes |
| API Endpoints | API URLs or service endpoints | https://api.example.com/v1/chat | Yes (if applicable) |
| Authentication Method* | How access is authenticated | API Key / OAuth 2.0 / SAML / Service Account | Yes |
Governance Fields
| Field | Description | Example | Required |
|---|---|---|---|
| Risk Level* | Assessed risk level | Low / Medium / High / Critical | Yes |
| Approval Reference* | Reference to the use case approval | AIUC-2026-042 | Yes |
| DPA Reference | Data Processing Agreement reference | DPA-MSFT-2026-003 | If vendor-hosted |
| Last Security Review* | Date of most recent security assessment | [DATE] | Yes |
| Next Review Due* | Date the next inventory review is due | [DATE] | Yes |
| Regulatory Scope | Applicable regulations | GDPR, HIPAA, PCI-DSS | If applicable |
| Red Team Date | Date of last red team assessment | [DATE] | If user-facing |
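The record schema above can be represented in code so that mandatory-field completeness (the 100% metric in Section 4) is machine-checkable. This is a sketch covering an illustrative subset of fields, not the full schema; the Python field names are assumptions derived from the tables above.

```python
from dataclasses import MISSING, dataclass, fields
from typing import Optional

@dataclass
class AIAssetRecord:
    # Core mandatory fields (starred in the schema above)
    asset_id: str
    asset_name: str
    category: str
    subcategory: str
    description: str
    status: str
    owner: str
    department: str
    # Unconditionally mandatory technical and governance fields
    deployment_type: str
    hosting_location: str
    data_classification: str
    risk_level: str
    next_review_due: str
    # Conditional fields default to None when not applicable
    vendor: Optional[str] = None
    model_version: Optional[str] = None
    dpa_reference: Optional[str] = None

def missing_mandatory_fields(record: AIAssetRecord) -> list:
    """Return the names of mandatory fields that were left empty.

    Fields without a default are mandatory; conditional fields
    default to None and are skipped.
    """
    return [f.name for f in fields(record)
            if f.default is MISSING and not getattr(record, f.name)]
```

Running `missing_mandatory_fields` across the register gives the "records with all mandatory fields completed" figure directly.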
3. Shadow AI Discovery
Define the process and tools for discovering unauthorized or unregistered AI usage.
Shadow AI Definition
Shadow AI refers to any AI system, tool, or service used by [ORGANIZATION NAME] personnel that has not been formally approved, registered in the AI asset inventory, or subjected to security and governance review. Shadow AI represents a significant risk because it operates outside established controls.
Discovery Methods
Method 1: Network Traffic Analysis
- Approach: Monitor outbound network traffic for connections to known AI service endpoints
- Tool: CASB (Cloud Access Security Broker), web proxy, or firewall logs
- Frequency: Continuous monitoring with weekly review of new findings
- Target Domains: Monitor for traffic to known AI API endpoints including:
  - api.openai.com, api.anthropic.com, generativelanguage.googleapis.com
  - api.mistral.ai, api.cohere.ai, api.together.xyz
  - Consumer AI interfaces: chat.openai.com, claude.ai, gemini.google.com, copilot.microsoft.com
  - AI coding tools: api.github.com/copilot, codeium.com, cursor.sh
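A first-pass scan of proxy or firewall logs for the target domains above might look like the following sketch. The space-delimited log format (`timestamp user destination-host status`) is a hypothetical assumption; adapt the parsing to your CASB or proxy's actual export format.

```python
# Known AI endpoints to flag, taken from the target-domain list above.
AI_DOMAINS = {
    "api.openai.com", "api.anthropic.com",
    "generativelanguage.googleapis.com",
    "api.mistral.ai", "api.cohere.ai", "api.together.xyz",
    "chat.openai.com", "claude.ai", "gemini.google.com",
    "copilot.microsoft.com", "codeium.com", "cursor.sh",
}

def flag_ai_traffic(log_lines):
    """Yield (user, domain) pairs for log lines hitting AI endpoints.

    Assumes a hypothetical space-delimited log format:
    '<timestamp> <user> <destination-host> <status>'.
    """
    for line in log_lines:
        parts = line.split()
        if len(parts) >= 3 and parts[2] in AI_DOMAINS:
            yield parts[1], parts[2]
```

Flagged pairs feed the weekly review: any domain hit by a user not associated with a registered asset becomes a shadow AI finding.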
Method 2: Endpoint Agent Analysis
- Approach: Use endpoint agents (EDR/UEM) to scan for installed AI applications, browser extensions, and IDE plugins
- Tool: Endpoint detection and response (EDR) or unified endpoint management (UEM)
- Frequency: Weekly scan with monthly trend analysis
- Target: AI-related browser extensions, desktop applications, IDE plugins not on the approved list
Method 3: Expense and Procurement Review
- Approach: Review expense reports, corporate credit card transactions, and procurement requests for AI-related purchases
- Tool: Expense management system, procurement platform
- Frequency: Monthly review
- Target: Subscriptions to AI services, API credit purchases, AI training or certification not routed through IT
Method 4: Employee Self-Reporting
- Approach: Provide a low-friction mechanism for employees to report AI tools they are using that may not be registered
- Tool: Dedicated form or email alias (e.g., ai-inventory@[ORGANIZATION DOMAIN])
- Frequency: Continuous, with quarterly amnesty campaigns (no-penalty registration period)
- Incentive: No disciplinary action for self-reporting during amnesty periods; focus on bringing tools into governance
Discovery Response Process
When shadow AI is discovered:
1. Log: Record the finding in the shadow AI discovery log with: date, discovery method, tool/service identified, users involved, data classification assessment
2. Assess: Evaluate the risk level and determine if the AI tool serves a legitimate business need
3. Decide: Either (a) bring the tool into governance through the standard approval process or (b) block access and notify affected users with approved alternatives
4. Remediate: If sensitive data was processed through the shadow AI tool, initiate incident response procedures per the AI Incident Response Plan
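The Log/Decide/Remediate steps above can be sketched as a small helper pair. The field names and action labels are illustrative assumptions; the decision logic simply mirrors the branches described in the process.

```python
from datetime import date

def log_shadow_ai_finding(log, *, method, tool, users, data_classification):
    """Append a finding to the shadow AI discovery log (the Log step)."""
    entry = {
        "date": date.today().isoformat(),
        "discovery_method": method,
        "tool": tool,
        "users": list(users),
        "data_classification": data_classification,
        "status": "open",
    }
    log.append(entry)
    return entry

def route_finding(entry, *, legitimate_need, sensitive_data):
    """Apply the Decide/Remediate branches as a simple decision rule."""
    actions = []
    # Decide: governance approval for legitimate needs, otherwise block.
    actions.append("approval_process" if legitimate_need else "block_and_notify")
    # Remediate: sensitive data triggers the AI Incident Response Plan.
    if sensitive_data:
        actions.append("incident_response")
    entry["actions"] = actions
    return actions
```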
4. Review Cadence
Define how often the inventory is reviewed, validated, and updated.
Review Schedule
| Review Type | Frequency | Responsible | Scope | Output |
|---|---|---|---|---|
| Full inventory validation | Semi-annually | [ROLE TITLE], [DEPARTMENT] | All registered AI assets | Validated inventory + gap report |
| New asset registration review | Weekly | AI Governance Committee designee | Newly registered assets | Approval / rejection / request for additional info |
| Shadow AI discovery review | Monthly | IT Security | Shadow AI discovery findings | Updated inventory + remediation actions |
| Decommissioned asset review | Quarterly | Asset owners | Assets marked for decommission | Confirmed decommission or status update |
| Risk reassessment | Annually | AI Governance Committee | All active AI assets | Updated risk ratings |
| Vendor/model update review | Ongoing (triggered) | Asset owners | Assets affected by vendor changes | Updated technical fields + risk reassessment if material |
Validation Procedures
During each full inventory validation:
- Completeness Check: Compare inventory against shadow AI discovery findings, procurement records, and network monitoring to identify unregistered assets
- Accuracy Check: Asset owners confirm that all fields are current and accurate for each asset in their ownership
- Status Check: Verify that assets marked as "Active" are still in use, and assets no longer in use are marked for decommission
- Compliance Check: Verify that all required governance artifacts (DPAs, approvals, security reviews) are current and not expired
- Risk Check: Reassess risk levels based on changes in the threat landscape, regulatory environment, or organizational risk appetite
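The Completeness Check above reduces to a set comparison between registered asset identifiers and identifiers surfaced by discovery sources. A minimal sketch, assuming both sides can be normalized to comparable IDs:

```python
def completeness_check(registered_ids, discovered_ids):
    """Compare the inventory against discovery sources (Completeness Check).

    Returns the unregistered assets found by discovery, plus the
    registered share of all assets seen across both sources -- the
    basis for the inventory completeness metric.
    """
    registered = set(registered_ids)
    discovered = set(discovered_ids)
    unregistered = sorted(discovered - registered)
    total = len(registered | discovered)
    ratio = len(registered) / total if total else 1.0
    return unregistered, ratio
```

Anything in `unregistered` goes into the gap report; the ratio is tracked against the >= 95% completeness target.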
Inventory Metrics
| Metric | Target | Tracking |
|---|---|---|
| Inventory completeness (registered vs. discovered) | ≥ 95% | Semi-annually |
| Records with all mandatory fields completed | 100% | Monthly |
| Assets with current security review (within 12 months) | ≥ 95% | Quarterly |
| Assets with valid DPA (where required) | 100% | Quarterly |
| Shadow AI findings resolved within 30 days | ≥ 90% | Monthly |
| Mean time to register a new AI asset | ≤ 5 business days | Monthly |
Change Management
Any material change to a registered AI asset requires an inventory update within 5 business days. Material changes include:
- Model version upgrade or swap
- Change in data classification of processed data
- Change in deployment type or hosting location
- Change in vendor or licensing terms
- Change in user population or use case scope
- Addition or removal of agent tool integrations
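The 5-business-day update window can be computed with a naive business-day count. This sketch counts Monday-Friday only and does not account for public holidays, which a production calendar would need.

```python
from datetime import date, timedelta

def update_deadline(change_date, business_days=5):
    """Latest inventory-update date: N business days after a material change.

    Counts Monday-Friday only; public holidays are not considered.
    """
    current = change_date
    remaining = business_days
    while remaining > 0:
        current += timedelta(days=1)
        if current.weekday() < 5:  # Mon=0 .. Fri=4
            remaining -= 1
    return current
```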
Changes to the inventory schema or review procedures require approval from the AI Governance Committee.
Inventory Owner: [ROLE TITLE], [DEPARTMENT]
Last Full Validation: [DATE]
Next Full Validation Due: [DATE]