AI Asset Inventory Register

Register · INFRASTRUCTURE

Purpose

Template for cataloging all AI assets across the organization (models, platforms and tools, integrations, and data assets) and for documenting the shadow AI discovery process.

Related Controls

ISO A.5 · NIST GV-4 · OWASP ASI09

1. Asset Categories

Define the categories of AI assets that must be inventoried.

AI Asset Taxonomy

[ORGANIZATION NAME] classifies AI assets into the following categories. Every AI asset must be assigned to exactly one primary category.

Category 1: AI Models

  • Definition: Machine learning models, large language models, and other AI algorithms deployed or used by the organization
  • Subcategories:

- Proprietary models (trained or fine-tuned by [ORGANIZATION NAME])

- Commercial models (licensed from vendors, e.g., GPT-4, Claude, Gemini)

- Open-source models (downloaded and self-hosted)

- Embedded models (bundled within vendor products, not directly managed)

Category 2: AI Platforms and Tools

  • Definition: Software platforms, services, and tools that provide AI capabilities
  • Subcategories:

- Enterprise AI platforms (Azure OpenAI, AWS Bedrock, Google Vertex AI)

- AI development tools (code assistants, testing tools, IDE plugins)

- AI-powered SaaS applications (AI features within business applications)

- AI infrastructure (GPU clusters, model serving platforms, vector databases)

Category 3: AI Integrations

  • Definition: Connections between AI systems and other organizational systems
  • Subcategories:

- API integrations (AI model APIs consumed by applications)

- Data pipeline integrations (data feeds into AI systems, RAG pipelines)

- Agent tool integrations (tools and functions callable by AI agents)

- Webhook and event integrations (AI systems triggered by external events)

Category 4: AI Data Assets

  • Definition: Datasets used for training, fine-tuning, evaluation, or retrieval by AI systems
  • Subcategories:

- Training datasets

- Fine-tuning datasets

- Evaluation and benchmark datasets

- RAG knowledge bases and vector stores

- Prompt libraries and template repositories
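For organizations that maintain the register programmatically, the taxonomy above can be sketched as a simple category-to-subcategory mapping. The category and subcategory names come from this section; the variable and function names are illustrative assumptions.

```python
# Sketch of the Section 1 taxonomy as a category -> subcategories mapping.
# Names mirror the text above; the structure itself is an assumption.
AI_ASSET_TAXONOMY = {
    "AI Models": [
        "Proprietary models",
        "Commercial models",
        "Open-source models",
        "Embedded models",
    ],
    "AI Platforms and Tools": [
        "Enterprise AI platforms",
        "AI development tools",
        "AI-powered SaaS applications",
        "AI infrastructure",
    ],
    "AI Integrations": [
        "API integrations",
        "Data pipeline integrations",
        "Agent tool integrations",
        "Webhook and event integrations",
    ],
    "AI Data Assets": [
        "Training datasets",
        "Fine-tuning datasets",
        "Evaluation and benchmark datasets",
        "RAG knowledge bases and vector stores",
        "Prompt libraries and template repositories",
    ],
}

def validate_classification(category: str, subcategory: str) -> bool:
    """Return True if (category, subcategory) is a valid taxonomy pair,
    enforcing the rule that every asset has exactly one primary category."""
    return subcategory in AI_ASSET_TAXONOMY.get(category, [])
```

A registration workflow could call `validate_classification` before accepting a new record, rejecting any asset whose subcategory does not belong to its declared primary category.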

2. Required Fields

Specify the mandatory fields that must be completed for each AI asset in the inventory.

Inventory Record Schema

Every AI asset record must include the following fields. Fields marked with * are mandatory.

Core Fields

| Field | Description | Example | Required |
| --- | --- | --- | --- |
| Asset ID* | Unique identifier following naming convention: AI-[CAT]-[SEQ] | AI-MOD-001 | Yes |
| Asset Name* | Human-readable name | "Azure OpenAI GPT-4 — Customer Support" | Yes |
| Category* | Primary category from the taxonomy | AI Model | Yes |
| Subcategory* | Subcategory from the taxonomy | Commercial Model | Yes |
| Description* | Brief description of purpose and function | "GPT-4 model powering the customer support chatbot" | Yes |
| Status* | Current lifecycle status | Active / Staging / Decommissioned / Suspended | Yes |
| Owner* | Individual accountable for the asset | [ROLE TITLE], [DEPARTMENT] | Yes |
| Department* | Business unit that owns or primarily uses the asset | [DEPARTMENT] | Yes |
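The AI-[CAT]-[SEQ] naming convention from the Core Fields table can be enforced with a small helper. The three-letter category code and three-digit sequence widths are inferred from the AI-MOD-001 example and are assumptions; only MOD is shown in this template, so any other codes would need to be defined by the organization.

```python
import re

# Asset ID convention AI-[CAT]-[SEQ], e.g. AI-MOD-001. The 3-letter /
# 3-digit widths are inferred from the example and are an assumption.
ASSET_ID_PATTERN = re.compile(r"^AI-[A-Z]{3}-\d{3}$")

def is_valid_asset_id(asset_id: str) -> bool:
    """Check that an ID follows the AI-[CAT]-[SEQ] convention."""
    return ASSET_ID_PATTERN.fullmatch(asset_id) is not None

def next_asset_id(category_code: str, existing_ids: list[str]) -> str:
    """Generate the next sequential ID for a category code (e.g. 'MOD'),
    given the IDs already present in the register."""
    prefix = f"AI-{category_code}-"
    seqs = [int(i.rsplit("-", 1)[1]) for i in existing_ids if i.startswith(prefix)]
    return f"{prefix}{(max(seqs, default=0) + 1):03d}"
```

For example, `next_asset_id("MOD", ["AI-MOD-001", "AI-MOD-007"])` yields `"AI-MOD-008"`, keeping sequence numbers monotonic per category.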

Technical Fields

| Field | Description | Example | Required |
| --- | --- | --- | --- |
| Vendor / Provider | AI vendor or model provider | Microsoft / OpenAI | Yes (for external) |
| Model Version | Specific model version or release | gpt-4-turbo-2024-04-09 | Yes (for models) |
| Deployment Type* | How the asset is deployed | Cloud (Vendor-Hosted) / Cloud (Self-Hosted) / On-Premises / Hybrid | Yes |
| Hosting Location* | Where the asset is physically/logically hosted | US East (Azure), EU West (AWS) | Yes |
| Data Classification* | Highest classification of data processed | Tier 1-4 per Data Classification Guide | Yes |
| API Endpoints | API URLs or service endpoints | https://api.example.com/v1/chat | Yes (if applicable) |
| Authentication Method | How access is authenticated | API Key / OAuth 2.0 / SAML / Service Account | Yes |

Governance Fields

| Field | Description | Example | Required |
| --- | --- | --- | --- |
| Risk Level* | Assessed risk level | Low / Medium / High / Critical | Yes |
| Approval Reference | Reference to the use case approval | AIUC-2026-042 | Yes |
| DPA Reference | Data Processing Agreement reference | DPA-MSFT-2026-003 | If vendor-hosted |
| Last Security Review | Date of most recent security assessment | [DATE] | Yes |
| Next Review Due* | Date the next inventory review is due | [DATE] | Yes |
| Regulatory Scope | Applicable regulations | GDPR, HIPAA, PCI-DSS | If applicable |
| Red Team Date | Date of last red team assessment | [DATE] | If user-facing |
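A registration check for the unconditionally mandatory fields (those marked * across the three tables) can be sketched as below. Conditional fields such as "DPA Reference" or "Model Version" depend on asset attributes and are deliberately left out of this minimal version; the field-name set mirrors the tables above.

```python
# Unconditionally mandatory fields (marked * in the schema tables).
# Conditional fields ("Yes (for external)", "If vendor-hosted", etc.)
# are omitted from this minimal sketch.
MANDATORY_FIELDS = {
    "Asset ID", "Asset Name", "Category", "Subcategory", "Description",
    "Status", "Owner", "Department", "Deployment Type", "Hosting Location",
    "Data Classification", "Risk Level", "Next Review Due",
}

def missing_mandatory_fields(record: dict) -> set:
    """Return the mandatory fields that are absent or empty in a record."""
    return {f for f in MANDATORY_FIELDS if not record.get(f)}
```

A record is only accepted into the register when `missing_mandatory_fields(record)` is empty, which also supports the "records with all mandatory fields completed" metric in Section 4.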

3. Shadow AI Discovery

Define the process and tools for discovering unauthorized or unregistered AI usage.

Shadow AI Definition

Shadow AI refers to any AI system, tool, or service used by [ORGANIZATION NAME] personnel that has not been formally approved, registered in the AI asset inventory, or subjected to security and governance review. Shadow AI represents a significant risk because it operates outside established controls.

Discovery Methods

Method 1: Network Traffic Analysis

  • Approach: Monitor outbound network traffic for connections to known AI service endpoints
  • Tool: CASB (Cloud Access Security Broker), web proxy, or firewall logs
  • Frequency: Continuous monitoring with weekly review of new findings
  • Target Domains: Monitor for traffic to known AI API endpoints including:

- api.openai.com, api.anthropic.com, generativelanguage.googleapis.com

- api.mistral.ai, api.cohere.ai, api.together.xyz

- Consumer AI interfaces: chat.openai.com, claude.ai, gemini.google.com, copilot.microsoft.com

- AI coding tools: api.github.com/copilot, codeium.com, cursor.sh
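In practice, CASB or proxy platforms handle this matching natively, but the core of Method 1 can be sketched as a domain check against the list above. The one-domain-per-log-entry format and the subdomain-suffix matching rule are assumptions for illustration.

```python
# Known AI endpoint domains listed under Method 1. The matching rule
# (exact match or subdomain) is an assumption for illustration.
AI_DOMAINS = {
    "api.openai.com", "api.anthropic.com",
    "generativelanguage.googleapis.com",
    "api.mistral.ai", "api.cohere.ai", "api.together.xyz",
    "chat.openai.com", "claude.ai", "gemini.google.com",
    "copilot.microsoft.com", "codeium.com", "cursor.sh",
}

def flag_ai_traffic(log_domains) -> set:
    """Return domains from a proxy/firewall log that are, or are
    subdomains of, a known AI endpoint."""
    hits = set()
    for d in log_domains:
        for known in AI_DOMAINS:
            if d == known or d.endswith("." + known):
                hits.add(d)
    return hits
```

Flagged domains feed the weekly review of new findings; anything not traceable to a registered asset becomes a shadow AI discovery log entry.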

Method 2: Endpoint Agent Analysis

  • Approach: Use endpoint agents (EDR/UEM) to scan for installed AI applications, browser extensions, and IDE plugins
  • Tool: Endpoint detection and response (EDR) or unified endpoint management (UEM)
  • Frequency: Weekly scan with monthly trend analysis
  • Target: AI-related browser extensions, desktop applications, IDE plugins not on the approved list

Method 3: Expense and Procurement Review

  • Approach: Review expense reports, corporate credit card transactions, and procurement requests for AI-related purchases
  • Tool: Expense management system, procurement platform
  • Frequency: Monthly review
  • Target: Subscriptions to AI services, API credit purchases, AI training or certification not routed through IT

Method 4: Employee Self-Reporting

  • Approach: Provide a low-friction mechanism for employees to report AI tools they are using that may not be registered
  • Tool: Dedicated form or email alias (e.g., ai-inventory@[ORGANIZATION DOMAIN])
  • Frequency: Continuous, with quarterly amnesty campaigns (no-penalty registration period)
  • Incentive: No disciplinary action for self-reporting during amnesty periods; focus on bringing tools into governance

Discovery Response Process

When shadow AI is discovered:

  1. Log: Record the finding in the shadow AI discovery log with: date, discovery method, tool/service identified, users involved, data classification assessment
  2. Assess: Evaluate the risk level and determine if the AI tool serves a legitimate business need
  3. Decide: Either (a) bring the tool into governance through the standard approval process or (b) block access and notify affected users with approved alternatives
  4. Remediate: If sensitive data was processed through the shadow AI tool, initiate incident response procedures per the AI Incident Response Plan
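A shadow AI discovery log record carrying the fields from step 1, plus the govern/block decision from step 3, might look like the sketch below. The class and attribute names are illustrative assumptions, not a prescribed schema.

```python
from dataclasses import dataclass
from datetime import date

# Sketch of a shadow AI discovery log record with the fields listed in
# step 1 of the response process; names are illustrative assumptions.
@dataclass
class ShadowAIFinding:
    discovered_on: date
    discovery_method: str          # e.g. "Network Traffic Analysis"
    tool_identified: str           # tool/service identified
    users_involved: list
    data_classification: str       # per the Data Classification Guide
    decision: str = "pending"      # "govern" | "block" | "pending" (step 3)

    def resolve(self, decision: str) -> None:
        """Record the step-3 outcome: bring into governance, or block."""
        if decision not in ("govern", "block"):
            raise ValueError("decision must be 'govern' or 'block'")
        self.decision = decision
```

Findings still marked `"pending"` past the resolution window surface directly in the "shadow AI findings resolved within 30 days" metric in Section 4.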

4. Review Cadence

Define how often the inventory is reviewed, validated, and updated.

Review Schedule

| Review Type | Frequency | Responsible | Scope | Output |
| --- | --- | --- | --- | --- |
| Full inventory validation | Semi-annually | [ROLE TITLE], [DEPARTMENT] | All registered AI assets | Validated inventory + gap report |
| New asset registration review | Weekly | AI Governance Committee designee | Newly registered assets | Approval / rejection / request for additional info |
| Shadow AI discovery review | Monthly | IT Security | Shadow AI discovery findings | Updated inventory + remediation actions |
| Decommissioned asset review | Quarterly | Asset owners | Assets marked for decommission | Confirmed decommission or status update |
| Risk reassessment | Annually | AI Governance Committee | All active AI assets | Updated risk ratings |
| Vendor/model update review | Ongoing (triggered) | Asset owners | Assets affected by vendor changes | Updated technical fields + risk reassessment if material |

Validation Procedures

During each full inventory validation:

  1. Completeness Check: Compare inventory against shadow AI discovery findings, procurement records, and network monitoring to identify unregistered assets
  2. Accuracy Check: Asset owners confirm that all fields are current and accurate for each asset in their ownership
  3. Status Check: Verify that assets marked as "Active" are still in use, and assets no longer in use are marked for decommission
  4. Compliance Check: Verify that all required governance artifacts (DPAs, approvals, security reviews) are current and not expired
  5. Risk Check: Reassess risk levels based on changes in the threat landscape, regulatory environment, or organizational risk appetite

Inventory Metrics

| Metric | Target | Tracking |
| --- | --- | --- |
| Inventory completeness (registered vs. discovered) | ≥ 95% | Semi-annually |
| Records with all mandatory fields completed | 100% | Monthly |
| Assets with current security review (within 12 months) | ≥ 95% | Quarterly |
| Assets with valid DPA (where required) | 100% | Quarterly |
| Shadow AI findings resolved within 30 days | ≥ 90% | Monthly |
| Mean time to register a new AI asset | ≤ 5 business days | Monthly |
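The completeness metric compares the register against all assets surfaced by discovery sources (shadow AI findings, procurement records, network monitoring). One way to compute it, assuming both sides are reduced to comparable asset identifiers:

```python
def inventory_completeness(registered_ids, discovered_ids) -> float:
    """Percentage of discovered assets that appear in the register
    (the 'registered vs. discovered' metric; target >= 95%)."""
    discovered = set(discovered_ids)
    if not discovered:
        return 100.0  # nothing discovered: vacuously complete
    covered = len(discovered & set(registered_ids))
    return 100.0 * covered / len(discovered)
```

Assets in `discovered_ids` but not `registered_ids` are exactly the unregistered-asset gaps that the completeness check in the validation procedures is meant to surface.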

Change Management

Any material change to a registered AI asset requires an inventory update within 5 business days. Material changes include:

  • Model version upgrade or swap
  • Change in data classification of processed data
  • Change in deployment type or hosting location
  • Change in vendor or licensing terms
  • Change in user population or use case scope
  • Addition or removal of agent tool integrations
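The 5-business-day update window after a material change can be computed mechanically. A minimal sketch that skips weekends (organizational holidays are out of scope here and would need a calendar source):

```python
from datetime import date, timedelta

def update_deadline(change_date: date, business_days: int = 5) -> date:
    """Deadline for the required inventory update: N business days after
    a material change. Weekends are skipped; holidays are not handled."""
    d = change_date
    remaining = business_days
    while remaining > 0:
        d += timedelta(days=1)
        if d.weekday() < 5:  # Monday-Friday
            remaining -= 1
    return d
```

For a change logged on Monday 2026-01-05, `update_deadline` returns Monday 2026-01-12, the fifth business day after the change.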

Changes to the inventory schema or review procedures require approval from the AI Governance Committee.

Inventory Owner: [ROLE TITLE], [DEPARTMENT]

Last Full Validation: [DATE]

Next Full Validation Due: [DATE]
