About the Identifier Scheme

How risks are organized and referenced in the AI Risk Registry

The AI Risk Registry uses a hierarchical identifier scheme inspired by library classification systems. These identifiers provide stable, human-readable codes that make it easy to reference, cite, and discuss specific AI risks.

Why Structured Identifiers?

As AI risk taxonomies grow and evolve, having stable identifiers becomes essential for:

  • Citation and reference — Easily cite specific risks in reports, policies, and research
  • Cross-referencing — Map risks to other taxonomies like NIST AI RMF, OWASP LLM Top 10, and MITRE ATLAS
  • Communication — Provide a shared vocabulary for discussing AI risks across organizations
  • Stability — Identifiers never change once assigned, ensuring links and references remain valid

Identifier Format

Every risk in the registry has a unique identifier following this pattern:

RR-GGG.NNN

| Component | Meaning |
|-----------|---------|
| RR | Risk Registry prefix |
| GGG | Risk group code (3 digits) |
| NNN | Individual risk number within the group (3 digits) |

Example: RR-110.007 breaks down as:

  • RR — Risk Registry
  • 110 — The Prompt Injection & Goal Hijacking risk group
  • 007 — The 7th risk in that group (Indirect Prompt Injection)
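
A minimal sketch of how an RR-ID could be parsed and validated in tooling. The parse_rr_id helper below is illustrative, not part of the registry itself:

```python
import re
from typing import NamedTuple

# The RR-GGG.NNN format described above: the literal "RR-" prefix,
# a 3-digit group code, a dot, and a 3-digit risk number.
RR_ID_PATTERN = re.compile(r"^RR-(\d{3})\.(\d{3})$")

class RiskId(NamedTuple):
    group: str   # e.g. "110" (Prompt Injection & Goal Hijacking)
    number: str  # e.g. "007" (7th risk within the group)

def parse_rr_id(identifier: str) -> RiskId:
    """Split an RR-ID into its group code and risk number, or raise."""
    match = RR_ID_PATTERN.fullmatch(identifier)
    if match is None:
        raise ValueError(f"not a valid RR-ID: {identifier!r}")
    return RiskId(group=match.group(1), number=match.group(2))

print(parse_rr_id("RR-110.007"))  # RiskId(group='110', number='007')
```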

How Risks Are Classified

When a new risk is identified, it is classified according to the following assignment rules to ensure consistency.

Rule 1: Shelf Selection

Assign the risk to the shelf (the hundreds-digit range, i.e. the objective group) where the primary impact manifests:

  • 1xx: Input-level manipulation of model behavior
  • 2xx: Data, training, or model artifact risks
  • 3xx: Downstream harm via outputs or actions
  • 4xx: Governance, compliance, and lifecycle failures at the organizational layer
  • 5xx: Model development and alignment failures
  • 6xx: Socioeconomic and environmental impacts
  • 7xx: Human-AI interaction risks
  • 8xx: Capability-combination thresholds and multi-agent/systemic interaction patterns

Rule 2: Group Selection

Choose the group that best describes the attack vector or failure mode, not the attacker's ultimate goal.

Rule 3: Tiebreakers

When a risk spans multiple groups:

  1. Prefer the group describing the primary harm mechanism
  2. If still ambiguous, prefer the group where detection or mitigation would occur
  3. If still ambiguous, prefer the earliest stage in the kill chain
  4. Document secondary groups as related cross-references

Content boundary guidance:

  • If harm arises primarily from falsity or misleading authority, assign to RR-350 (Information Integrity & Advice)
  • If harm arises primarily from intrinsically harmful content (regardless of truth), assign to RR-340 (Content Safety & Abuse)

Interface-class guidance:

  • Modality interaction or multimodal processing failures stay in RR-390 (Multi-Modal and Cross-Modal Risks) unless the risk is a capability-combination threshold, in which case assign it to the 8xx range
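
One way tooling might encode the Rule 3 tiebreakers above is as an ordered series of filters over candidate groups. This is a sketch under assumed metadata; the CandidateGroup fields are invented for illustration, not registry machinery:

```python
from dataclasses import dataclass

@dataclass
class CandidateGroup:
    code: str                     # e.g. "350"
    primary_harm_mechanism: bool  # tiebreaker 1
    detection_point: bool         # tiebreaker 2: where detection/mitigation occurs
    kill_chain_stage: int         # tiebreaker 3: lower = earlier in the kill chain

def pick_group(candidates: list[CandidateGroup]) -> CandidateGroup:
    """Apply the tiebreakers in order; unchosen groups become cross-references."""
    pool = candidates
    for rule in (lambda c: c.primary_harm_mechanism,
                 lambda c: c.detection_point):
        narrowed = [c for c in pool if rule(c)]
        if len(narrowed) == 1:
            return narrowed[0]
        pool = narrowed or pool  # rule didn't discriminate; keep the wider pool
    return min(pool, key=lambda c: c.kill_chain_stage)
```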

Taxonomy Structure

The registry organizes risks into Objective Groups (broad categories) and Risk Groups (specific risk families). The hundreds digit of the group code indicates the objective group:

| Code Range | Objective Group | Description |
|------------|-----------------|-------------|
| 1xx | Input Manipulation & Identity | Attacks that manipulate model inputs or identity |
| 2xx | Data, Training & Model Artifacts | Risks to training data, models, and pipelines |
| 3xx | Output & Action Harms | Harmful outputs, actions, and downstream effects |
| 4xx | Governance & Compliance | Organizational and regulatory risks |
| 5xx | Model Development & Alignment | Alignment failures and capability risks |
| 6xx | Socioeconomic & Environmental | Broader societal and environmental impacts |
| 7xx | Human-AI Interaction | Risks arising from how humans interact with AI |
| 8xx | Compound & System Patterns | Emergent risks from AI system composition |
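
Because the hundreds digit carries the objective group, tools can derive it directly from a group code. A sketch, with the table above transcribed into a lookup:

```python
# Objective group keyed by the hundreds digit of the group code (per the table above).
OBJECTIVE_GROUPS = {
    "1": "Input Manipulation & Identity",
    "2": "Data, Training & Model Artifacts",
    "3": "Output & Action Harms",
    "4": "Governance & Compliance",
    "5": "Model Development & Alignment",
    "6": "Socioeconomic & Environmental",
    "7": "Human-AI Interaction",
    "8": "Compound & System Patterns",
}

def objective_group(group_code: str) -> str:
    """Map a 3-digit group code such as '350' to its objective group."""
    return OBJECTIVE_GROUPS[group_code[0]]

print(objective_group("350"))  # Output & Action Harms
```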

Risk Groups

Within each objective group, specific risk groups are assigned codes. Here are some examples:

Input Manipulation & Identity (1xx)

| Code | Risk Group |
|------|------------|
| 110 | Prompt Injection & Goal Hijacking |
| 120 | Jailbreak Attacks |
| 130 | Masquerading & Impersonation |
| 140 | Communication Channel Compromise |
| 150 | Persistent Compromise |

Data, Training & Model Artifacts (2xx)

| Code | Risk Group |
|------|------------|
| 210 | Feedback Loop Manipulation |
| 220 | Sabotage & Integrity Degradation |
| 230 | Data Privacy Violations |
| 240 | AI Supply Chain Compromise |
| 250 | Model Theft & Extraction |
| 260 | Adversarial Evasion |

Output & Action Harms (3xx)

| Code | Risk Group |
|------|------------|
| 310 | Action-Space and Integration Abuse |
| 320 | Availability Abuse |
| 330 | Privilege Compromise |
| 340 | Content Safety & Abuse |
| 350 | Information Integrity & Advice |
| 360 | Surveillance |
| 370 | Cyber-Physical and Sensor Attacks |
| 380 | Malicious Application & Weaponization |
| 390 | Multi-Modal and Cross-Modal Risks |

Governance & Compliance (4xx)

| Code | Risk Group |
|------|------------|
| 410 | Regulatory & Legal Compliance |
| 420 | Governance & Accountability Gaps |
| 430 | Lifecycle & Change Management |

Model Development & Alignment (5xx)

| Code | Risk Group |
|------|------------|
| 510 | Goal Misalignment & Control Loss |
| 520 | Dangerous Capabilities |
| 530 | Model Capability & Robustness Limitations |
| 540 | Transparency & Interpretability Deficits |

Socioeconomic & Environmental (6xx)

| Code | Risk Group |
|------|------------|
| 600 | Systemic Socioeconomic Risks |
| 610 | Power Concentration & Access Inequality |
| 620 | Labor Market & Economic Inequality |
| 630 | Creative Economy & Intellectual Property |
| 640 | AI Race & Competitive Dynamics |
| 660 | Environmental Impact |
| 670 | Fairness and Algorithmic Bias |

Human-AI Interaction (7xx)

| Code | Risk Group |
|------|------------|
| 710 | Overreliance and Unsafe Use |
| 720 | Loss of Human Agency and Autonomy |
| 750 | AI Welfare & Moral Status |

Compound & System Patterns (8xx)

| Code | Risk Group |
|------|------------|
| 810 | Capability-Combination Thresholds |
| 820 | Multi-Agent & Systemic Risks |

Stability Guarantees

Identifiers are permanent. Once a risk is assigned an RR-ID, that identifier will never be reassigned to a different risk. This ensures that:

  • External references remain valid indefinitely
  • Historical analysis can track risks over time
  • Cross-taxonomy mappings stay accurate

When risks are merged, split, or deprecated:

| Scenario | What Happens |
|----------|--------------|
| Merged risks | The lowest ID is kept; others are marked deprecated |
| Split risks | The original ID stays with one child; new IDs are assigned to others |
| Deprecated risks | The ID is preserved with a deprecation notice |
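
A sketch of how a registry record might implement these rules so the guarantees hold; the field names (status, superseded_by) are assumptions for illustration:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class RegistryEntry:
    rr_id: str                           # permanent; never reassigned
    title: str
    status: str = "active"               # "active" or "deprecated"
    superseded_by: Optional[str] = None  # set when a risk is merged into another

def merge(keep: RegistryEntry, absorbed: RegistryEntry) -> None:
    """Merged risks: the lowest ID survives; the other is deprecated, never deleted."""
    # Lexicographic comparison is safe: RR-IDs are fixed-width and zero-padded.
    assert keep.rr_id < absorbed.rr_id, "convention: the lowest ID is kept"
    absorbed.status = "deprecated"
    absorbed.superseded_by = keep.rr_id  # the old ID remains resolvable
```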

Using the Identifiers

You can use RR-IDs to:

  1. Reference risks in documentation — "Our system implements controls for RR-110.007 (Indirect Prompt Injection)"
  2. Map to compliance frameworks — Cross-reference RR-IDs with NIST AI RMF, ISO 42001, or other standards
  3. Track risk assessments — Use RR-IDs as stable keys in your risk management systems
  4. Communicate with stakeholders — Provide precise risk references in reports and discussions
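
For instance, a risk-assessment tracker might key its records on RR-IDs. Everything about the record schema below is invented for illustration:

```python
# Hypothetical assessment records keyed by RR-ID. Because identifiers are
# permanent, the keys stay valid as the registry evolves.
assessments: dict[str, dict] = {
    "RR-110.007": {
        "risk": "Indirect Prompt Injection",
        "owner": "appsec-team",          # placeholder
        "controls": ["input filtering", "least-privilege tool access"],
        "last_reviewed": "2025-01-15",   # placeholder
    },
}
```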

Cross-References

Each risk in the registry includes mappings to external taxonomies where applicable, including:

  • NIST AI Risk Management Framework
  • OWASP LLM Top 10
  • MITRE ATLAS
  • IBM AI Risk Atlas
  • Cisco AI Taxonomy
  • EU AI Act risk categories
  • MIT AI Risk Repository

These mappings help you understand how risks relate to frameworks you may already be using.
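
In machine-readable form, such mappings can be as simple as a table from RR-ID to external identifiers. The mapped values below are illustrative placeholders, not the registry's authoritative mapping:

```python
# Illustrative cross-reference table; consult the registry entry itself for
# the authoritative mappings.
CROSS_REFERENCES: dict[str, dict[str, str]] = {
    "RR-110.007": {
        "OWASP LLM Top 10": "LLM01: Prompt Injection",
        "MITRE ATLAS": "AML.T0051",  # assumed technique ID, for illustration only
    },
}

def frameworks_for(rr_id: str) -> list[str]:
    """List the external frameworks a given risk maps to."""
    return sorted(CROSS_REFERENCES.get(rr_id, {}))
```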