AI Risk Register Template: A Practical Framework for Teams
A ready-to-use AI risk register template with pre-filled examples for five common use cases. Includes scoring methodology, review cadence, and a structured format for tracking AI risks across your organization.
You are deploying AI systems in your organization and need a structured way to identify, assess, and track risks. An AI risk register is the standard tool for this job.
What is an AI risk register
A risk register is a structured document that catalogs known risks for each AI system your organization uses or builds. It records what can go wrong, how likely it is, how severe the impact would be, what controls are in place, and who is responsible for monitoring.
Unlike a one-time risk assessment, a risk register is a living document. It gets reviewed at regular intervals, updated when systems change, and referenced during procurement, incident response, and compliance audits.
Why teams need one
Organizations deploying AI face risks that traditional software risk management does not cover: hallucination, bias amplification, data leakage through model prompts, regulatory non-compliance under the EU AI Act, and vendor lock-in from proprietary model dependencies. A dedicated AI risk register makes these risks visible and trackable.
The NIST AI Risk Management Framework (AI RMF 1.0) recommends organizations "establish and maintain organizational AI risk profiles." ISO 42001 requires documented risk treatment plans. If your organization operates in the EU, the AI Act mandates risk management systems for high-risk AI applications.
The register format
Each row in the register represents one identified risk for one AI use case. Here are the columns and what they capture:
| Column | Purpose |
|---|---|
| Risk ID | Unique identifier for tracking |
| AI Use Case | The specific system or application |
| Risk Category | Classification (data, reliability, compliance, bias, security) |
| Severity | Impact if the risk materializes (1-5 scale) |
| Likelihood | Probability of occurrence (1-5 scale) |
| Current Controls | Mitigation measures already in place |
| Residual Risk | Risk remaining after controls (Low, Medium, High) |
| Owner | Person or team responsible |
| Review Date | When this risk was last reviewed |
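The columns above map naturally onto a simple record type. Here is a minimal sketch as a Python dataclass; the field names are illustrative, not part of any standard, and the example values come from the pre-filled table below.

```python
from dataclasses import dataclass

@dataclass
class RiskEntry:
    risk_id: str        # e.g. "R-001"
    use_case: str       # the AI system or application
    category: str       # data, reliability, compliance, bias, or security
    severity: int       # 1-5 impact scale
    likelihood: int     # 1-5 probability scale
    controls: str       # mitigation measures already in place
    residual_risk: str  # Low, Medium, or High (after controls)
    owner: str          # responsible person or team
    review_date: str    # ISO date of the last review

# The first pre-filled example, expressed as an entry:
r001 = RiskEntry(
    "R-001", "Customer chatbot", "Reliability", 4, 4,
    "RAG grounding, human escalation, response logging",
    "Medium", "Support Lead", "2026-04-01",
)
```

A typed record like this makes it easy to validate scores and export the register to a spreadsheet or GRC tool later.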
Pre-filled examples
Below are five entries for common AI use cases. Use these as starting points and modify them to match your specific deployment.
| Risk ID | AI Use Case | Risk Category | Severity | Likelihood | Current Controls | Residual Risk | Owner | Review Date |
|---|---|---|---|---|---|---|---|---|
| R-001 | Customer chatbot | Reliability | 4 | 4 | RAG grounding, human escalation, response logging | Medium | Support Lead | 2026-04-01 |
| R-002 | Resume screening | Bias | 5 | 3 | Demographic parity testing, human review for final decisions | High | HR Director | 2026-04-01 |
| R-003 | Content moderation | Data | 3 | 3 | PII redaction pipeline, access controls, audit logging | Medium | Trust & Safety | 2026-04-01 |
| R-004 | Medical triage assistant | Compliance | 5 | 2 | Clinician-in-the-loop, FDA compliance review, output disclaimers | High | Clinical Lead | 2026-04-01 |
| R-005 | Fraud detection | Security | 4 | 3 | Model input validation, adversarial testing, alert thresholds | Medium | Security Team | 2026-04-01 |
R-001: Customer chatbot (hallucination and data leakage)
The chatbot may generate inaccurate product information, fabricate policies, or leak customer data included in the conversation context. Severity is high (4) because incorrect advice can cause financial harm and erode customer trust. Likelihood is high (4) because hallucination is inherent in language models, especially for edge-case questions.
R-002: Resume screening (bias amplification)
The screening tool may disproportionately filter out qualified candidates from protected groups. Severity is critical (5) because employment discrimination carries legal liability, reputational damage, and direct harm to individuals. Likelihood is moderate (3) if the vendor has conducted bias testing, though the test coverage may not match your applicant population.
R-003: Content moderation (data exposure)
User-submitted content processed by the AI model may contain personal information that gets stored in model logs or exposed through API responses. Severity is moderate (3) because the data is already semi-public (user-generated content), but PII within it requires protection.
R-004: Medical triage assistant (regulatory compliance)
An AI triage tool that provides clinical recommendations may cross the line into a regulated medical device. Severity is critical (5) because incorrect triage can delay treatment. Likelihood is low (2) if a clinician reviews every recommendation, but the system creates compliance obligations regardless.
R-005: Fraud detection (adversarial manipulation)
Bad actors may probe the fraud model to learn its detection thresholds and craft transactions that evade detection. Severity is high (4) because successful evasion leads to direct financial loss. Likelihood is moderate (3) because adversarial probing of fraud models is a well-documented pattern in financial services.
How to score severity and likelihood
Use a consistent 1-5 scale across your organization. Here is a reference framework:
Severity scale
| Score | Label | Description |
|---|---|---|
| 1 | Negligible | Minor inconvenience, no financial or legal impact |
| 2 | Low | Limited impact, easily corrected, no regulatory consequence |
| 3 | Moderate | Noticeable business impact, potential customer complaints, minor compliance issue |
| 4 | High | Significant financial loss, regulatory scrutiny, reputational damage |
| 5 | Critical | Severe harm to individuals, major legal liability, regulatory enforcement action |
Likelihood scale
| Score | Label | Description |
|---|---|---|
| 1 | Rare | Could happen in exceptional circumstances only |
| 2 | Unlikely | Not expected but possible under specific conditions |
| 3 | Possible | Has happened in similar deployments or is a known pattern |
| 4 | Likely | Will probably occur within the review period |
| 5 | Almost certain | Expected to occur regularly without additional controls |
Risk matrix
Multiply severity by likelihood to get a risk score, then map the score to a risk level. Note that severity and likelihood are scored before giving credit for controls, so an entry's residual risk in the register may sit below the mapped level when strong controls are in place (R-001 maps to Critical on raw score but carries Medium residual risk because of its grounding and escalation controls):
| Risk Score | Residual Risk | Action Required |
|---|---|---|
| 1-4 | Low | Monitor at standard review intervals |
| 5-9 | Medium | Implement additional controls, review quarterly |
| 10-15 | High | Escalate to leadership, implement controls before next review |
| 16-25 | Critical | Pause deployment until controls are implemented |
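The matrix above reduces to a small function. This is a minimal sketch; the function name is an assumption, and the thresholds simply mirror the table.

```python
def risk_level(severity: int, likelihood: int) -> str:
    """Map 1-5 severity and 1-5 likelihood to a risk level per the matrix."""
    if not (1 <= severity <= 5 and 1 <= likelihood <= 5):
        raise ValueError("severity and likelihood must be between 1 and 5")
    score = severity * likelihood
    if score <= 4:
        return "Low"
    if score <= 9:
        return "Medium"
    if score <= 15:
        return "High"
    return "Critical"
```

For example, R-002 (severity 5, likelihood 3) scores 15 and maps to High, matching its register entry; remember the output reflects inherent risk before credit for controls.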
Review cadence
Different risk levels require different review frequencies:
| Residual Risk | Review Frequency | Trigger for Ad-Hoc Review |
|---|---|---|
| Low | Every six months | System update, incident, or regulatory change |
| Medium | Quarterly | Model version change, new data source, user complaint |
| High | Monthly | Any change to the AI system or its operating context |
| Critical | Weekly until downgraded | Immediate review after any incident |
At each review, update the register with: any new risks identified, changes to severity or likelihood scores, new or modified controls, and reassigned ownership if teams have changed.
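The cadence table can drive review scheduling directly. A minimal sketch, assuming approximate day counts for "six months", "quarterly", and "monthly"; pick exact conventions for your own register.

```python
from datetime import date, timedelta

# Approximate intervals derived from the cadence table (assumed day counts).
REVIEW_INTERVAL_DAYS = {
    "Low": 182,       # every six months
    "Medium": 91,     # quarterly
    "High": 30,       # monthly
    "Critical": 7,    # weekly until downgraded
}

def next_review(last_review: date, residual_risk: str) -> date:
    """Return the next scheduled review date for a register entry."""
    return last_review + timedelta(days=REVIEW_INTERVAL_DAYS[residual_risk])
```

Ad-hoc triggers (incidents, model version changes, regulatory changes) should still override the scheduled date.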
How to start
- List every AI system your organization uses or is considering. Include vendor tools, internal models, and AI features embedded in other software.
- For each system, identify at least three risks using the categories: data, reliability, compliance, bias, and security.
- Score each risk using the severity and likelihood scales above.
- Document existing controls. Be specific: "we review 10% of AI decisions" is better than "human oversight."
- Assign an owner to every risk. Unowned risks do not get managed.
- Set a review date based on the residual risk level.
- Store the register where the team can access it. A shared spreadsheet works for small teams. Larger organizations may need a GRC (governance, risk, compliance) tool.
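For teams starting with a shared spreadsheet, the steps above can be bootstrapped with a few lines of Python using the standard `csv` module. The filename and starter row are illustrative.

```python
import csv

COLUMNS = [
    "Risk ID", "AI Use Case", "Risk Category", "Severity", "Likelihood",
    "Current Controls", "Residual Risk", "Owner", "Review Date",
]

# Seed row taken from the pre-filled examples; replace with the risks
# identified for your own systems.
rows = [
    ["R-001", "Customer chatbot", "Reliability", 4, 4,
     "RAG grounding, human escalation, response logging",
     "Medium", "Support Lead", "2026-04-01"],
]

with open("ai_risk_register.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(COLUMNS)
    writer.writerows(rows)
```

The resulting CSV opens directly in any spreadsheet tool, which keeps the register accessible while the team is small.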
Sources
- [1] NIST AI Risk Management Framework (AI RMF 1.0) (primary source)
- [2] ISO/IEC 42001:2023, AI Management System (primary source)
- [3] EU AI Act, Regulation (EU) 2024/1689 (legal source)
- [4] OECD AI Principles (independent review)