
AI Risk Register Template: A Practical Framework for Teams

A ready-to-use AI risk register template with pre-filled examples for five common use cases. Includes scoring methodology, review cadence, and a structured format for tracking AI risks across your organization.

By ThinkTech Research | Published April 15, 2026

You are deploying AI systems in your organization and need a structured way to identify, assess, and track risks. An AI risk register is the standard tool for this job.

What is an AI risk register?

A risk register is a structured document that catalogs known risks for each AI system your organization uses or builds. It records what can go wrong, how likely it is, how severe the impact would be, what controls are in place, and who is responsible for monitoring.

Unlike a one-time risk assessment, a risk register is a living document. It gets reviewed at regular intervals, updated when systems change, and referenced during procurement, incident response, and compliance audits.

Why teams need one

Organizations deploying AI face risks that traditional software risk management does not cover: hallucination, bias amplification, data leakage through model prompts, regulatory non-compliance under the EU AI Act, and vendor lock-in from proprietary model dependencies. A dedicated AI risk register makes these risks visible and trackable.

The NIST AI Risk Management Framework (AI RMF 1.0) recommends organizations "establish and maintain organizational AI risk profiles." ISO 42001 requires documented risk treatment plans. If your organization operates in the EU, the AI Act mandates risk management systems for high-risk AI applications.

The register format

Each row in the register represents one identified risk for one AI use case. Here are the columns and what they capture:

| Column | Purpose |
| --- | --- |
| Risk ID | Unique identifier for tracking |
| AI Use Case | The specific system or application |
| Risk Category | Classification (data, reliability, compliance, bias, security) |
| Severity | Impact if the risk materializes (1-5 scale) |
| Likelihood | Probability of occurrence (1-5 scale) |
| Current Controls | Mitigation measures already in place |
| Residual Risk | Risk remaining after controls (Low, Medium, High) |
| Owner | Person or team responsible |
| Review Date | When this risk was last reviewed |
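
If you maintain the register in code rather than a spreadsheet, each row maps naturally to a small record type. The sketch below is a minimal Python illustration of that mapping; the class and field names are ours, not part of any standard.

```python
from dataclasses import dataclass

@dataclass
class RiskEntry:
    """One row of the AI risk register (names are illustrative)."""
    risk_id: str        # unique identifier, e.g. "R-001"
    use_case: str       # the specific system or application
    category: str       # data, reliability, compliance, bias, or security
    severity: int       # impact if the risk materializes (1-5)
    likelihood: int     # probability of occurrence (1-5)
    controls: str       # mitigation measures already in place
    residual_risk: str  # Low, Medium, or High after controls
    owner: str          # person or team responsible
    review_date: str    # ISO date of the last review, e.g. "2026-04-01"
```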

Pre-filled examples

Below are five entries for common AI use cases. Use these as starting points and modify them to match your specific deployment.

| Risk ID | AI Use Case | Risk Category | Severity | Likelihood | Current Controls | Residual Risk | Owner | Review Date |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| R-001 | Customer chatbot | Reliability | 4 | 4 | RAG grounding, human escalation, response logging | Medium | Support Lead | 2026-04-01 |
| R-002 | Resume screening | Bias | 5 | 3 | Demographic parity testing, human review for final decisions | High | HR Director | 2026-04-01 |
| R-003 | Content moderation | Data | 3 | 3 | PII redaction pipeline, access controls, audit logging | Medium | Trust & Safety | 2026-04-01 |
| R-004 | Medical triage assistant | Compliance | 5 | 2 | Clinician-in-the-loop, FDA compliance review, output disclaimers | High | Clinical Lead | 2026-04-01 |
| R-005 | Fraud detection | Security | 4 | 3 | Model input validation, adversarial testing, alert thresholds | Medium | Security Team | 2026-04-01 |
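
Continuing the illustrative RiskEntry sketch from earlier, the first row would be captured like this:

```python
r_001 = RiskEntry(
    risk_id="R-001",
    use_case="Customer chatbot",
    category="Reliability",
    severity=4,
    likelihood=4,
    controls="RAG grounding, human escalation, response logging",
    residual_risk="Medium",
    owner="Support Lead",
    review_date="2026-04-01",
)
```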

R-001: Customer chatbot (hallucination and data leakage)

The chatbot may generate inaccurate product information, fabricate policies, or leak customer data included in the conversation context. Severity is high (4) because incorrect advice can cause financial harm and erode customer trust. Likelihood is high (4) because hallucination is inherent in language models, especially for edge-case questions.

R-002: Resume screening (bias amplification)

The screening tool may disproportionately filter out qualified candidates from protected groups. Severity is critical (5) because employment discrimination carries legal liability, reputational damage, and direct harm to individuals. Likelihood is moderate (3) if the vendor has conducted bias testing, though the test coverage may not match your applicant population.

R-003: Content moderation (data exposure)

User-submitted content processed by the AI model may contain personal information that gets stored in model logs or exposed through API responses. Severity is moderate (3) because the data is already semi-public (user-generated content), but PII within it requires protection. Likelihood is moderate (3) because PII slipping past redaction into logs is a known pattern in similar deployments.

R-004: Medical triage assistant (regulatory compliance)

An AI triage tool that provides clinical recommendations may cross the line into a regulated medical device. Severity is critical (5) because incorrect triage can delay treatment. Likelihood is low (2) if a clinician reviews every recommendation, but the system creates compliance obligations regardless.

R-005: Fraud detection (adversarial manipulation)

Bad actors may probe the fraud model to learn its detection thresholds and craft transactions that evade detection. Severity is high (4) because successful evasion leads to direct financial loss. Likelihood is moderate (3) because adversarial probing of fraud models is a known pattern in financial services.

How to score severity and likelihood

Use a consistent 1-5 scale across your organization. Here is a reference framework:

Severity scale

| Score | Label | Description |
| --- | --- | --- |
| 1 | Negligible | Minor inconvenience, no financial or legal impact |
| 2 | Low | Limited impact, easily corrected, no regulatory consequence |
| 3 | Moderate | Noticeable business impact, potential customer complaints, minor compliance issue |
| 4 | High | Significant financial loss, regulatory scrutiny, reputational damage |
| 5 | Critical | Severe harm to individuals, major legal liability, regulatory enforcement action |

Likelihood scale

| Score | Label | Description |
| --- | --- | --- |
| 1 | Rare | Could happen in exceptional circumstances only |
| 2 | Unlikely | Not expected but possible under specific conditions |
| 3 | Possible | Has happened in similar deployments or is a known pattern |
| 4 | Likely | Will probably occur within the review period |
| 5 | Almost certain | Expected to occur regularly without additional controls |
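
If you track the register in code, the scale labels can live alongside the scores so reports stay consistent. A small sketch mirroring the two tables above:

```python
# Scale labels taken directly from the severity and likelihood tables.
SEVERITY_LABELS = {
    1: "Negligible",
    2: "Low",
    3: "Moderate",
    4: "High",
    5: "Critical",
}
LIKELIHOOD_LABELS = {
    1: "Rare",
    2: "Unlikely",
    3: "Possible",
    4: "Likely",
    5: "Almost certain",
}
```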

Risk matrix

Multiply severity by likelihood to get a risk score, then map it to a risk level. The severity and likelihood scores describe the risk before controls, while the Residual Risk column reflects the effect of current controls; that is why an entry like R-001 (4 × 4 = 16) can carry a Medium residual rating despite a raw score in the Critical band:

| Risk Score | Residual Risk | Action Required |
| --- | --- | --- |
| 1-4 | Low | Monitor at standard review intervals |
| 5-9 | Medium | Implement additional controls, review quarterly |
| 10-15 | High | Escalate to leadership, implement controls before next review |
| 16-25 | Critical | Pause deployment until controls are implemented |
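
Because the mapping is mechanical, it is easy to encode. The sketch below is a minimal Python version of the matrix above; the function name is ours, not part of any framework:

```python
def risk_level(severity: int, likelihood: int) -> str:
    """Multiply severity by likelihood and map the score per the matrix."""
    if not (1 <= severity <= 5 and 1 <= likelihood <= 5):
        raise ValueError("severity and likelihood must be on the 1-5 scale")
    score = severity * likelihood
    if score <= 4:
        return "Low"
    if score <= 9:
        return "Medium"
    if score <= 15:
        return "High"
    return "Critical"
```

For example, risk_level(3, 3) returns "Medium", matching R-003's residual rating, while R-001's raw score of 16 maps to Critical, which its current controls bring down to a Medium residual.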

Review cadence

Different risk levels require different review frequencies:

| Residual Risk | Review Frequency | Trigger for Ad-Hoc Review |
| --- | --- | --- |
| Low | Every six months | System update, incident, or regulatory change |
| Medium | Quarterly | Model version change, new data source, user complaint |
| High | Monthly | Any change to the AI system or its operating context |
| Critical | Weekly until downgraded | Immediate review after any incident |

At each review, update the register with: any new risks identified, changes to severity or likelihood scores, new or modified controls, and reassigned ownership if teams have changed.
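
A scheduled review date can be derived mechanically from the residual risk level. A minimal sketch following the cadence table, with illustrative names and day counts approximating the stated frequencies:

```python
from datetime import date, timedelta

# Review intervals from the cadence table above
# (Critical entries review weekly until downgraded).
REVIEW_INTERVAL = {
    "Low": timedelta(days=182),     # every six months
    "Medium": timedelta(days=91),   # quarterly
    "High": timedelta(days=30),     # monthly
    "Critical": timedelta(days=7),  # weekly until downgraded
}

def next_review(last_review: date, residual: str) -> date:
    """Return the next scheduled review date for a residual risk level."""
    return last_review + REVIEW_INTERVAL[residual]
```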

How to start

  1. List every AI system your organization uses or is considering. Include vendor tools, internal models, and AI features embedded in other software.
  2. For each system, identify at least three risks using the categories: data, reliability, compliance, bias, and security.
  3. Score each risk using the severity and likelihood scales above.
  4. Document existing controls. Be specific: "we review 10% of AI decisions" is better than "human oversight."
  5. Assign an owner to every risk. Unowned risks do not get managed.
  6. Set a review date based on the residual risk level.
  7. Store the register where the team can access it. A shared spreadsheet works for small teams (a starter CSV is sketched below). Larger organizations may need a GRC (governance, risk, compliance) tool.
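
For teams starting with a spreadsheet, the column layout can be generated as a CSV template. A minimal sketch, with the filename chosen by us and the example row drawn from the register above:

```python
import csv

COLUMNS = [
    "Risk ID", "AI Use Case", "Risk Category", "Severity", "Likelihood",
    "Current Controls", "Residual Risk", "Owner", "Review Date",
]

# Write a register template with the header row and one seeded example.
with open("ai_risk_register.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(COLUMNS)
    writer.writerow([
        "R-001", "Customer chatbot", "Reliability", 4, 4,
        "RAG grounding, human escalation, response logging",
        "Medium", "Support Lead", "2026-04-01",
    ])
```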
