The AI Risk Register (v2)

Because Some Risks Don’t Look Like Risks—Until It’s Too Late

image

Coffeehouse Conversations Get Weird

In my favorite coffee shop, I overheard a conversation about whether an AI platform could handle secure credentials. One person asked, "Does it even support that?" Another paused, then said: "Honestly, we haven't tested it."

That’s when it hit me: We’re treating AI like it’s magic. And we’re running it in production.

My friends and I started cataloging the new risk that AI introduces, this is a continuation of that work.

Why We Need a Register

AI introduces invisible complexity—with risks scattered across prompts, data sources, model updates, and downstream usage. Traditional threat models don't catch it all. And governance teams often get looped in after something goes wrong.

That’s why we need a lightweight, living AI Risk Register. Not for compliance theater. For shared visibility. When risks are documented, they become discussable. When they’re discussable, they can be tested, mitigated, and watched.

image

Sample AI Risk Register: Core Categories

Below are four risk categories, each containing real-world concerns. Add to this. Customize it. But most importantly—track your new AI risks.

Prompt-Level & Model-Level Risks

These risks arise from how prompts are crafted, interpreted, and executed—especially when user inputs steer model behavior in unintended directions.

Prompt Injection

  • Issue: Malicious manipulation of inputs to alter behavior.
  • Mitigation: Sanitize inputs. Monitor for anomalous completions.

Data Leakage in Prompts

  • Issue: Exposing PII or internal info via poorly structured prompts.
  • Mitigation: Mask sensitive fields, use redaction filters.

Hallucination

  • Issue: Confident delivery of false or fabricated information.
  • Mitigation: Require source tracebacks, implement human-in-loop review.

Mode Confusion

  • Issue: When a model trained for chat is used for decision-making.
  • Mitigation: Validate use case fit, define allowable outputs.

Process & Output-Level Risks

These reflect how AI-generated content can change across time and context, sometimes in unpredictable ways.

Non-Deterministic Responses

  • Issue: Same input yields different outputs.
  • Mitigation: Log prompts/responses, version prompt templates.

Degraded Reasoning Over Time

  • Issue: Behavior shift after model update.
  • Mitigation: Revalidate on update, baseline before/after comparisons.

Prompt Drift

  • Issue: Users continually tweak prompts to get what they want—sometimes beyond safe bounds.
  • Mitigation: Version prompt scripts. Define thresholds.

Security, Privacy & Legal

These risks center on legal exposure, privacy breaches, and unintended access or retention.

Unauthorized Data Retention

  • Issue: User data held beyond permitted time.
  • Mitigation: Validate vendor policies. Use anonymization layers.

Third-Party API Exposure

  • Issue: Plugging LLMs into insecure plugin ecosystems.
  • Mitigation: Isolate plugins. Monitor network calls.

Copyright Ambiguity

  • Issue: Generated text or images pulled from training data.
  • Mitigation: Add disclaimers. Track source provenance.

Prompt-as-Policy

  • Issue: When prompt logic effectively creates rules (e.g., underwriting, eligibility).
  • Mitigation: Conduct policy reviews with legal/audit.

Organizational & Strategic

These risks affect culture, visibility, and how AI is operationalized inside teams.

Shadow AI

  • Issue: Employees using unvetted tools to boost productivity.
  • Mitigation: Offer safe AI sandboxes. Reward responsible use.

Compliance Blind Spots

  • Issue: AI use not covered in existing audits.
  • Mitigation: Add AI modules to compliance checklists.

Misaligned Trust Curves

  • Issue: Believing AI is smarter—or safer—than it is.
  • Mitigation: Train teams on AI limitations. Monitor reliance.

Loss of Explainability

  • Issue: Decisions made without human-understandable reasoning.
  • Mitigation: Require decision trees or narratives alongside outputs.

Where This Fits in the Big Picture

Many governments and enterprises are building formal AI risk frameworks:

  • NIST’s AI RMF maps risks across functions: Govern, Map, Measure, Manage
  • The UK has explored a national AI Risk Register
  • ISO and IEEE standards include ethical and operational alignment

Your org doesn’t need to match those overnight. But your team can adopt a mini version—today.

It’s a living document, not a compliance artifact.

Let’s Build Better Together

If you already maintain an AI Risk Register, I’d love to hear what’s on it. If you don’t, maybe this helps you start. Either way, let’s treat AI governance as an evolving craft.

What’s missing? What’s changing?

I’ll be sharing follow-up posts on:

  • Scoring and quantifying AI risks
  • Tracking real-world incidents
  • Building AI risk dashboards that evolve with usage

We can’t eliminate all risks. But we can stop being surprised by them.

Date
June 20, 2025
Sections
Types
Article