AITF M1.21-Art03 v1.0 Reviewed 2026-04-06 Open Access

Exception Management for AI Policies


5 min read Article 3 of 4

This article distinguishes exception management from risk acceptance, describes the structural elements of a credible exception workflow, and warns against the patterns that turn the exception register into a parallel policy regime.

Exception Management vs Risk Acceptance

The two workflows are related but distinct. Risk acceptance addresses a known residual risk in a system that the policy permits. Exception management addresses a deviation from a policy itself — a request to do something the policy does not currently allow, or to skip a control the policy requires.

Confusing the two leads to two failure patterns. Treating exceptions as risk acceptances inflates the acceptance register and obscures the real policy gaps. Treating risk acceptances as exceptions implies the policy itself is broken and creates pressure to amend it prematurely. The COMPEL methodology keeps them as separate workflows feeding a shared governance dashboard.

Common Triggers for AI Policy Exceptions

Exceptions cluster around predictable points in the AI lifecycle. The most common categories include:

  • Speed-to-market pressure: a business unit needs to launch a Generative AI feature in three weeks and the standard 12-week ethics review timeline cannot accommodate the deadline.
  • Vendor constraints: a Software as a Service (SaaS) provider does not expose the model card or training data documentation that internal policy requires.
  • Foundation-model opacity: a third-party Large Language Model (LLM) provides no fine-grained explainability output that the explainability policy requires.
  • Sandbox and proof-of-concept work: experimental systems that the policy subjects to full production-grade review, even though they have not yet reached production.
  • Cross-jurisdictional variation: a system designed to a strict standard for one market is asked to operate in a market with different requirements.

Each trigger should be addressable by a specific exception type with a pre-published evaluation rubric.

Structural Elements of the Workflow

Intake

Exception requests enter the workflow through a single channel — typically a form embedded in the AI governance platform — that captures the policy reference, the requested deviation, the business rationale, the proposed compensating controls, the requested duration, and the named requestor and sponsor. The U.S. National Institute of Standards and Technology AI RMF Playbook at https://airc.nist.gov/AI_RMF_Knowledge_Base/Playbook explicitly recommends structured intake for policy variances.
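The intake fields above can be sketched as a single structured record. This is an illustrative minimal sketch — the class and field names are assumptions, not taken from any particular governance platform.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ExceptionRequest:
    """One exception request captured at intake (field names are illustrative)."""
    policy_ref: str                 # e.g. "AI-POL-007 §4.2" (hypothetical ID)
    requested_deviation: str        # what the policy would otherwise require or forbid
    business_rationale: str
    compensating_controls: list[str]
    requested_duration_days: int
    requestor: str
    sponsor: str
    submitted: date = field(default_factory=date.today)

# Example intake record for the speed-to-market trigger described earlier
req = ExceptionRequest(
    policy_ref="AI-POL-007 §4.2",
    requested_deviation="Compress the standard 12-week ethics review",
    business_rationale="Contractual launch deadline in three weeks",
    compensating_controls=["enhanced monitoring", "human-in-the-loop"],
    requested_duration_days=90,
    requestor="j.doe",
    sponsor="vp.product",
)
```

Capturing intake as one typed record keeps the single-channel principle enforceable: anything missing a sponsor or a duration simply cannot be submitted.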

Triage

A triage step routes requests by category and materiality. Low-impact, short-duration exceptions might follow an expedited path with single-approver authority. High-impact, long-duration exceptions require full committee review.
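The routing rule can be made explicit in a few lines. The thresholds below ("low" impact, a 30-day cutoff) are assumptions for illustration; the article specifies only the two endpoints of the spectrum.

```python
def triage(impact: str, duration_days: int) -> str:
    """Route an exception request to an expedited or full-committee path.

    Thresholds are illustrative, not prescribed by the methodology.
    """
    if impact == "low" and duration_days <= 30:
        return "expedited"   # single-approver authority
    return "committee"       # full committee review

# A short, low-impact request takes the fast path; everything else escalates
print(triage("low", 14))    # expedited
print(triage("high", 14))   # committee
```

Encoding triage as a pure function of category and materiality also makes the routing auditable: the same inputs always produce the same path.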

Evaluation

Evaluators consider four dimensions:

  1. Necessity: is the deviation genuinely required, or could the original policy be met with more effort?
  2. Materiality: what is the worst plausible outcome if the deviation is granted?
  3. Compensating controls: do the proposed controls adequately mitigate the policy gap?
  4. Duration: is the requested duration the minimum necessary?

The Office of Management and Budget Memorandum M-24-10 on Advancing Governance, Innovation, and Risk Management for Agency Use of AI at https://www.whitehouse.gov/wp-content/uploads/2024/03/M-24-10-Advancing-Governance-Innovation-and-Risk-Management-for-Agency-Use-of-Artificial-Intelligence.pdf describes a similar evaluation pattern for federal agency AI exceptions.
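The four evaluation dimensions could be captured as a simple checklist. The pass/fail structure here is an assumption for illustration — in practice each dimension is a judgment call, not a boolean.

```python
from dataclasses import dataclass

@dataclass
class Evaluation:
    """Checklist mirroring the four evaluation dimensions (illustrative)."""
    necessity: bool             # deviation genuinely required?
    materiality_tolerable: bool # worst plausible outcome within tolerance?
    controls_adequate: bool     # compensating controls close the policy gap?
    duration_minimal: bool      # requested duration is the minimum necessary?

    def recommend_approval(self) -> bool:
        # All four dimensions must pass for a positive recommendation
        return all((self.necessity, self.materiality_tolerable,
                    self.controls_adequate, self.duration_minimal))

approve = Evaluation(True, True, True, True).recommend_approval()
reject = Evaluation(True, True, True, False).recommend_approval()
```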

Decision

Decisions should be issued in writing, citing the policy reference, the granted deviation, its conditions and duration, the required compensating controls, and the named approver.

Tracking

Active exceptions populate a register visible to the AI governance committee. The register should be queryable by policy, business unit, sponsor, and expiry date. Patterns in the register often reveal that a policy is unworkable as written.

Retirement

Exceptions leave the register either by expiry or by the underlying condition becoming compliant; an extension is not a rollover — it requires a fresh evaluation. Automatic retirement reminders should fire at 30, 14, and 7 days before expiry.
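The reminder schedule is a straightforward date computation. A minimal sketch, using the 30/14/7-day offsets from the workflow above:

```python
from datetime import date, timedelta

REMINDER_OFFSETS = (30, 14, 7)  # days before expiry, per the workflow above

def reminder_dates(expiry: date) -> list[date]:
    """Return the dates on which retirement reminders should fire."""
    return [expiry - timedelta(days=d) for d in REMINDER_OFFSETS]

# An exception expiring 2026-06-30 fires reminders on
# 2026-05-31, 2026-06-16, and 2026-06-23
schedule = reminder_dates(date(2026, 6, 30))
```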

The Compensating Controls Catalogue

A mature program publishes a catalogue of pre-approved compensating controls that requestors can select from. Common entries include enhanced monitoring, bounded scope, human-in-the-loop, external attestation, and automatic kill-switch.
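The catalogue can double as a validation list at intake, so requestors can only cite pre-approved controls. The control IDs and one-line descriptions below are hypothetical; the five categories come from the text.

```python
# Hypothetical pre-approved catalogue; IDs and descriptions are illustrative
COMPENSATING_CONTROLS = {
    "CC-01": "Enhanced monitoring: elevated logging and periodic output review",
    "CC-02": "Bounded scope: deviation limited to a named system or market",
    "CC-03": "Human-in-the-loop: a person confirms consequential outputs",
    "CC-04": "External attestation: third-party review stands in for missing docs",
    "CC-05": "Automatic kill-switch: shut-off on a predefined metric breach",
}

def validate_controls(selected: list[str]) -> bool:
    """Reject requests that cite controls outside the published catalogue."""
    return all(c in COMPENSATING_CONTROLS for c in selected)
```

Validating against a fixed catalogue keeps evaluators comparing like with like, instead of parsing a new free-text control description in every request.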

The European Union AI Act Article 14 at https://artificialintelligenceact.eu/article/14/ on human oversight provides language that translates well into compensating-control specifications for high-risk systems.

Authority and Independence

Authority should scale with materiality and time. A 30-day low-impact exception might be approved by a single AI governance officer; a 12-month exception affecting a high-risk system requires committee approval. Authority must be independent: the requestor cannot be the approver.

The Bank for International Settlements consultative document on Principles for the Sound Management of Operational Risk at https://www.bis.org/bcbs/publ/d515.htm discusses the importance of independent challenge in exception governance.

The Anti-Pattern: Standing Exceptions

The most dangerous pattern is the standing exception — a deviation that has been renewed so many times it has effectively become policy. Two countermeasures help. First, every exception that has been renewed twice should automatically trigger a policy review. Second, the exception register should expose renewal counts visibly. The U.S. Government Accountability Office report GAO-21-519SP on AI Accountability Framework at https://www.gao.gov/products/gao-21-519sp explicitly highlights persistent waivers as an audit risk indicator.
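Both countermeasures reduce to tracking renewal counts on the register. A minimal sketch — the renewal threshold of two comes from the text; the record shape is assumed:

```python
def needs_policy_review(renewal_count: int) -> bool:
    """An exception renewed twice should trigger review of the policy itself."""
    return renewal_count >= 2

# Assumed register shape: each entry carries a visible renewal count
register = [
    {"id": "EX-101", "policy_ref": "AI-POL-007", "renewals": 3},
    {"id": "EX-102", "policy_ref": "AI-POL-002", "renewals": 0},
]

# Exceptions drifting toward de facto policy, surfaced automatically
flagged = [e["id"] for e in register if needs_policy_review(e["renewals"])]
```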

Aggregation and Insight

Beyond per-case management, the exception register is a source of organisational insight. Frequent exceptions to a particular policy clause indicate that the clause may be unworkable. Frequent requests from a particular business unit indicate a capability gap or training need.

Quarterly exception analytics should be presented to the AI governance committee alongside the heat-map review.
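The aggregation itself is simple once the register is structured. A sketch over an assumed register shape, counting active exceptions per policy clause and per business unit:

```python
from collections import Counter

# Assumed register shape; entries and identifiers are illustrative
register = [
    {"policy_ref": "AI-POL-007 §4.2", "business_unit": "Retail"},
    {"policy_ref": "AI-POL-007 §4.2", "business_unit": "Payments"},
    {"policy_ref": "AI-POL-003 §1.1", "business_unit": "Retail"},
]

# A clause appearing repeatedly may be unworkable as written;
# a unit appearing repeatedly suggests a capability or training gap
by_clause = Counter(e["policy_ref"] for e in register)
by_unit = Counter(e["business_unit"] for e in register)
```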

Integration with Audit

Internal audit should test the exception workflow at least annually. Audit findings frequently surface either weak documentation or shadow exceptions — deviations being run informally without going through the workflow at all.

Looking Forward

The next article in Module 1.21 addresses audit trails for AI decisions — the technical infrastructure that gives exception management, risk acceptance, and the heat map their evidentiary weight.


© FlowRidge.io — COMPEL AI Transformation Methodology. All rights reserved.