This article presents the design of a four-tier program, defines the controls applicable at each tier, anchors the program to current standards, and explains how tier assignment is determined, refreshed, and enforced across the organization. It is the operational synthesis of the prior fourteen articles.
Why Tiering Is the Cornerstone
Three failure modes recur in AI vendor risk programs that lack tiering.
The first is uniform-controls overload. Programs that apply the same diligence to every vendor produce a backlog that procurement and risk cannot clear. Business units route around the program; the program covers a small share of actual AI usage; coverage gaps appear precisely where the organization claims to be governed.
The second is uniform-controls underreach. Programs that apply the same lightweight diligence to every vendor cover everything but evaluate nothing meaningfully. A simple questionnaire applied to a high-risk EU AI Act system is theatre; the same questionnaire applied to a low-stakes drafting tool is appropriate. Without tiering, one side of that pairing always receives the wrong control depth: heavy uniform controls overburden the drafting tool, and light uniform controls underprotect the high-risk system.
The third is business-team disengagement. When the program does not differentiate, business teams treat all controls as bureaucratic friction. Where differentiation is visible — minimal controls for low-risk acquisitions, substantial controls for high-risk ones — business teams understand the rationale and engage with it.
The European Union (EU) AI Act, accessible at https://artificialintelligenceact.eu/, codifies tiering at the regulatory level: prohibited, high-risk, limited-risk, and minimal-risk categories carry correspondingly different obligations. The U.S. National Institute of Standards and Technology (NIST) AI Risk Management Framework (AI RMF) at https://www.nist.gov/itl/ai-risk-management-framework treats risk-based prioritisation as the foundation of the GOVERN, MAP, MEASURE, and MANAGE functions. The management-system controls of the International Organization for Standardization / International Electrotechnical Commission (ISO/IEC) 42001:2023 standard, at https://www.iso.org/standard/81230.html, likewise assume that controls are calibrated to risk.
The Four Tiers
A pragmatic tiered program defines four tiers, each with explicit triggers, required controls, and approval authority.
Minimal Tier
Internal-use, low-stakes, low-data-sensitivity AI tools. Examples: drafting assistants used by individual employees on non-sensitive content; productivity AI features bundled into approved Software as a Service (SaaS) tools; experimental tools used in sandboxed environments without customer or regulated data.
Required controls: confirmation that the tool is on an approved-product list or below an exception threshold, basic acceptable-use guidance, and lightweight self-attestation. Cloud Security Alliance guidance on low-risk SaaS AI features, at https://cloudsecurityalliance.org/, anchors the practice.
Standard Tier
Internal or limited customer-facing AI used in routine business processes; non-decision-making support roles; no special-category data; no regulatory categorisation as high-risk.
Required controls: a slimmed-down version of the eight diligence domains (Article 3 of this module) covering legal, security, data handling, and operational continuity; standard contract clauses on data use, sub-processors, and termination (Article 4); inclusion in the AI Bill of Materials (Article 6); periodic monitoring of vendor status pages (Article 10); inclusion in the general incident-response playbook (Article 14).
Enhanced Tier
Customer-facing AI, AI informing material decisions, AI processing special-category data, AI categorised as high-risk under the EU AI Act, and AI used in regulated processes (financial services, healthcare, employment, education, critical infrastructure).
Required controls: full eight-domain diligence with documented evidence; full twelve-clause-family contract treatment; AI-BOM with provenance entries (Articles 6, 7); pre-production red teaming (Article 9); continuous behavioural monitoring (Article 10); cross-border-transfer assessment and sovereignty posture documentation (Article 12); tier-specific incident-response playbook with notification web (Article 14); annual reassessment.
Critical Tier
AI on which the operation of the business or the safety of customers, employees, or the public depends. Foundation models that anchor multiple production systems. Vendors where insolvency or strategic withdrawal would materially harm the organization.
Required controls: all enhanced-tier controls, plus board-level visibility, multi-vendor architecture or documented exit plan (Article 11), participation in industry information sharing, joint vendor incident exercises, and senior-executive sponsor engagement with the vendor relationship. The U.S. National Institute of Standards and Technology (NIST) Special Publication (SP) 800-161 Revision 1 at https://csrc.nist.gov/pubs/sp/800/161/r1/final treats critical suppliers as a distinct category requiring enhanced supply-chain controls; a tiered AI vendor program applies the same logic to its critical tier.
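Taken together, the four tier definitions form a control matrix. A minimal sketch of how that matrix might be encoded for inventory tooling follows; the control labels are illustrative shorthand for the controls named above, not identifiers from any published schema.

```python
# Illustrative tier-to-controls matrix for the four tiers defined above.
# Control labels are shorthand for the controls in this article, not
# identifiers from any published standard.

MINIMAL = [
    "approved-product-list or exception-threshold check",
    "acceptable-use guidance",
    "lightweight self-attestation",
]

STANDARD = [
    "slimmed eight-domain diligence (Article 3)",
    "standard contract clauses (Article 4)",
    "AI-BOM inclusion (Article 6)",
    "status-page monitoring (Article 10)",
    "general incident-response playbook (Article 14)",
]

ENHANCED = [
    "full eight-domain diligence with documented evidence",
    "twelve-clause-family contract treatment",
    "AI-BOM with provenance entries (Articles 6, 7)",
    "pre-production red teaming (Article 9)",
    "continuous behavioural monitoring (Article 10)",
    "cross-border-transfer and sovereignty assessment (Article 12)",
    "tier-specific incident-response playbook (Article 14)",
    "annual reassessment",
]

# Critical tier: all enhanced-tier controls plus the board- and
# continuity-level additions.
CRITICAL = ENHANCED + [
    "board-level visibility",
    "multi-vendor architecture or documented exit plan (Article 11)",
    "industry information sharing",
    "joint vendor incident exercises",
    "senior-executive sponsor engagement",
]

CONTROL_MATRIX = {
    "minimal": MINIMAL,
    "standard": STANDARD,
    "enhanced": ENHANCED,
    "critical": CRITICAL,
}
```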
Tier Assignment Rules
Tier assignment is a procurement-time decision (Article 13) made against a written rubric, not a judgement call by the requesting team. The rubric typically combines four inputs:
- Use case categorisation under the EU AI Act and other applicable regulatory regimes — high-risk categorisation forces enhanced tier or above.
- Data sensitivity — special-category personal data, financial data, health data, biometric data, or trade secrets force enhanced tier or above.
- Decision impact — material decisions about individuals (credit, employment, health, legal status) force enhanced tier or above.
- Operational dependency — material harm from vendor failure forces critical tier.
Where multiple inputs apply, the highest-tier rule prevails. Tier assignment is reassessed annually and on material change (model upgrade, new use case, regulatory development).
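A minimal sketch of how the rubric and the highest-tier-prevails rule might be encoded follows. The input flags mirror the four rubric inputs; the flag names, the boolean simplification, and the routine-business flag that separates standard from minimal tier are illustrative assumptions.

```python
# Illustrative tier-assignment rubric. Flag names and the boolean
# treatment are assumptions for illustration, not a published rubric.
from dataclasses import dataclass

TIER_ORDER = ["minimal", "standard", "enhanced", "critical"]

@dataclass
class RubricInputs:
    high_risk_regulatory_category: bool    # e.g. EU AI Act high-risk use case
    sensitive_data: bool                   # special-category, health, biometric, trade secrets
    material_individual_decisions: bool    # credit, employment, health, legal status
    material_operational_dependency: bool  # vendor failure would materially harm
    routine_business_process: bool         # assumed flag separating standard from minimal

def assign_tier(inputs: RubricInputs) -> str:
    """Apply each rubric rule; where multiple fire, the highest tier prevails."""
    candidates = ["minimal"]
    if inputs.routine_business_process:
        candidates.append("standard")
    if (inputs.high_risk_regulatory_category
            or inputs.sensitive_data
            or inputs.material_individual_decisions):
        candidates.append("enhanced")
    if inputs.material_operational_dependency:
        candidates.append("critical")
    return max(candidates, key=TIER_ORDER.index)

# Example: special-category data in a routine process forces enhanced tier.
print(assign_tier(RubricInputs(False, True, False, False, True)))  # -> enhanced
```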
Standards That Anchor the Program
Beyond the references already cited, five additional anchors complete the standards floor for a tiered program. The U.S. Cybersecurity and Infrastructure Security Agency (CISA) Software Bill of Materials program at https://www.cisa.gov/sbom anchors AI-BOM completeness expectations. The Supply-chain Levels for Software Artifacts (SLSA) framework at https://slsa.dev/ anchors build-pipeline integrity expectations. The Software Package Data Exchange (SPDX) standard at https://spdx.dev/ anchors machine-readable component declarations. The Hugging Face Safetensors documentation at https://huggingface.co/docs/safetensors anchors model-weight integrity expectations. The Stanford Foundation Model Transparency Index at https://crfm.stanford.edu/fmti/ provides the comparative-transparency reference that informs vendor selection at higher tiers.
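To make the machine-readable-declaration expectation concrete, a hypothetical AI-BOM entry is sketched below. The field names are loosely modelled on SBOM practice and on the provenance and weight-integrity expectations above; they are assumptions for illustration, not the SPDX schema or any vendor's actual declaration.

```python
# Hypothetical AI-BOM entry with provenance and integrity fields.
# Field names and values are illustrative placeholders, not the SPDX
# schema or any vendor's actual declaration.
import json

ai_bom_entry = {
    "component": "customer-support-assistant",       # placeholder system name
    "component_type": "ai_system",
    "supplier": "ExampleVendor Inc.",                # placeholder vendor
    "model": "example-model",                        # placeholder model
    "model_version": "2.1.0",
    "weights_sha256": "<digest of model weights>",   # weight-integrity check
    "training_data_provenance": "vendor-attested summary on file",
    "upstream_dependencies": ["example-foundation-model"],
    "tier": "enhanced",
    "last_reassessed": "2025-01-15",
}

print(json.dumps(ai_bom_entry, indent=2))
```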
Operating Model
A tiered program is more than a control matrix. It requires an operating model with named roles: an AI vendor risk lead in the second-line risk function, named system owners in the first line, an AI Bill of Materials maintainer, an incident-response coordinator, and an executive sponsor accountable to the board for AI supply-chain posture.
It requires a cadence: weekly intake review for new vendor requests, monthly status reviews of in-flight diligence, quarterly portfolio reviews of approved vendors, annual reassessment of every standard-or-above-tier vendor, and continuous monitoring of critical-tier vendors.
It requires tooling: a vendor inventory system, an AI-BOM repository, a contract-clause tracking system, a monitoring platform (for the workstreams in Article 10), and an incident-response platform that integrates with the broader cybersecurity incident regime.
And it requires reporting: tier-distribution and exception-rate metrics to the executive risk committee, vendor-incident summary to the board, regulatory-compliance posture to the audit committee, and a public-facing AI supply-chain transparency disclosure where the organization’s customers expect it.
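The first two of those metrics fall directly out of the vendor inventory. A minimal sketch, assuming a simple record structure for inventory entries:

```python
# Minimal sketch of the tier-distribution and exception-rate metrics
# reported to the executive risk committee. The inventory record
# structure is an assumption for illustration.
from collections import Counter

inventory = [
    {"vendor": "vendor-a", "tier": "standard", "exception": False},
    {"vendor": "vendor-b", "tier": "enhanced", "exception": True},
    {"vendor": "vendor-c", "tier": "critical", "exception": False},
    {"vendor": "vendor-d", "tier": "minimal",  "exception": False},
]

tier_distribution = Counter(v["tier"] for v in inventory)
exception_rate = sum(v["exception"] for v in inventory) / len(inventory)

print(dict(tier_distribution))                   # {'standard': 1, 'enhanced': 1, ...}
print(f"exception rate: {exception_rate:.0%}")   # 25%
```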
Maturity Indicators
| Maturity | What a tiered AI vendor risk program looks like |
|---|---|
| Foundational (1) | No tiering; AI vendors are treated identically under generic IT vendor management; coverage is partial and inconsistent. |
| Developing (2) | Informal tiering exists; rules are not codified; tier assignment is inconsistent across the organization. |
| Defined (3) | The four tiers are codified; assignment rubric is enforced at procurement; required controls are mapped to each tier; governance roles are named. |
| Advanced (4) | Tier-specific controls operate continuously; reassessment is automated and timely; metrics are reported to executive and board levels; exception rates are managed downward. |
| Transformational (5) | The program contributes to industry vendor-risk frameworks; vendor performance against the program informs strategic supplier selection; the organization is cited as a reference. |
Practical Application
A multinational manufacturer launching an enterprise-wide AI vendor risk program should not start by writing detailed control documents for every conceivable scenario. It should start by codifying the four tiers, the assignment rubric, and the procurement gate. In the first ninety days, every existing AI vendor is classified, the highest-risk vendors receive a focused diligence-and-contracting refresh, and the AI Bill of Materials is populated for the top fifty systems. In the first year, the program reaches steady-state coverage of all standard-and-above-tier vendors, the monitoring workstreams are operational for all enhanced-and-critical-tier vendors, the incident-response playbooks are tabletop-exercised, and the executive risk committee receives a quarterly portfolio report. The program never reaches “complete” — the supply chain mutates continuously — but it reaches a state where the organization can answer, with evidence, the question every regulator, board, and customer is now beginning to ask: how do you know what your AI suppliers are doing on your behalf?
This article closes Module 1.10. The fifteen articles together form a complete, defensible foundation for AI supply-chain and third-party governance — from the supply-chain map and foundation-model assessment of Articles 1 and 2, through the diligence, contracting, open-source, AI-BOM, provenance, hidden-API, red-team, monitoring, architecture, sovereignty, procurement, and incident-response controls of Articles 3 through 14, to the tiered-program operating model presented here. Subsequent modules in the COMPEL Body of Knowledge build on this foundation; the supply-chain assumptions made elsewhere in the framework are the assumptions established in Module 1.10.