This diagram breaks down a single COMPEL lifecycle stage into its constituent elements: defined inputs that trigger the stage, key activities performed during execution, outputs and deliverables produced, and quality gate criteria that must be satisfied before advancing. The visual flow from inputs through activities to outputs shows how each stage transforms organizational capability, while gate criteria ensure governance rigor is maintained throughout the transformation journey without sacrificing delivery momentum.
Stage 3 of 6
Model
Classify AI models and systems by risk, define human validation rules and explainability requirements, and establish the control framework that governs AI behavior, including autonomous agents and foundation models. Operationally, the stage produces validated system classifications, explainability specifications, control-to-risk traceability, agent autonomy classifications, foundation-model selection scorecards, model cards, and a fine-tuning governance policy for every registered AI system.
Strategic Objective
Classify AI models and systems by risk, define human validation rules and explainability requirements, and establish the control framework that governs AI behavior.
Operational Objective
Produce validated system classifications, explainability specifications, control requirement documents, and agent autonomy classifications for every registered AI system.
Inputs
- from organize: Target Operating Model
- from organize: Governance Structure and Committee Charters
- from organize: Capability Roadmap
- Data Estate Inventory
- Model Registry Standards
- ML Platform Reference Architecture
Activities (19)
- AI System Registry design and population
- Policy framework development
- Risk assessment framework creation
- Vendor and third-party AI evaluation
- Decision flow documentation
- Bias testing framework design
- Red teaming protocol design
- Human-AI collaboration modeling
- Incident response procedure design
- Foundation model selection governance
- Provider vs deployer obligation mapping
- Model card requirements and verification
- Fine-tuning governance policies
- Training data governance requirements
- Model lifecycle management (versioning, deprecation, replacement)
- Agent interaction policy and trust boundary design
- A2A governance rules and autonomy tier assignments
- Vendor risk scoring model and AI-BOM template design
- Model provenance requirements definition
Quality Gate — Gate M
- Design documents approved
- Risk framework defined
- AI system registry populated
Outputs (15)
- Comprehensive AI system inventory
- Responsible AI policy library
- Risk assessment templates and rubrics
- Vendor risk assessment criteria
- Decision documentation standards
- Human-AI Collaboration Blueprints
- Data Readiness Reports per AI system
- Decision Log Templates
- Foundation Model Selection Criteria scorecard
- Provider-Deployer obligation mapping
- Model Card templates (EU AI Act Article 53 compliant)
- Fine-Tuning Governance Policy
- Model Lifecycle Management Plan
- Agent interaction policies and trust boundary specifications
- Vendor risk scoring model and AI-BOM template specification
Handoffs
- → Produce: Validated model designs
- → Produce: Data contracts
- → Produce: Evaluation criteria
- → Produce: Policy framework and risk rubrics
- → Produce: AI system registry
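The AI system registry handed off to Produce can be sketched as a minimal record store. A sketch in Python, assuming illustrative field names (`system_id`, `risk_tier`, and so on) rather than any COMPEL-mandated schema:

```python
from dataclasses import dataclass, field

# A minimal, illustrative AI system registry entry. Field names are
# assumptions for this sketch, not a COMPEL-mandated schema.
@dataclass
class AISystemEntry:
    system_id: str
    name: str
    owner: str
    risk_tier: str            # e.g. "minimal", "limited", "high"
    is_autonomous_agent: bool = False
    foundation_models: list[str] = field(default_factory=list)

# Populating this store is what Gate M's "AI system registry populated"
# criterion checks before handoff to Produce.
registry: dict[str, AISystemEntry] = {}

def register(entry: AISystemEntry) -> None:
    """Add a system to the registry, rejecting duplicate identifiers."""
    if entry.system_id in registry:
        raise ValueError(f"duplicate system_id: {entry.system_id}")
    registry[entry.system_id] = entry

register(AISystemEntry("sys-001", "Claims Triage Assistant", "claims-ops",
                       risk_tier="high", foundation_models=["gpt-4o"]))
```

Keeping the registry keyed by a stable identifier makes later stages (risk rollups, gate checks) straightforward lookups.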
What Are the Inputs for the Model Stage?
External inputs (3)
Data Estate Inventory
A current inventory of data sources, lineage, and classification. Model uses this to define data contracts, training data governance, and AI-BOM provenance requirements.
DAMA-DMBOK · DCAM · GDPR Article 30 Records of Processing
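A Data Estate Inventory row can be checked against the contract fields Model needs before drafting data contracts. A minimal sketch, assuming a hypothetical four-field contract (`source_system`, `lineage`, `classification`, `steward`):

```python
# Illustrative data-contract readiness check over a Data Estate Inventory
# row. The required-field set is an assumption for this sketch, not a
# COMPEL-prescribed contract.
REQUIRED_CONTRACT_FIELDS = {"source_system", "lineage", "classification", "steward"}

def contract_gaps(inventory_row: dict) -> set[str]:
    """Return the contract fields this inventory row is still missing."""
    return REQUIRED_CONTRACT_FIELDS - inventory_row.keys()

row = {
    "source_system": "crm",
    "lineage": "crm -> lake.customers",
    "classification": "PII",
}
gaps = contract_gaps(row)  # the 'steward' field has not been assigned yet
```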
Model Registry Standards
The organization's standards for cataloging, versioning, and documenting machine learning models. Model uses these to design AI System Registry and Model Card requirements that integrate with existing MLOps tooling.
MLflow Model Registry · Google Model Cards · EU AI Act Article 53
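Model-card verification can be automated as a completeness check against the organization's required sections. A sketch, assuming an illustrative section list rather than the organization's actual Article 53 mapping:

```python
# Illustrative model-card completeness check. The section list is an
# assumption for this sketch; the real required sections come from the
# organization's EU AI Act Article 53 mapping.
REQUIRED_SECTIONS = [
    "intended_use",
    "training_data_summary",
    "limitations",
    "evaluation_results",
]

def missing_sections(model_card: dict) -> list[str]:
    """Return required sections that are absent or empty, in order."""
    return [s for s in REQUIRED_SECTIONS if not model_card.get(s)]

card = {"intended_use": "internal support triage", "limitations": "English only"}
gaps = missing_sections(card)
```

A check like this can run in CI against the registry so incomplete cards are caught before Gate M review.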
ML Platform Reference Architecture
The organization's reference architecture for machine learning and generative AI platforms. Model uses this to ground policy controls and risk rubrics in the platform reality engineers actually deploy on.
TOGAF Phase C (Application & Data Architecture)AWS/GCP/Azure ML reference architectures
Handoff inputs from prior stages (3)
Target Operating Model
From Organize — The CoE structure, federation strategy, and operating model defined in Organize. Model uses this to assign policy ownership, scope the AI system registry, and align frameworks to who will run them.
COMPEL Stage — Organize
Governance Structure and Committee Charters
From Organize — The committees, escalation paths, and decision rights stood up in Organize. Model uses this so policies and risk frameworks have a real approving body and a defined path for exceptions.
COMPEL Stage — Organize
Capability Roadmap
From Organize — The phased build-out of AI capabilities planned during Organize. Model uses the roadmap to sequence policy creation, registry population, and risk framework rollouts so they land just-in-time.
COMPEL Stage — Organize
What Activities Occur During the Model Stage?
- → AI System Registry design and population
- → Policy framework development
- → Risk assessment framework creation
- → Vendor and third-party AI evaluation
- → Decision flow documentation
- → Bias testing framework design
- → Red teaming protocol design
- → Human-AI collaboration modeling
- → Incident response procedure design
- → Foundation model selection governance
- → Provider vs deployer obligation mapping
- → Model card requirements and verification
- → Fine-tuning governance policies
- → Training data governance requirements
- → Model lifecycle management (versioning, deprecation, replacement)
- → Agent interaction policy and trust boundary design
- → A2A governance rules and autonomy tier assignments
- → Vendor risk scoring model and AI-BOM template design
- → Model provenance requirements definition
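The autonomy-tier assignment activity above can be sketched as a simple decision rule. The tier names and the two decision factors are assumptions for illustration, not COMPEL-defined levels:

```python
# Illustrative autonomy-tier assignment for agentic systems. Tier names
# and decision factors are assumptions for this sketch.
def autonomy_tier(can_act_without_approval: bool,
                  can_call_other_agents: bool) -> str:
    """Assign an autonomy tier from two governance-relevant properties."""
    if not can_act_without_approval:
        return "supervised"        # a human approves every action
    if can_call_other_agents:
        return "autonomous-a2a"    # agent-to-agent delegation allowed
    return "autonomous-single"     # acts alone within its trust boundary

tier = autonomy_tier(can_act_without_approval=True, can_call_other_agents=True)
```

Encoding the rule makes tier assignments reproducible and auditable, which is what the A2A governance rules depend on.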
What Are the Outputs of the Model Stage?
- ✓ Comprehensive AI system inventory
- ✓ Responsible AI policy library
- ✓ Risk assessment templates and rubrics
- ✓ Vendor risk assessment criteria
- ✓ Decision documentation standards
- ✓ Human-AI Collaboration Blueprints
- ✓ Data Readiness Reports per AI system
- ✓ Decision Log Templates
- ✓ Foundation Model Selection Criteria scorecard
- ✓ Provider-Deployer obligation mapping
- ✓ Model Card templates (EU AI Act Article 53 compliant)
- ✓ Fine-Tuning Governance Policy
- ✓ Model Lifecycle Management Plan
- ✓ Agent interaction policies and trust boundary specifications
- ✓ Vendor risk scoring model and AI-BOM template specification
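The vendor risk scoring model among these outputs can be sketched as a weighted rubric. The factors and weights below are illustrative assumptions, not a prescribed COMPEL rubric:

```python
# Illustrative weighted vendor risk score. Factors, weights, and the
# 1-5 rating scale are assumptions for this sketch.
WEIGHTS = {
    "data_handling": 0.4,       # how vendor stores and processes data
    "model_transparency": 0.3,  # availability of model cards, evals
    "incident_history": 0.3,    # past AI-related incidents
}

def vendor_risk_score(factors: dict[str, int]) -> float:
    """Combine factor ratings (1 = low risk, 5 = high risk) into one score."""
    return round(sum(WEIGHTS[k] * factors[k] for k in WEIGHTS), 2)

score = vendor_risk_score(
    {"data_handling": 4, "model_transparency": 3, "incident_history": 2}
)
```

The same factor ratings can feed the AI-BOM template, so each supply-chain component carries its score alongside its provenance record.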
Key Questions
- ? What policies do we need for responsible AI?
- ? How should we classify and register AI systems?
- ? What risk frameworks apply to our context?
- ? How do we handle third-party AI in our governance model?
- ? How do we govern foundation model selection and fine-tuning?
- ? What trust boundaries and A2A governance rules apply to our agentic systems?
- ? What provenance and AI-BOM standards do we require for supply chain components?
What Are the Gate Criteria for Model?
- ⚠ All registered AI systems classified by risk tier
- ⚠ Human validation rules defined for high-risk systems
- ⚠ Explainability requirements documented per system and audience
- ⚠ Control requirements matrix complete with evidence specifications
- ⚠ Agent autonomy levels classified for all autonomous systems
- ⚠ Gate M review passed with design documents approved
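Several of these criteria lend themselves to an automated pre-check over the AI system registry before the Gate M review itself. A sketch, assuming hypothetical field names (`risk_tier`, `human_validation_rules`):

```python
# Illustrative Gate M pre-check: every registered system must carry a
# risk tier, and high-risk systems must have human validation rules.
# Field names are assumptions for this sketch.
def gate_m_findings(systems: list[dict]) -> list[str]:
    """Return one finding per system that would block Gate M."""
    findings = []
    for s in systems:
        if not s.get("risk_tier"):
            findings.append(f"{s['id']}: missing risk tier")
        elif s["risk_tier"] == "high" and not s.get("human_validation_rules"):
            findings.append(f"{s['id']}: high-risk system lacks human validation rules")
    return findings

systems = [
    {"id": "sys-001", "risk_tier": "high", "human_validation_rules": "dual review"},
    {"id": "sys-002", "risk_tier": "high"},
]
findings = gate_m_findings(systems)
```

An empty findings list does not replace the gate review; it only confirms the mechanical criteria before the approving body meets.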
Related Articles (565)
Articles from the Body of Knowledge that are tagged to the Model stage or are lifecycle-wide and apply here.
- M1.1 — The AI Transformation Imperative
- M1.1 — What Data Readiness Is (and What It Is Not)
- M1.1 — The Enterprise AI Reference Architecture
- M1.1 — The LLM Risk Surface
- M1.1 — Defining AI Transformation vs. AI Adoption
- M1.1 — Data Quality Dimensions Extended for AI
- M1.1 — Model Selection Decision Framework
- M1.1 — Prompt Injection and Jailbreak Mitigation
- M1.1 — The Enterprise AI Maturity Spectrum
- M1.1 — Data Governance and Data Contracts
- M1.1 — Prompt Architecture: Templates, Versioning, Injection Defense
- M1.1 — Hallucination, Grounding, and Output Integrity
- M1.1 — Introduction to the COMPEL Framework
- M1.1 — Data Lineage, Provenance, and Documentation
- M1.1 — Retrieval-Augmented Generation: When, Why, How Much
- M1.1 — Guardrails and Content Safety Architecture
- M1.1 — The Four Pillars of AI Transformation
- M1.1 — Labeling Strategy and Annotation Governance
- M1.1 — Chunking and Embedding Strategy
- M1.1 — Evaluation, Red-Teaming, and Monitoring
- M1.1 — AI Transformation Anti-Patterns
- M1.1 — Feature Stores and Vector Stores as Governance Artifacts
- M1.1 — Vector Stores: Selection, Hybrid Retrieval, and Reranking
- M1.1 — Regulatory Obligations and Incident Response
Which Knowledge Domains Apply to Model?
- AI Use Case Management — 321 articles
- AI Project Delivery — 312 articles
- AI Governance Structure — 142 articles
- AI Leadership and Sponsorship — 141 articles
- AI Strategy and Alignment — 130 articles
- Change Management Capability — 121 articles
- AI Literacy and Culture — 121 articles
- Regulatory Compliance — 120 articles