This diagram breaks down a single COMPEL lifecycle stage into its constituent elements: defined inputs that trigger the stage, key activities performed during execution, outputs and deliverables produced, and quality gate criteria that must be satisfied before advancing. The visual flow from inputs through activities to outputs shows how each stage transforms organizational capability, while gate criteria ensure governance rigor is maintained throughout the transformation journey without sacrificing delivery momentum.
Stage 4 of 6
Produce
Execute workflow redesign, validate deployment readiness through quality gates, and activate monitoring, training, and control systems for production AI operations. Operationally, implement redesigned workflows with embedded AI, configure telemetry and kill-switches, activate all specified controls, complete training and adoption, and initiate regulatory compliance evidence collection across EU AI Act, NIST AI RMF, and ISO 42001.
Strategic Objective
Execute workflow redesign, validate deployment readiness through quality gates, and activate monitoring, training, and control systems for production AI operations.
Operational Objective
Implement redesigned workflows with embedded AI capabilities, configure telemetry and monitoring, complete training and adoption activities, and activate all specified controls.
-
Inputs
- from model: Validated Model Designs
- from model: Data Contracts
- from model: Evaluation Criteria
- Engineering Coding Standards
- MLOps Platform
- Deployment Runbooks
-
Activities (15)
- Controls library implementation
- Compliance framework alignment
- Policy library deployment
- Workflow builder configuration
- Evidence collection process setup
- Stakeholder validation of artifacts
- Bias testing execution
- Red teaming execution
- Monitoring infrastructure build
- Training delivery and certification
- MLOps pipeline integration
- Agent deployment gates and kill-switch configuration
- Agent monitoring infrastructure setup
- Vendor onboarding gate execution and AI-BOM validation
- Supply chain monitoring deployment
-
Quality Gate — Gate P
- Controls implemented
- Evidence collection active
- Policies published
- All applicable regulatory requirements identified
- EU AI Act risk classification confirmed
- Applicable US state requirements documented
- Regulatory compliance evidence collection initiated
-
Outputs (10)
- Operational control library
- Framework compliance dashboards
- Published and attested policies
- Automated transformation workflows
- Evidence repository with mapping
- Audit evidence pack
- Monitoring dashboard suite
- Workflow Configuration Documentation
- Agent deployment gate records and production readiness checklists
- Vendor onboarding records and validated AI-BOMs
-
Handoffs
- → Evaluate: Deployed models
- → Evaluate: Monitoring instrumentation
- → Evaluate: Operational controls and evidence
What Are the Inputs for the Produce Stage?
External inputs (3)
-
Engineering Coding Standards
The organization's standards for code quality, security review, and source control. Produce embeds these into MLOps pipelines so AI controls inherit existing engineering rigor.
OWASP SAMM, NIST SSDF, Internal engineering standards
-
MLOps Platform
The deployed platform for model training, deployment, and monitoring. Produce configures controls, evidence capture, and kill-switches against this concrete platform.
Google MLOps Maturity Model, CD Foundation MLOps SIG
-
Deployment Runbooks
The standard operating procedures for promoting workloads to production. Produce uses runbooks to define agent deployment gates, kill-switch procedures, and incident playbooks.
Google SRE Workbook, ITIL 4 Service Transition
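The external inputs above feed directly into pipeline automation: coding standards and runbook requirements become blocking pre-deployment checks rather than documents. A minimal sketch of that idea, where the check names and lambda stubs are illustrative assumptions standing in for real linters, scanners, and review-status lookups:

```python
# Sketch: engineering standards wired into an MLOps pipeline as blocking
# pre-deployment checks. Check names are hypothetical; a real pipeline
# would call actual tooling (static analysis, security scanners, etc.).
from typing import Callable

def run_predeploy_checks(checks: dict[str, Callable[[], bool]]) -> dict[str, bool]:
    """Run each named check; deployment proceeds only if all pass."""
    return {name: check() for name, check in checks.items()}

# Hypothetical checks standing in for real tooling.
results = run_predeploy_checks({
    "lint_clean": lambda: True,               # e.g. static analysis passed
    "security_review_signed_off": lambda: True,
    "runbook_present": lambda: False,         # deployment runbook not yet linked
})
deployable = all(results.values())
# deployable is False until the runbook check passes.
```

The point of the sketch is that a failed standard blocks promotion mechanically, so AI workloads inherit the same rigor as any other engineering artifact.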
Handoff inputs from prior stages (3)
-
Validated Model Designs
From Model: The approved model architectures, data contracts, and risk rubrics. Produce uses these as the build specification for controls, MLOps pipelines, and evidence collection.
COMPEL Stage — Model
-
Data Contracts
From Model: The data interface and quality contracts defined in Model. Produce uses contracts to wire automated quality gates and lineage capture into the MLOps pipeline.
COMPEL Stage — Model
-
Evaluation Criteria
From Model: The acceptance, fairness, and performance thresholds defined in Model. Produce builds bias testing, red teaming, and gate checks against these criteria so deployment decisions are objective.
COMPEL Stage — Model
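Because Model hands Produce concrete thresholds, a gate decision can be mechanical rather than judgment-based. A minimal sketch of such an objective evaluation gate, where the metric names and threshold values are illustrative assumptions, not COMPEL-prescribed fields:

```python
# Sketch: deployment gate built from Model-stage evaluation criteria.
# criteria maps metric name -> ("min" | "max", threshold).

def evaluate_gate(metrics: dict, criteria: dict) -> tuple[bool, list[str]]:
    """Compare observed metrics against thresholds; return (passed, failures)."""
    failures = []
    for name, (direction, threshold) in criteria.items():
        observed = metrics.get(name)
        if observed is None:
            failures.append(f"{name}: no measurement recorded")
        elif direction == "min" and observed < threshold:
            failures.append(f"{name}: {observed} below minimum {threshold}")
        elif direction == "max" and observed > threshold:
            failures.append(f"{name}: {observed} above maximum {threshold}")
    return (not failures, failures)

# Example: accuracy must meet a floor, a fairness gap must stay under a ceiling.
criteria = {"accuracy": ("min", 0.90), "parity_gap": ("max", 0.05)}
passed, reasons = evaluate_gate({"accuracy": 0.93, "parity_gap": 0.08}, criteria)
# passed is False; reasons explains the parity_gap breach.
```

Recording the failure reasons alongside the verdict also produces the kind of evidence artifact Gate P expects.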
What Activities Occur During the Produce Stage?
- → Controls library implementation
- → Compliance framework alignment
- → Policy library deployment
- → Workflow builder configuration
- → Evidence collection process setup
- → Stakeholder validation of artifacts
- → Bias testing execution
- → Red teaming execution
- → Monitoring infrastructure build
- → Training delivery and certification
- → MLOps pipeline integration
- → Agent deployment gates and kill-switch configuration
- → Agent monitoring infrastructure setup
- → Vendor onboarding gate execution and AI-BOM validation
- → Supply chain monitoring deployment
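Several of the agent-facing activities above reduce to one invariant: every agent action must consult a kill-switch that operators can throw without a redeploy. A minimal sketch, where the in-process flag stands in for a real control plane such as a feature-flag service or config store (an assumption, not a COMPEL-specified mechanism):

```python
# Sketch: process-wide kill-switch consulted before every agent action.
import threading

class KillSwitch:
    """Stop flag that halts agent activity once thrown."""

    def __init__(self) -> None:
        self._halted = threading.Event()
        self._reason = ""

    def halt(self, reason: str) -> None:
        self._reason = reason
        self._halted.set()

    def check(self) -> None:
        if self._halted.is_set():
            raise RuntimeError(f"agent halted: {self._reason}")

switch = KillSwitch()

def agent_step(task: str) -> str:
    switch.check()  # refuse to act once the switch is thrown
    return f"completed {task}"

agent_step("summarize ticket")          # runs normally
switch.halt("anomalous tool-call rate")
# agent_step("next task") would now raise RuntimeError
```

Verifying this path end to end before go-live is what "kill-switch verified" means in the key questions below.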
What Are the Outputs of the Produce Stage?
- ✓ Operational control library
- ✓ Framework compliance dashboards
- ✓ Published and attested policies
- ✓ Automated transformation workflows
- ✓ Evidence repository with mapping
- ✓ Audit evidence pack
- ✓ Monitoring dashboard suite
- ✓ Workflow Configuration Documentation
- ✓ Agent deployment gate records and production readiness checklists
- ✓ Vendor onboarding records and validated AI-BOMs
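The vendor onboarding outputs imply a concrete validation step: each submitted AI-BOM is checked for completeness before a component is admitted. A minimal sketch, where the required field set is an illustrative assumption loosely modeled on SBOM practice plus AI-specific provenance, not a COMPEL-prescribed schema:

```python
# Sketch: AI-BOM completeness check run at the vendor onboarding gate.
# Required fields are illustrative, not a prescribed schema.
REQUIRED_FIELDS = {"supplier", "component", "version", "license",
                   "training_data_provenance"}

def validate_aibom(components: list[dict]) -> list[str]:
    """Return findings; an empty list means the AI-BOM passes."""
    findings = []
    for i, comp in enumerate(components):
        missing = REQUIRED_FIELDS - comp.keys()
        if missing:
            name = comp.get("component", f"entry {i}")
            findings.append(f"{name}: missing {sorted(missing)}")
    return findings

bom = [
    {"supplier": "Acme AI", "component": "acme-embedder", "version": "2.1",
     "license": "proprietary", "training_data_provenance": "vendor-attested"},
    {"supplier": "Acme AI", "component": "acme-reranker", "version": "0.9"},
]
findings = validate_aibom(bom)
# findings flags acme-reranker for the missing license and provenance fields.
```

The findings list doubles as an onboarding record, feeding the evidence repository alongside the validated AI-BOM itself.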
Key Questions
- ? Are our controls effectively mitigating identified risks?
- ? How do we track compliance across frameworks?
- ? What evidence do we need for audit readiness?
- ? Are all AI systems mapped to their controlling policies?
- ? Are all agents tested, monitored, and kill-switch verified before production?
- ? Are all vendor AI components onboarded with validated AI-BOMs?
What Are the Gate Criteria for Produce?
- ⚠ All workflows redesigned and implemented per specifications
- ⚠ Deployment readiness gate passed
- ⚠ Telemetry and monitoring fully configured and tested
- ⚠ Training completed for all impacted user groups
- ⚠ All specified controls activated and verified
- ⚠ Evidence collection processes operational
- ⚠ Gate P review passed
Related Articles (550)
Articles from the Body of Knowledge that are tagged to the Produce stage or are lifecycle-wide and apply here.
- M1.1: The AI Transformation Imperative
- M1.1: What Data Readiness Is (and What It Is Not)
- M1.1: The Enterprise AI Reference Architecture
- M1.1: The LLM Risk Surface
- M1.1: Defining AI Transformation vs. AI Adoption
- M1.1: Data Quality Dimensions Extended for AI
- M1.1: Model Selection Decision Framework
- M1.1: Prompt Injection and Jailbreak Mitigation
- M1.1: The Enterprise AI Maturity Spectrum
- M1.1: Data Governance and Data Contracts
- M1.1: Prompt Architecture: Templates, Versioning, Injection Defense
- M1.1: Hallucination, Grounding, and Output Integrity
- M1.1: Introduction to the COMPEL Framework
- M1.1: Data Lineage, Provenance, and Documentation
- M1.1: Retrieval-Augmented Generation: When, Why, How Much
- M1.1: Guardrails and Content Safety Architecture
- M1.1: The Four Pillars of AI Transformation
- M1.1: Labeling Strategy and Annotation Governance
- M1.1: Chunking and Embedding Strategy
- M1.1: Evaluation, Red-Teaming, and Monitoring
- M1.1: AI Transformation Anti-Patterns
- M1.1: Feature Stores and Vector Stores as Governance Artifacts
- M1.1: Vector Stores: Selection, Hybrid Retrieval, and Reranking
- M1.1: Regulatory Obligations and Incident Response
Which Knowledge Domains Apply to Produce?
- AI Project Delivery (325 articles)
- AI Use Case Management (321 articles)
- AI Leadership and Sponsorship (151 articles)
- Change Management Capability (131 articles)
- AI Literacy and Culture (131 articles)
- AI Strategy and Alignment (130 articles)
- AI Governance Structure (104 articles)
- AI/ML Platform and Tooling (102 articles)