This diagram breaks down a single COMPEL lifecycle stage into its constituent elements: defined inputs that trigger the stage, key activities performed during execution, outputs and deliverables produced, and quality gate criteria that must be satisfied before advancing. The visual flow from inputs through activities to outputs shows how each stage transforms organizational capability, while gate criteria ensure governance rigor is maintained throughout the transformation journey without sacrificing delivery momentum.
Stage 6 of 6
Learn
Extract actionable insights from evaluation findings to update policies, capture reusable patterns, recalibrate benchmarks, and make informed scaling, retirement, or redesign decisions. Operationally, produce a policy update register, pattern library entries, recalibrated benchmark targets, scaling decision records, and a continuous-improvement backlog that feeds the next Calibrate cycle.
Strategic Objective
Extract actionable insights from evaluation findings to update policies, capture reusable patterns, update benchmarks, and make informed scaling, retirement, or redesign decisions.
Operational Objective
Produce updated policy documents, pattern library entries, updated benchmark targets, scaling recommendations, and retirement/redesign decisions for the next COMPEL cycle.
Inputs
- from evaluate: Evaluation Reports
- from evaluate: Incident Logs
- from evaluate: Drift Findings
- from evaluate: Audit Findings and Gate Decisions
- Retrospective Cadence
- Learning Loop Forum
- Knowledge Management Practices
Activities (9)
- Metrics dashboard monitoring
- Incident management and post-incident review
- Augmentation ROI measurement
- Continuous improvement cycles
- Change detection and response
- Knowledge base curation
- ROI measurement and reporting
- Calibrate cycle feed preparation
- Knowledge management updates
Quality Gate — Gate L
- Metrics analyzed
- Improvement plan created
- Knowledge base updated
Outputs (9)
- KPI/KRI trend reports
- Incident reports with lessons learned
- ROI analysis and value reports
- Improvement initiative tracker
- Drift and change detection alerts
- Model retirement lessons captured
- AI Performance Dashboard
- Continuous Improvement Register
- Next-Cycle Calibrate Inputs
Handoffs
- → Calibrate: Improvement recommendations and updated baselines
- → Calibrate: Next-cycle Calibrate inputs
What Are the Inputs for the Learn Stage?
External inputs (3)
Retrospective Cadence
The organization's standard rhythm and format for retrospectives and post-incident reviews. Learn uses this cadence so AI-specific learning loops integrate with existing agile and operations practices.
References: Scrum Guide (Sprint Retrospective); Google SRE Postmortem Culture
Learning Loop Forum
A standing cross-functional forum where AI lessons learned are shared and acted on. Learn uses this forum to socialize improvement recommendations and to keep the COMPEL cycle visibly continuous rather than annual.
References: SAFe Inspect & Adapt; Communities of Practice
Knowledge Management Practices
The organization's standards for capturing, indexing, and reusing institutional knowledge. Learn uses these so AI lessons land in systems people already consult, not in orphaned documents.
References: ISO 30401 (Knowledge Management); PMBOK 7 Lessons Learned Repository
Handoff inputs from prior stages (4)
Evaluation Reports
From Evaluate. The gate review decisions, conformity assessments, and audit findings produced in Evaluate. Learn uses these to extract patterns and feed measurable improvements into the next Calibrate cycle.
COMPEL Stage — Evaluate
Incident Logs
From Evaluate. The catalog of AI incidents, near-misses, and operational events captured during Evaluate. Learn analyzes these to find systemic root causes rather than blame individual operators.
COMPEL Stage — Evaluate
Drift Findings
From Evaluate. Data drift, concept drift, and behavior drift signals raised during Evaluate. Learn uses drift evidence to drive retraining decisions, model retirement, and updated risk thresholds.
COMPEL Stage — Evaluate
Audit Findings and Gate Decisions
From Evaluate. The remediation backlog generated by audits and gate reviews during Evaluate. Learn uses these to prioritize continuous improvement initiatives and update knowledge base content.
COMPEL Stage — Evaluate
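The Drift Findings input above feeds retraining and retirement decisions. One common way such drift evidence is quantified (an illustrative choice here, not a COMPEL-mandated metric) is the Population Stability Index, with the conventional 0.1 / 0.25 rule-of-thumb thresholds; a minimal sketch:

```python
import math
from typing import List

def psi(expected: List[float], actual: List[float], bins: int = 10) -> float:
    """Population Stability Index between a baseline and a live sample."""
    lo = min(min(expected), min(actual))
    hi = max(max(expected), max(actual))
    width = (hi - lo) / bins or 1.0          # guard against a zero range

    def bucket_fracs(values: List[float]) -> List[float]:
        counts = [0] * bins
        for v in values:
            i = min(int((v - lo) / width), bins - 1)
            counts[i] += 1
        # floor at a tiny epsilon so empty buckets don't break the log term
        return [max(c / len(values), 1e-6) for c in counts]

    e, a = bucket_fracs(expected), bucket_fracs(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

baseline = [0.1 * i for i in range(100)]         # illustrative baseline scores
live     = [0.1 * i + 3.0 for i in range(100)]   # same scores, shifted upward

score = psi(baseline, live)
# Conventional PSI reading: <0.1 stable, 0.1–0.25 watch, >0.25 investigate
verdict = ("investigate/retrain" if score > 0.25
           else "watch" if score > 0.1 else "stable")
```

In a Learn retrospective, a PSI breach like this would typically become an entry in the improvement backlog and an updated risk threshold handed to the next Calibrate cycle.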
What Activities Occur During the Learn Stage?
- → Metrics dashboard monitoring
- → Incident management and post-incident review
- → Augmentation ROI measurement
- → Continuous improvement cycles
- → Change detection and response
- → Knowledge base curation
- → ROI measurement and reporting
- → Calibrate cycle feed preparation
- → Knowledge management updates
What Are the Outputs of the Learn Stage?
- ✓ KPI/KRI trend reports
- ✓ Incident reports with lessons learned
- ✓ ROI analysis and value reports
- ✓ Improvement initiative tracker
- ✓ Drift and change detection alerts
- ✓ Model retirement lessons captured
- ✓ AI Performance Dashboard
- ✓ Continuous Improvement Register
- ✓ Next-Cycle Calibrate Inputs
Key Questions
- ? What is the ROI of our responsible AI investment?
- ? What patterns emerge from incidents?
- ? How can we improve transformation effectiveness?
- ? Are our AI risk indicators trending in the right direction?
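The first key question above asks for the ROI of responsible AI investment. As a minimal sketch of the arithmetic behind an ROI analysis and value report (all figures below are hypothetical placeholders, not COMPEL benchmarks):

```python
def ai_roi(benefits: float, costs: float) -> float:
    """Classic ROI: net benefit over cost, as a percentage."""
    return (benefits - costs) / costs * 100.0

# Hypothetical annual figures for one AI use case
benefits = 480_000.0   # e.g. hours saved plus incidents avoided, monetized
costs    = 300_000.0   # platform, people, and governance overhead

roi_pct = ai_roi(benefits, costs)   # 60.0 — a positive case for scaling
```

The harder work in practice is monetizing the benefit side (avoided incidents, audit findings closed, rework prevented); the formula itself is the easy part.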
What Are the Gate Criteria for Learn?
- ⚠ Policy updates drafted and queued for approval
- ⚠ Reusable patterns captured in pattern library
- ⚠ Benchmark targets updated based on evaluation data
- ⚠ Scaling decisions documented with business case
- ⚠ Retirement or redesign decisions recorded for underperforming systems
- ⚠ Continuous improvement backlog updated and prioritized
- ⚠ Gate L review passed — cycle handoff to next Calibrate
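The gate criteria above can be treated as a simple all-or-nothing checklist. The sketch below is an assumption about how a team might automate the readiness check: the criterion keys paraphrase the list above, and the pass rule (every criterion must hold before the Calibrate handoff) is inferred, not COMPEL-specified.

```python
# Hypothetical Gate L readiness check; keys paraphrase the gate criteria list.
GATE_L_CRITERIA = [
    "policy_updates_drafted",
    "patterns_captured_in_library",
    "benchmarks_updated",
    "scaling_decisions_documented",
    "retirement_redesign_recorded",
    "improvement_backlog_prioritized",
]

def gate_l_passes(status: dict) -> bool:
    """Gate L passes only when every criterion is satisfied."""
    return all(status.get(c, False) for c in GATE_L_CRITERIA)

status = {c: True for c in GATE_L_CRITERIA}
status["benchmarks_updated"] = False        # one open item...
ready = gate_l_passes(status)               # ...blocks the cycle handoff
```

A single open criterion keeping the gate closed mirrors the intent of the final criterion: the handoff to the next Calibrate cycle only happens once the Gate L review passes.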
Related Articles (527)
Articles from the Body of Knowledge that are tagged to the Learn stage or are lifecycle-wide and apply here.
- M1.1 — The AI Transformation Imperative
- M1.1 — What Data Readiness Is (and What It Is Not)
- M1.1 — The Enterprise AI Reference Architecture
- M1.1 — The LLM Risk Surface
- M1.1 — Defining AI Transformation vs. AI Adoption
- M1.1 — Data Quality Dimensions Extended for AI
- M1.1 — Model Selection Decision Framework
- M1.1 — Prompt Injection and Jailbreak Mitigation
- M1.1 — The Enterprise AI Maturity Spectrum
- M1.1 — Data Governance and Data Contracts
- M1.1 — Prompt Architecture: Templates, Versioning, Injection Defense
- M1.1 — Hallucination, Grounding, and Output Integrity
- M1.1 — Introduction to the COMPEL Framework
- M1.1 — Data Lineage, Provenance, and Documentation
- M1.1 — Retrieval-Augmented Generation: When, Why, How Much
- M1.1 — Guardrails and Content Safety Architecture
- M1.1 — The Four Pillars of AI Transformation
- M1.1 — Labeling Strategy and Annotation Governance
- M1.1 — Chunking and Embedding Strategy
- M1.1 — Evaluation, Red-Teaming, and Monitoring
- M1.1 — AI Transformation Anti-Patterns
- M1.1 — Feature Stores and Vector Stores as Governance Artifacts
- M1.1 — Vector Stores: Selection, Hybrid Retrieval, and Reranking
- M1.1 — Regulatory Obligations and Incident Response
Which Knowledge Domains Apply to Learn?
- AI Use Case Management — 321 articles
- AI Project Delivery — 312 articles
- AI Strategy and Alignment — 140 articles
- AI Leadership and Sponsorship — 140 articles
- AI Governance Structure — 120 articles
- Change Management Capability — 110 articles
- AI Literacy and Culture — 110 articles
- Regulatory Compliance — 109 articles