The roles
A typical AI sustainability program at scale has six named roles.
The executive sponsor is the C-level executive — typically the Chief Sustainability Officer, the Chief Technology Officer, or the Chief AI Officer — who owns the program’s outcomes and represents the program to the executive committee and the board.
The program lead is the named individual who runs the program day-to-day, owns the roadmap, manages the cross-functional dependencies, and reports to the executive sponsor. The program lead typically has a sustainability background, a technical AI background, or both.
The platform engineering lead is the engineering leader responsible for the measurement layer (Article 2), the integration of carbon-aware scheduling into the MLOps platform (Article 9), and the technical-optimization toolchain (Article 4). This role typically reports into the AI platform organization.
The procurement lead is the procurement-organization leader responsible for the sustainable-procurement discipline (Article 14) — vendor-disclosure questionnaires, contractual standards, certification expectations, vendor scorecards.
The ESG-reporting lead is the corporate-ESG-organization leader responsible for the disclosure discipline (Article 12) — annual sustainability report, CDP submission, customer questionnaires, regulatory filings.
The use-case-governance lead is the AI-governance-organization leader responsible for embedding sustainability criteria into use-case selection (Article 13) and into the broader AI governance process. This role typically sits in the AI governance or AI risk function.
The McKinsey State of AI surveys have documented that the most sustainability-mature organizations have these roles named explicitly, with documented accountabilities and clear escalation paths to the executive sponsor.1
The metrics
The program is accountable for a portfolio of metrics organized into four layers.
Operational metrics: per-workload kilowatt-hours, per-workload carbon emissions (location-based and market-based), per-workload water consumption, per-workload operations-per-watt. These are the engineering-team-facing metrics displayed on the internal dashboard.
Program-level metrics: total AI program emissions (Scopes 1, 2, 3); year-on-year emission intensity (per-revenue-unit, per-employee, per-customer); share of AI workloads on renewable-powered infrastructure; share of AI workloads using optimized models; share of AI workloads scheduled with carbon-aware policies. These are the quarterly-review metrics displayed to the program leadership.
Disclosure metrics: completeness of disclosure (percentage of in-scope items disclosed); quality of disclosure (CDP score, FMTI score, third-party-assurance level); frequency of disclosure (cadence of internal and external reporting). These are the disclosure-team-facing metrics.
Outcome metrics: progress against the SBTi-validated targets (or equivalent); progress against the RE100 commitment (or equivalent); customer-facing sustainability claims attributable to the program (e.g., share of customer queries served on renewable-powered infrastructure). These are the executive-team and board-facing metrics.
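The operational layer of this portfolio is arithmetic over measured energy. As a minimal sketch, assuming hypothetical per-workload measurements and illustrative emission factors (none of the names or numbers below come from the source — real programs substitute provider- and grid-specific factors):

```python
from dataclasses import dataclass

@dataclass
class WorkloadMeasurement:
    """Raw per-workload measurements from the platform's metering layer (hypothetical)."""
    energy_kwh: float     # metered energy for the workload
    operations: float     # useful work done, e.g. tokens served
    runtime_hours: float  # wall-clock duration of the workload
    water_liters: float   # cooling-water draw attributed to the workload

# Illustrative emission factors (kg CO2e per kWh) -- placeholders, not real grid data.
LOCATION_BASED_FACTOR = 0.38  # regional grid average
MARKET_BASED_FACTOR = 0.05    # after renewable-energy contracts

def operational_metrics(m: WorkloadMeasurement) -> dict:
    """Derive the dashboard-facing operational metrics from raw measurements."""
    avg_watts = m.energy_kwh * 1000 / m.runtime_hours  # average power draw
    return {
        "energy_kwh": m.energy_kwh,
        "co2e_kg_location_based": m.energy_kwh * LOCATION_BASED_FACTOR,
        "co2e_kg_market_based": m.energy_kwh * MARKET_BASED_FACTOR,
        "water_liters": m.water_liters,
        "operations_per_watt": m.operations / avg_watts,
    }

metrics = operational_metrics(
    WorkloadMeasurement(energy_kwh=120.0, operations=3.6e9,
                        runtime_hours=24.0, water_liters=210.0)
)
```

In practice the `energy_kwh` input would come from instrumentation such as the CodeCarbon library or cloud-provider sustainability APIs rather than being entered by hand; the dual location-based and market-based figures mirror the GHG Protocol Scope 2 dual-reporting convention.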
The targets
The targets that the metrics are measured against are typically expressed across three time horizons.
Annual targets: the year-on-year improvements the program commits to in the current fiscal year. Examples: a 15% year-on-year reduction in per-revenue-unit AI emissions; a 10-percentage-point increase in the share of AI workloads on renewable-powered infrastructure; a 5% reduction in the per-token inference energy of the largest production model.
Medium-term targets: the trajectory commitments the program makes over a 3-5 year horizon. Examples: 100% of AI workloads on 24/7-matched renewable infrastructure by 2030; 50% of foundation-model inference served by distilled models by 2028; SBTi-aligned trajectory toward absolute emission reduction.
Long-term targets: the strategic commitments the program makes over a 5-10 year horizon. Examples: net-zero AI operations by 2040 (aligned with the Climate Pledge); industry leadership on the FMTI compute-layer scoring; contribution to industry-wide AI sustainability standards.
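The annual-target check reduces to a simple intensity comparison across fiscal years. A minimal sketch, assuming hypothetical emission and revenue figures (the numbers are illustrative, not from the source; only the 15% target rate echoes the example above):

```python
def intensity(emissions_t_co2e: float, revenue_musd: float) -> float:
    """Emission intensity in tonnes CO2e per million USD of revenue."""
    return emissions_t_co2e / revenue_musd

def yoy_reduction(prev: float, curr: float) -> float:
    """Fractional year-on-year reduction in an intensity metric."""
    return (prev - curr) / prev

# Hypothetical figures for two consecutive fiscal years.
prev_intensity = intensity(emissions_t_co2e=12_000, revenue_musd=800)  # prior year
curr_intensity = intensity(emissions_t_co2e=11_000, revenue_musd=880)  # current year

reduction = yoy_reduction(prev_intensity, curr_intensity)
meets_annual_target = reduction >= 0.15  # the 15% per-revenue-unit example target
```

Note that an intensity target can be met even while absolute emissions fall more slowly than revenue grows, which is why the medium- and long-term layers also carry absolute (SBTi-aligned, net-zero) commitments.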
The targets are aligned with the corporate-level sustainability commitments — the SBTi targets, the RE100 commitment, the Climate Pledge participation — and are validated against the Paris Agreement’s 1.5°C trajectory.
The governance
The governance apparatus that holds the program accountable is structured at four levels.
Board-level oversight: the board’s audit-and-risk committee or an equivalent body reviews the program’s annual sustainability disclosure, approves material methodology changes, and oversees the program’s strategic trajectory.
Executive-level accountability: the executive sponsor reports quarterly to the executive committee on the program’s progress against targets, identifies the actions required to close any gap, and secures the investment required.
Management-level operational responsibility: the program lead runs the monthly operating cadence, manages the cross-functional dependencies, and escalates issues to the executive sponsor as needed.
Engineering-level day-to-day execution: the platform engineering lead, the procurement lead, the ESG-reporting lead, and the use-case-governance lead each run their respective workstreams in accordance with the program roadmap.
The staged maturity roadmap
The COMPEL D19 maturity rubric defines five levels of maturity, and the staged roadmap operationalizes the climb from any starting level to the next.2
Level 1 to Level 2 (Foundational to Developing): bootstrap the measurement layer. The first six months. Pull cloud-provider sustainability data; instrument the largest training runs; produce the first carbon-footprint estimate; mention sustainability in the AI governance policy. The program is not yet a program but a project.
Level 2 to Level 3 (Developing to Defined): institutionalize continuous measurement. The next twelve months. Extend per-workload tracking to all production AI systems; integrate measurement into the MLOps platform; embed sustainability criteria into model-selection checklists; include performance-per-watt in model cards; calculate carbon footprint with provider-specific emission factors. The program now exists as a named, resourced construct.
Level 3 to Level 4 (Defined to Advanced): standardize optimization and integrate disclosure. The next twelve to twenty-four months. Make optimization (distillation, pruning, quantization) standard practice; institutionalize carbon-aware scheduling; set organization-wide AI sustainability targets; include AI metrics in ESG reports; meet GPAI energy-reporting requirements where applicable.
Level 4 to Level 5 (Advanced to Transformational): publish, contribute, and lead. The next twenty-four to thirty-six months. Publish a transparent AI sustainability report with methodology; achieve high scores on external benchmarks (CDP, FMTI); contribute to industry standards for AI environmental reporting; deploy AI to address external sustainability challenges; treat AI sustainability as a competitive advantage.
Summary
An AI sustainability program is the named, resourced, governed, and time-bounded construct that turns the preceding fourteen articles’ practices into an institutional discipline. The roles — executive sponsor, program lead, platform engineering lead, procurement lead, ESG-reporting lead, use-case-governance lead — own each layer of the program. The metrics are organized into operational, program-level, disclosure, and outcome layers. The targets span annual, medium-term, and long-term horizons aligned with the corporate-level commitments and the Paris Agreement trajectory. The governance is structured at board, executive, management, and engineering levels. The staged maturity roadmap takes the organization from Level 1 (no measurement) to Level 5 (industry leadership) over a four-to-five-year arc.
The COMPEL Body of Knowledge Module 1.9 closes here. The foundational practitioner who has worked through these fifteen articles has the vocabulary, the methodology, and the program-design framework to launch and operate an AI sustainability program at any starting level of maturity, and to defend the program’s design and outcomes to the regulators, investors, customers, and internal stakeholders who increasingly expect it.
The Greenhouse Gas Protocol provides the technical accounting frame.3 The European Union Corporate Sustainability Reporting Directive (CSRD) and the European Sustainability Reporting Standards (ESRS) provide the disclosure structure for EU-incorporated organizations.4 The European Union AI Act Article 95 voluntary code of conduct on sustainability provides the AI-specific regulatory framing.5 The Stanford Foundation Model Transparency Index (FMTI) provides the external benchmarking.6 The Green Software Foundation principles provide the engineering-practice framing.7 The International Energy Agency Electricity 2024 report provides the macro context.8 The Organisation for Economic Co-operation and Development (OECD) AI Principles provide the high-level ethical framing within which the entire program operates.9 The Hugging Face AI Energy Score and the CodeCarbon library provide the practical instrumentation.10,11
© FlowRidge.io — COMPEL AI Transformation Methodology. All rights reserved.
Footnotes
1. McKinsey & Company, “The state of AI.” https://www.mckinsey.com/capabilities/quantumblack/our-insights/the-state-of-ai — accessed 2026-04-26. ↩
2. COMPEL Domain D19 maturity rubric, Levels 1 through 5. See shared/data/compelDomains.ts. ↩
3. Greenhouse Gas Protocol. https://ghgprotocol.org/ — accessed 2026-04-26. ↩
4. Directive (EU) 2022/2464 on Corporate Sustainability Reporting. https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A32022L2464 — accessed 2026-04-26. ↩
5. Regulation (EU) 2024/1689 (EU AI Act), Article 95. https://artificialintelligenceact.eu/ — accessed 2026-04-26. ↩
6. Stanford CRFM, “Foundation Model Transparency Index.” https://crfm.stanford.edu/fmti/ — accessed 2026-04-26. ↩
7. Green Software Foundation. https://greensoftware.foundation/ — accessed 2026-04-26. ↩
8. International Energy Agency, “Electricity 2024.” https://www.iea.org/reports/electricity-2024 — accessed 2026-04-26. ↩
9. Organisation for Economic Co-operation and Development, “OECD AI Principles.” https://oecd.ai/en/ai-principles — accessed 2026-04-26. ↩
10. Hugging Face, “AI Energy Score Leaderboard.” https://huggingface.co/spaces/AIEnergyScore/Leaderboard — accessed 2026-04-26. ↩
11. CodeCarbon, “Track and reduce CO2 emissions from your computing.” https://codecarbon.io/ — accessed 2026-04-26. ↩