AITF M1.9-Art05 v1.0 Reviewed 2026-04-06 Open Access
AITF · Foundations

Green Data Center Strategies for AI Workloads


8 min read Article 5 of 15

The International Energy Agency (IEA) Electricity 2024 report projected that data-center electricity consumption — heavily driven by AI — would more than double between 2022 and 2026, making the facility-level efficiency question one of the most consequential sustainability decisions in the entire AI program.1 This article surveys the four facility-level levers that the foundational practitioner should understand.

Lever 1: Power Usage Effectiveness

Power Usage Effectiveness (PUE) is the ratio of total facility energy to IT-equipment energy. A PUE of 1.0 would be a hypothetical perfect facility in which every kilowatt-hour of electricity was delivered to the IT equipment with zero overhead for cooling, power conversion, lighting, and the like. Real-world facilities range from roughly 1.1 (hyperscale, modern, well-engineered) to 2.0 or more (older, on-premise, poorly utilized). The industry-leading hyperscalers consistently report fleet-wide PUE in the range of 1.1 to 1.2 for their newest facilities.

For an AI workload, PUE is a direct multiplier on emissions. A workload that consumes 100 MWh of IT-equipment energy at a PUE-1.5 facility produces 150 MWh of grid demand; the same workload at a PUE-1.2 facility produces 120 MWh — a 20% reduction in net emissions before any inference optimization is applied.
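The multiplier arithmetic above can be sketched in a few lines (the 100 MWh workload and the two PUE values are the example from the text):

```python
def grid_demand_mwh(it_energy_mwh: float, pue: float) -> float:
    """Total grid demand: IT-equipment energy multiplied by facility PUE."""
    return it_energy_mwh * pue

# The example from the text: 100 MWh of IT energy at two facilities.
high = grid_demand_mwh(100, 1.5)       # 150.0 MWh of grid demand
low = grid_demand_mwh(100, 1.2)        # 120.0 MWh of grid demand
reduction = (high - low) / high        # 0.20, i.e. a 20% reduction
```

Because PUE multiplies everything downstream, the same 20% applies to emissions regardless of which workload-level optimizations are layered on top.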

The drivers of low PUE are advanced cooling (hot-aisle/cold-aisle containment, evaporative cooling, liquid cooling for high-density racks), high-efficiency power conversion (high-voltage Direct Current distribution, modern Uninterruptible Power Supply units), and high facility utilization (a half-empty data center has a much worse PUE than a full one because the fixed-overhead share is larger).

The Green Software Foundation has documented that the industry-wide adoption of liquid cooling for AI accelerator racks is among the most significant near-term PUE improvements available, because the high power density of modern accelerator clusters exceeds the practical limits of air cooling.2

Lever 2: Cooling architecture

For AI workloads, cooling is the dominant non-IT energy load. Modern accelerator racks dissipate 30-100+ kilowatts per rack — far beyond the 5-10 kilowatts that air cooling was originally designed for. The cooling architecture choices have become first-order sustainability decisions.

Air cooling with hot-aisle containment is the legacy approach. It works at moderate density and is easy to retrofit but does not scale to modern accelerator densities.

Liquid cooling (direct-to-chip) circulates coolant through cold plates attached directly to the accelerator chips. It scales to the highest densities and dramatically reduces fan energy. It is the dominant new-build choice for AI clusters.

Immersion cooling submerges the entire server in a dielectric coolant. It scales to extreme densities and produces the lowest PUE but requires specialized hardware and operational practices.

Evaporative cooling uses water evaporation to cool the air supplied to the IT equipment. It is highly efficient in dry climates but introduces a water-consumption trade-off that the next article in this module addresses in detail.
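The density-driven choice among the first three architectures can be sketched as a simple selector. The kilowatt thresholds below are illustrative assumptions for the sketch, drawn loosely from the ranges in this article, not industry limits:

```python
def suggest_cooling(rack_kw: float) -> str:
    """Map rack power density to a cooling architecture.
    Thresholds are illustrative assumptions, not engineering limits."""
    if rack_kw <= 10:
        # The density range air cooling was originally designed for.
        return "air cooling with hot-aisle containment"
    if rack_kw <= 100:
        # The dominant new-build choice for AI accelerator racks.
        return "direct-to-chip liquid cooling"
    # Extreme densities: lowest PUE, but specialized hardware and operations.
    return "immersion cooling"
```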

Lever 3: Renewable-energy integration

The most consequential per-kilowatt-hour emission-factor decision is the source of the electricity. A facility powered by 100% renewable electricity (via on-site generation, off-site Power Purchase Agreements, or a fully renewable grid) has a near-zero operational emission factor; a facility powered by a coal-heavy grid has an emission factor in the hundreds of grams of CO2 per kilowatt-hour.

The hyperscalers have pursued aggressive renewable-energy procurement for over a decade, with the largest providers reporting 100% renewable electricity matching on an annual basis. The leading edge of practice has moved to 24/7 hourly matching — matching every hour’s consumption with renewable generation in the same hour and grid region, rather than matching annual totals across grids and seasons. The 24/7 matching standard is meaningfully harder to achieve than annual matching but is the standard that the most sustainability-mature programs are converging on.
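The gap between annual matching and 24/7 hourly matching can be made concrete with a toy example: a facility whose renewable generation and load are misaligned in time can look fully matched on annual totals while covering only part of its consumption hour-by-hour. A minimal sketch (toy numbers, not real facility data):

```python
def annual_match_share(consumption, generation):
    """Annual matching: compare totals only, ignoring when energy was produced."""
    return min(sum(generation) / sum(consumption), 1.0)

def hourly_match_share(consumption, generation):
    """24/7 matching: credit only generation that coincides with consumption
    in the same hour (and, implicitly, the same grid region)."""
    matched = sum(min(c, g) for c, g in zip(consumption, generation))
    return matched / sum(consumption)

# Toy two-hour day: all solar generation lands in hour 1, load is flat.
load = [100, 100]
solar = [200, 0]
annual_match_share(load, solar)   # 1.0 -- looks fully matched annually
hourly_match_share(load, solar)   # 0.5 -- only half matched hour-by-hour
```

The difference between the two figures is exactly why the 24/7 standard is harder: surplus generation in one hour cannot paper over a deficit in another.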

The Greenhouse Gas Protocol Scope 2 Guidance distinguishes between location-based emission factors (the grid average at the consumption point) and market-based emission factors (the contractual procurement of renewable electricity), and requires both to be reported in parallel.3 An enterprise AI program that procures renewable electricity for its data centers can report a much lower market-based Scope 2 figure than its location-based figure, but should report both transparently.
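The parallel reporting requirement can be sketched with a single unit-conversion helper. The emission factors below are illustrative assumptions, not real grid or contract data:

```python
def scope2_tonnes(energy_mwh: float, factor_g_per_kwh: float) -> float:
    """Scope 2 emissions in tonnes CO2e.
    MWh * (g/kWh) yields kg CO2e; divide by 1000 for tonnes."""
    return energy_mwh * factor_g_per_kwh / 1000.0

# Assumed factors: a coal-heavy grid average vs. a renewable-procurement
# contractual factor for the same 150 MWh of grid demand.
location_based = scope2_tonnes(150, 600)   # 90.0 t CO2e (grid average)
market_based = scope2_tonnes(150, 30)      # 4.5 t CO2e (contractual)
```

Both figures describe the same consumption; the Scope 2 Guidance requires reporting them side by side rather than choosing the more flattering one.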

Lever 4: Waste-heat recovery and grid services

The waste heat from AI accelerator clusters — historically vented to the atmosphere — can be recovered and used for district heating, agricultural greenhouses, or industrial processes. Several European hyperscale facilities now feed waste heat into local district-heating networks, displacing natural-gas combustion that would otherwise have been required to heat homes and offices.

A related practice is grid services: using the data center’s controllable load to provide demand-response services to the grid, reducing or shifting consumption during periods of grid stress in exchange for compensation and grid-stability benefits. AI training workloads — which can be paused or shifted in time without breaking service-level objectives — are particularly well-suited to grid services. The IEA has documented the emerging importance of data centers as grid-services providers.4
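The time-shifting property that makes training workloads suitable for grid services can be sketched as a pausable training loop. Everything here is a hypothetical sketch: `grid_stress` stands in for whatever demand-response signal the grid operator or facility exposes, and `checkpoint`/`train_step` are placeholders for the real training harness:

```python
import time

def grid_stress() -> bool:
    """Hypothetical hook: would query a demand-response signal from the
    grid operator or facility management system."""
    return False

def train_with_demand_response(steps: int, checkpoint, train_step) -> int:
    """Run training steps, but checkpoint and wait out grid-stress windows.
    Training tolerates this time-shifting without breaking SLOs."""
    done = 0
    while done < steps:
        if grid_stress():
            checkpoint(done)      # persist state before pausing
            time.sleep(60)        # re-check the signal periodically
            continue
        train_step(done)
        done += 1
    return done
```

An inference service, by contrast, cannot be paused this way, which is why the grid-services opportunity concentrates on training and batch workloads.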

Maturity Indicators

The COMPEL D19 maturity rubric does not specify facility-level practices in the same detail as workload-level practices, but the Level 3 (Defined) indicator that “carbon footprint (CO2e) is calculated using provider-specific emission factors” requires the organization to know the PUE and the renewable-energy mix of every facility hosting its workloads.5 The Level 4 (Advanced) indicator that “AI environmental metrics are included in ESG and sustainability reports” requires the organization to report the facility-level figures alongside the workload-level figures. An organization at Level 4 can attribute year-over-year emission reductions to specific facility-level improvements as well as workload-level improvements.

The Organisation for Economic Co-operation and Development (OECD) AI Principles’ framing of sustainability as a shared lifecycle responsibility supports the practitioner’s expectation that facility operators (whether internal infrastructure teams or external cloud providers) document and report their facility-level practices.6

Practical Application

A foundational practitioner who is engaging with the data-center efficiency question should produce four artifacts.

Artifact 1: the facility-inventory document. A document that lists every facility hosting AI workloads, the facility’s reported PUE, its cooling architecture, its renewable-energy mix and procurement model (location-based versus market-based), and its waste-heat recovery practices.

Artifact 2: the per-facility emission-factor table. A table that, for each facility, records the location-based emission factor, the market-based emission factor, and the year-over-year change. The table is the input to all per-workload emission calculations.
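Artifact 2 can be sketched as a minimal data structure feeding the per-workload calculation. The facility names, PUE values, and factors below are illustrative assumptions, not real data:

```python
# Artifact 2 sketch: per-facility emission-factor table (assumed values).
FACILITIES = {
    "facility-a": {"pue": 1.15, "location_g_kwh": 250, "market_g_kwh": 20},
    "facility-b": {"pue": 1.40, "location_g_kwh": 450, "market_g_kwh": 450},
}

def workload_emissions_kg(facility: str, it_energy_mwh: float,
                          basis: str = "location") -> float:
    """Per-workload emissions: IT energy * facility PUE * per-kWh factor
    for the chosen reporting basis ('location' or 'market').
    MWh * (g/kWh) yields kg CO2e."""
    f = FACILITIES[facility]
    factor = f[f"{basis}_g_kwh"]
    return it_energy_mwh * f["pue"] * factor
```

Keeping both factors in the table is what lets the program report location-based and market-based figures in parallel, as the Scope 2 Guidance requires.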

Artifact 3: the facility-procurement criteria. The criteria that the organization will apply when selecting new facilities (or new cloud regions) for AI workloads — typically including a maximum PUE, a minimum renewable-energy share, evidence of 24/7 matching where feasible, and a transparent waste-heat strategy.
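Artifact 3 can be sketched as a screening function. The threshold values are placeholders for whatever the organization actually sets, and the field names are assumptions for the sketch:

```python
def meets_criteria(facility: dict,
                   max_pue: float = 1.3,
                   min_renewable_share: float = 0.9) -> bool:
    """Screen a candidate facility or cloud region against illustrative
    procurement criteria; thresholds are organizational choices."""
    return (
        facility.get("pue", float("inf")) <= max_pue
        and facility.get("renewable_share", 0.0) >= min_renewable_share
        and facility.get("waste_heat_strategy") is not None
    )

candidate = {"pue": 1.15, "renewable_share": 0.95,
             "waste_heat_strategy": "district heating"}
meets_criteria(candidate)   # True under the default thresholds
```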

Artifact 4: the facility-engagement plan. The plan for how the AI program will work with internal infrastructure teams or external cloud providers to advocate for facility-level improvements that affect the AI workloads — including liquid-cooling retrofits, renewable-energy procurement, and grid-services participation.

The European Union Corporate Sustainability Reporting Directive (CSRD) ESRS E1 climate disclosure requires the organization to report year-over-year emission intensity, which makes the facility-level improvements visible to investors and customers.7 The EU AI Act Article 95 voluntary code of conduct on sustainability is expected to encourage providers to publish facility-level efficiency figures alongside their model-level energy figures.8

Summary

Green data center practice is the facility-level lever that determines the per-kilowatt-hour emission factor that all downstream AI optimization is multiplied by. The four levers are Power Usage Effectiveness, cooling architecture, renewable-energy integration, and waste-heat recovery and grid services. Modern hyperscale practice is converging on PUE 1.1-1.2, liquid cooling for high-density racks, 100% annual renewable matching with the leading edge moving to 24/7 hourly matching, and waste-heat recovery for district heating where geographically viable. The COMPEL D19 maturity rubric at Level 3 requires the facility-level emission factors to be known and used; at Level 4 the facility-level figures are reported alongside the workload-level figures. The next article, Renewable Energy Procurement for AI Infrastructure, develops the procurement decisions that determine the renewable-energy lever in detail.



© FlowRidge.io — COMPEL AI Transformation Methodology. All rights reserved.

Footnotes

  1. International Energy Agency, “Electricity 2024.” https://www.iea.org/reports/electricity-2024 — accessed 2026-04-26.

  2. Green Software Foundation. https://greensoftware.foundation/ — accessed 2026-04-26.

  3. Greenhouse Gas Protocol, “Scope 2 Guidance.” https://ghgprotocol.org/ — accessed 2026-04-26.

  4. International Energy Agency, “Electricity 2024,” section on data centers as flexible grid resources. https://www.iea.org/reports/electricity-2024 — accessed 2026-04-26.

  5. COMPEL Domain D19 maturity rubric, Levels 3 and 4. See shared/data/compelDomains.ts.

  6. Organisation for Economic Co-operation and Development, “OECD AI Principles.” https://oecd.ai/en/ai-principles — accessed 2026-04-26.

  7. Directive (EU) 2022/2464 on Corporate Sustainability Reporting. https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A32022L2464 — accessed 2026-04-26.

  8. Regulation (EU) 2024/1689 (EU AI Act), Article 95. https://artificialintelligenceact.eu/ — accessed 2026-04-26.