This article translates that framing into the enterprise practitioner’s decision discipline. It establishes that the performance-versus-energy trade-off is an explicit decision rather than a default, identifies the ethical considerations that the decision should account for, and provides a decision framework that the practitioner can apply consistently across use cases.
The trade-off is real
Empirical evidence across modeling tasks shows that performance improvement is sub-linear in compute spend. Doubling the compute spend on a use case typically produces a single-digit-percentage accuracy improvement. The implication is that incremental performance is increasingly expensive — both financially and environmentally. The Schwartz et al. analysis quantified this trend across multiple model families and demonstrated that the highest-accuracy points on most leaderboards were at compute costs orders of magnitude higher than the points within a few percentage points of the leaders.1
The Hugging Face AI Energy Score leaderboard makes the trade-off concretely visible at the per-model level: for many enterprise tasks, a 7-billion-parameter model achieves accuracy within five percentage points of a 70-billion-parameter model at one-tenth the inference energy.2 That five-point accuracy delta is real and matters for some use cases; for many others, the delta falls below the noise floor of the use case’s actual decision criteria, and choosing the larger model is therefore an unjustified energy expenditure.
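The noise-floor reasoning can be made mechanical. The sketch below is a hypothetical proportionality check, not part of the leaderboard or the COMPEL methodology: the function names, thresholds, and example figures are illustrative assumptions that a practitioner would replace with the use case's own decision criteria.

```python
def larger_model_justified(
    accuracy_delta: float,        # larger-model accuracy minus smaller-model accuracy (0-1 scale)
    energy_ratio: float,          # larger-model inference energy / smaller-model inference energy
    noise_floor: float,           # gain below which the use case's decisions cannot tell the difference
    max_ratio_per_point: float,   # illustrative ceiling on energy ratio per point of useful gain
) -> bool:
    """Return True only when the accuracy gain clears the use case's
    noise floor AND the energy cost per point of gain is within the
    (organization-chosen) proportionality ceiling."""
    if accuracy_delta < noise_floor:
        return False  # gain is below the decision criteria's noise floor
    return (energy_ratio / accuracy_delta) <= max_ratio_per_point

# Figures echoing the leaderboard pattern: a 0.05 accuracy gain at ten
# times the inference energy, under two different noise floors.
print(larger_model_justified(0.05, 10.0, noise_floor=0.06,
                             max_ratio_per_point=250.0))  # gain below noise floor
print(larger_model_justified(0.05, 10.0, noise_floor=0.02,
                             max_ratio_per_point=250.0))  # gain clears both tests
```

The point of the sketch is that both parameters are use-case decisions, not model properties: the same 5-point delta justifies the larger model under one noise floor and not under another.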
The ethical considerations
The trade-off is ethical because the marginal energy cost is externalized while the marginal performance benefit is internalized. The practitioner who defaults to the highest-performing available model — without documenting why the marginal performance is needed — is making an ethical choice that the organization has not deliberately made.
Several specific ethical considerations follow.
The proportionality consideration: the energy expenditure should be proportionate to the use case’s actual decision criteria. A use case that determines a high-stakes medical or legal outcome may justify a larger model and the associated energy cost; a use case that produces a marketing-content draft may not. The proportionality assessment is itself an ethical decision.
The opportunity-cost consideration: the energy that one use case consumes is energy that is unavailable for other uses, including other AI use cases that may produce more societal value per kilowatt-hour. The proliferation of low-value, high-energy AI deployments (the “AI feature added because we could” pattern) is a collective ethical problem even if no individual deployment is unjustified in isolation.
The intergenerational consideration: emissions today produce climate impact for decades; energy expenditure today on AI is energy that future generations will not have available for their priorities. The intergenerational frame is the foundation of the broader sustainability ethic and applies to AI as much as to any other resource-consuming activity.
The distributional consideration: the energy and water cost of AI is borne disproportionately by communities near data centers — typically not the same communities that consume the AI services. The distributional dimension of the externality is itself an ethical consideration that the proportionality assessment should account for.
The Organisation for Economic Co-operation and Development (OECD) AI Principles include sustainability as a value-based principle that AI actors should respect across the AI lifecycle, providing the high-level ethical framing within which the practitioner’s decision discipline sits.3
The decision discipline
The practitioner’s decision discipline should make the trade-off explicit at three points.
Point 1: at use-case selection. When a new use case is approved, the approval should include an explicit assessment of the use case’s value justification — what decisions the use case will support, what value those decisions will create, and what energy expenditure is therefore proportionate. The assessment is the input to the model-selection discipline that Article 3 of this module developed.
Point 2: at model selection. When a model is selected for a use case, the selection should record the smallest passing model (per the discipline in Article 3), the selected model, the accuracy delta between them, and — if the selection is not the smallest passing model — the explicit justification for the larger model. The justification is auditable and is reviewed periodically.
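The Point 2 record lends itself to a simple data structure that enforces the discipline at write time. The sketch below is an illustrative schema, not a COMPEL-mandated format; the field names are assumptions.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ModelSelectionRecord:
    """One auditable model-selection entry, per the Point 2 discipline.
    Field names are illustrative, not a mandated schema."""
    use_case: str
    smallest_passing_model: str
    selected_model: str
    accuracy_delta_pct: float            # selected-model accuracy minus smallest-passing accuracy
    justification: Optional[str] = None  # required when the selection exceeds the smallest passing model

    def __post_init__(self) -> None:
        # Enforce the discipline at record-creation time: a selection larger
        # than the smallest passing model cannot be saved without a justification.
        if self.selected_model != self.smallest_passing_model and not self.justification:
            raise ValueError(
                "A selection larger than the smallest passing model "
                "requires an explicit, auditable justification."
            )
```

Making the justification a constructor-level requirement (rather than a review-time check) means the periodic audit reviews the quality of justifications, not their existence.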
Point 3: at operational review. Periodically (typically annually), every production AI system is reviewed against the same proportionality criterion. Systems whose actual usage and value have not justified the model and the energy expenditure are candidates for downsizing or retirement.
Maturity Indicators
The COMPEL D19 maturity rubric does not name the proportionality discipline explicitly, but the rubric’s progression embeds it. At Level 3 (Defined), “sustainability criteria are included in model selection and deployment checklists” — the proportionality assessment is the substantive content of those criteria.4 At Level 4 (Advanced), “model efficiency optimization is standard practice” — the optimization is the response to the proportionality assessment’s identification of efficiency gaps. At Level 5 (Transformational), the organization’s external disclosure includes the ethical framing alongside the technical figures.
The McKinsey State of AI surveys have documented that the most sustainability-mature organizations are increasingly framing their AI program decisions in proportionality and ethical terms in their public communications, rather than only in technical-optimization terms.5
Practical Application
A foundational practitioner who is institutionalizing the proportionality discipline should produce four artifacts.
Artifact 1: the use-case-justification template. A template that, for every new use case, captures the decisions the use case will support, the value those decisions will create, the energy expenditure proportionate to the value, and the explicit acknowledgement that the use case has been assessed under the proportionality criterion.
Artifact 2: the model-selection-justification template. A template that, for every model selection, captures the smallest passing model, the selected model, the accuracy delta, and the explicit justification for any selection that exceeds the smallest passing model.
Artifact 3: the annual operational-review process. A process that, annually, reviews every production AI system against the proportionality criterion and produces a prioritized backlog of downsizing, optimization, or retirement actions.
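The Artifact 3 review pass can be sketched as a ranking of production systems by energy spent per unit of delivered value. The records, value scores, and ceiling below are hypothetical assumptions for illustration; an organization would substitute its own metered energy figures and value metric.

```python
# Illustrative annual-review pass: flag systems whose energy expenditure
# per unit of delivered value exceeds a proportionality ceiling, worst first.
systems = [
    {"name": "contract-triage",   "annual_kwh": 12000, "value_score": 90},
    {"name": "marketing-drafts",  "annual_kwh": 48000, "value_score": 20},
    {"name": "support-summaries", "annual_kwh": 9000,  "value_score": 60},
]

def review_backlog(systems: list, kwh_per_value_ceiling: float = 300.0) -> list:
    """Return systems exceeding the proportionality ceiling, sorted so the
    worst offenders head the downsizing/retirement backlog."""
    flagged = [
        {**s, "kwh_per_value": s["annual_kwh"] / s["value_score"]}
        for s in systems
        if s["annual_kwh"] / s["value_score"] > kwh_per_value_ceiling
    ]
    return sorted(flagged, key=lambda s: s["kwh_per_value"], reverse=True)

for entry in review_backlog(systems):
    print(entry["name"], round(entry["kwh_per_value"], 1))
```

The output is the prioritized backlog the artifact calls for; the ceiling itself is the proportionality criterion made numeric, and setting it is the ethical decision the article describes.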
Artifact 4: the ethical-framing narrative. A narrative — typically in the AI sustainability disclosure and in the customer-facing communications — that explains the organization’s proportionality discipline, its decision criteria, and its trajectory toward higher proportionality over time.
The European Union AI Act Article 95 voluntary code of conduct on sustainability is expected to encourage providers to articulate the proportionality framing in their public disclosures.6 The Stanford Foundation Model Transparency Index (FMTI) compute-layer scoring is increasingly recognizing the disclosure of model-selection rationale as a transparency indicator.7 The Green Software Foundation principles support the proportionality discipline as a foundation of green-software practice.8 The Greenhouse Gas Protocol provides the technical accounting that makes the proportionality assessment quantifiable.9 The International Energy Agency Electricity 2024 report’s projections of the macro consequences of AI energy growth provide the scale context within which the practitioner’s individual proportionality decisions accumulate.10
Summary
Every AI system-design decision trades performance against energy consumption, and the trade-off is increasingly ethical rather than only technical because the marginal energy cost is externalized while the marginal performance benefit is internalized. The Schwartz et al. Green AI framing identified the structural problem; the enterprise practitioner translates it into a decision discipline that makes the trade-off explicit at use-case selection, at model selection, and at operational review. The four artifacts — use-case-justification template, model-selection-justification template, annual operational-review process, ethical-framing narrative — institutionalize the discipline. The COMPEL D19 maturity rubric embeds the proportionality discipline at Levels 3, 4, and 5. The next article, M1.9, Sustainable Procurement: Vendor Energy Transparency and Standards, develops the procurement-side practices that extend the proportionality discipline to the AI supply chain.
© FlowRidge.io — COMPEL AI Transformation Methodology. All rights reserved.
Footnotes
1. Schwartz, R. et al. “Green AI,” empirical analysis section. https://cacm.acm.org/research/green-ai/ — accessed 2026-04-26.
2. Hugging Face, “AI Energy Score Leaderboard.” https://huggingface.co/spaces/AIEnergyScore/Leaderboard — accessed 2026-04-26.
3. Organisation for Economic Co-operation and Development, “OECD AI Principles.” https://oecd.ai/en/ai-principles — accessed 2026-04-26.
4. COMPEL Domain D19 maturity rubric, Levels 3 through 5. See shared/data/compelDomains.ts.
5. McKinsey & Company, “The state of AI.” https://www.mckinsey.com/capabilities/quantumblack/our-insights/the-state-of-ai — accessed 2026-04-26.
6. Regulation (EU) 2024/1689 (EU AI Act), Article 95. https://artificialintelligenceact.eu/ — accessed 2026-04-26.
7. Stanford CRFM, “Foundation Model Transparency Index.” https://crfm.stanford.edu/fmti/ — accessed 2026-04-26.
8. Green Software Foundation. https://greensoftware.foundation/ — accessed 2026-04-26.
9. Greenhouse Gas Protocol. https://ghgprotocol.org/ — accessed 2026-04-26.
10. International Energy Agency, “Electricity 2024.” https://www.iea.org/reports/electricity-2024 — accessed 2026-04-26.