AITF M1.11-Art08 v1.0 Reviewed 2026-04-06 Open Access

Stakeholder Engagement in AI Ethics: Affected Communities and Power Dynamics


Article 8 of 15

Who Counts as a Stakeholder

The traditional product development stakeholder set — buyers, end users, internal teams — is incomplete for AI ethics. A more complete map identifies five categories.

Direct users. People who interact with the AI system as part of their job or daily activity. A loan officer using a credit scoring tool, a clinician using a diagnostic decision support system, a citizen using a chatbot to access government services.

Subjects of decisions. People about whom the AI system makes consequential decisions, who may never interact with it directly. The loan applicant, the patient, the citizen whose application is auto-routed, the defendant assessed by a recidivism risk score.

Affected non-targets. People affected by the system’s operation even though they are neither operators nor explicit subjects. Communities surveilled by facial recognition deployed in public spaces, neighborhoods affected by predictive policing, employees affected by a hiring system’s screening choices that shape who their future colleagues will be.

Indirect economic stakeholders. Workers whose labor is affected by AI deployment, suppliers and partners whose business models depend on the affected workflow, and competitors whose market position is reshaped.

Civil society and the broader public. Citizens, regulators, and advocacy organizations with general standing to weigh the social effects of widely-deployed AI.

Engagement design begins with explicitly identifying who falls into each category for a given system. Many AI ethics failures trace to systems designed with attention to the first category and inattention to the others.
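The category map above can be made operational as a simple data structure. The sketch below is illustrative, not part of the COMPEL methodology; the stakeholder group names are hypothetical examples for a credit-scoring system.

```python
from enum import Enum, auto

class StakeholderCategory(Enum):
    """The five stakeholder categories described above."""
    DIRECT_USERS = auto()
    DECISION_SUBJECTS = auto()
    AFFECTED_NON_TARGETS = auto()
    INDIRECT_ECONOMIC = auto()
    CIVIL_SOCIETY = auto()

# Hypothetical map for a credit-scoring system; the group names are illustrative.
stakeholder_map = {
    StakeholderCategory.DIRECT_USERS: ["loan officers"],
    StakeholderCategory.DECISION_SUBJECTS: ["loan applicants"],
    StakeholderCategory.AFFECTED_NON_TARGETS: ["households of declined applicants"],
    StakeholderCategory.INDIRECT_ECONOMIC: ["mortgage brokers"],
    StakeholderCategory.CIVIL_SOCIETY: ["consumer-protection advocates"],
}

def unmapped_categories(mapping):
    """Return categories with no identified stakeholders: the engagement blind spots."""
    return [c for c in StakeholderCategory if not mapping.get(c)]
```

Forcing every category to be filled in (or explicitly marked empty with a rationale) surfaces exactly the inattention the paragraph above describes: an empty list for a category is a prompt to look harder, not evidence that no one is affected.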

The Levels of Engagement

Sherry Arnstein's 1969 ladder of citizen participation remains the foundational reference for understanding engagement quality. Arnstein distinguished eight rungs, ranging from manipulation at the bottom through informing, consultation, and partnership to citizen control at the top. Adapted to AI ethics, a five-level model is operationally useful.

Level 1 — Inform. The organization tells affected stakeholders what it has decided. This is communication, not engagement.

Level 2 — Consult. The organization solicits feedback after major decisions are made, retains full control over what to do with the feedback, and may report back what was changed.

Level 3 — Involve. The organization brings stakeholders into the decision process at meaningful points, considers their input alongside other inputs, and explains how their input shaped the outcome (whether or not their preferred outcome prevailed).

Level 4 — Collaborate. The organization shares decision authority with stakeholders for specific decisions, with structures (joint committees, formal voting) that make the shared authority operational.

Level 5 — Empower. The organization delegates decisions to stakeholders, retaining only veto authority for safety or legal reasons.

Most ethics programs operate at Level 1 or 2 while calling it engagement. Real engagement begins at Level 3. The choice of level should be deliberate and matched to the stakes: Level 1 may be acceptable for low-stakes systems, but high-stakes systems affecting marginalized communities typically require Level 3 or higher.
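Matching engagement level to stakes can be expressed as an explicit policy rule rather than an ad-hoc choice. The sketch below is an assumption-laden illustration: the article prescribes only the top and bottom tiers, so the middle tier here is a guess, not COMPEL guidance.

```python
from enum import IntEnum

class EngagementLevel(IntEnum):
    """The five engagement levels described above."""
    INFORM = 1
    CONSULT = 2
    INVOLVE = 3
    COLLABORATE = 4
    EMPOWER = 5

def minimum_level(high_stakes: bool, affects_marginalized: bool) -> EngagementLevel:
    """Illustrative policy rule; only the first and last branches come from the text."""
    if high_stakes and affects_marginalized:
        return EngagementLevel.INVOLVE   # "Level 3 or higher" per the article
    if high_stakes:
        return EngagementLevel.CONSULT   # assumed floor for other high-stakes systems
    return EngagementLevel.INFORM        # acceptable for low-stakes systems
```

Encoding the rule makes the chosen level auditable: a reviewer can see which inputs drove the choice and challenge the thresholds directly.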

The OECD AI Principles call for “stakeholder engagement throughout the AI system lifecycle” as part of the inclusive growth principle; see https://oecd.ai/en/ai-principles. The UNESCO Recommendation on the Ethics of AI similarly emphasizes inclusive participation; see https://www.unesco.org/en/artificial-intelligence/recommendation-ethics. Both documents implicitly require engagement above Level 2.

Designing Engagement That Works

Five design principles separate meaningful engagement from box-ticking.

Engage early. Engagement after the use case has been chosen, the model has been built, and the launch date has been set is essentially Level 1. Effective engagement happens at problem framing — what is this system for? — and at scoping — what alternatives have we considered? Late engagement can refine but not redirect.

Resource the engaged. Asking community representatives to participate in detailed technical reviews on a volunteer basis is asking them to subsidize the developer’s risk management. Compensation, technical translation support, and reasonable scheduling are minimum conditions. The Montreal Declaration for Responsible AI was developed through a multi-year participatory process explicitly funded to support participation; see https://montrealdeclaration-responsibleai.com/.

Create durable representation. A one-time community workshop produces episodic input that may or may not survive into the actual decisions. Standing community advisory bodies — with terms, formal reporting paths, and budget — produce engagement that compounds over time and develops the institutional knowledge necessary to engage effectively with technical detail.

Document influence. The single most diagnostic question about an engagement process is: which decisions were changed because of stakeholder input? An engagement process whose answer is “none” is performative. A process that can identify specific decisions, specific inputs, and specific changes has demonstrated real influence.
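The diagnostic question above lends itself to a minimal record-keeping structure. This is a sketch, not a prescribed COMPEL artifact; the field names and example strings are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class InfluenceRecord:
    """One documented link from stakeholder input to a changed decision."""
    decision: str           # e.g. "threshold for auto-decline" (hypothetical)
    stakeholder_input: str  # e.g. "advisory board flagged disparate error rates"
    change_made: str        # e.g. "human review added before any auto-decline"

def is_performative(influence_log: list[InfluenceRecord]) -> bool:
    """An engagement process that cannot name a single changed decision is performative."""
    return len(influence_log) == 0
```

The value of the log is less in the code than in the discipline: each record must name a specific decision, a specific input, and a specific change, which is precisely the evidence the paragraph above demands.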

Close the loop. Stakeholders who provide input and never hear what happened to it disengage, often permanently. Reporting back — what was heard, what was decided, why — converts a one-time interaction into a sustainable relationship.

Power Dynamics

Engagement design that ignores power dynamics will systematically privilege already-powerful voices. Three asymmetries deserve specific attention.

Information asymmetry. Developers know what the system does and what alternatives exist; affected stakeholders typically do not. Effective engagement requires making the technical context accessible — through plain-language briefings, examples, and dedicated translation work — without oversimplifying to the point that meaningful input becomes impossible.

Resource asymmetry. Corporate developers can deploy paid staff, dedicated time, and professional facilitation; community representatives often participate in their personal time on top of jobs and family responsibilities. Engagement that fails to compensate this asymmetry will produce input only from those who can afford to participate, which is rarely a representative cross-section of the affected population.

Outcome asymmetry. Developers face limited consequences from a deployment they later regret (they can withdraw the product); affected communities face consequences they cannot reverse (a wrongful arrest, a denied loan, a missed diagnosis). Asking stakeholders to weigh in on a decision whose downside they will bear and whose upside accrues elsewhere is asking them to subsidize the developer’s risk-taking.

Mitigations include independent facilitation by parties with no commercial stake, financial compensation that respects the value of stakeholders’ time, and decision rules that give weight to the input of those who will bear the most consequence.

Specific Practices

Several concrete practices have track records in AI ethics engagement.

Community advisory boards. Standing bodies of representatives from affected communities that meet regularly, review proposed and deployed systems, and provide input that the ethics review board (Article 7) is required to consider.

Participatory design workshops. Structured sessions in which affected stakeholders co-design the system’s behavior — what it should consider, what it should refuse to do, what user controls it should expose.

Citizen panels and citizen juries. Time-bounded deliberative bodies of randomly selected community members, briefed on the technical context, and asked to make a recommendation on a specific decision. The format has been used effectively for public-sector AI deployments in several jurisdictions.

Public consultation processes. Open comment periods on proposed AI systems, with structured review of submissions and published responses. Public consultation is most effective when paired with one of the smaller-format mechanisms above to ensure that less-resourced voices have a venue.

Algorithmic impact assessments with stakeholder review. Impact assessments, required in various forms by the EU AI Act and proposed in the US Algorithmic Accountability Act (see https://www.congress.gov/bill/118th-congress/house-bill/5628), include stakeholder input as a required component, with documented review of how that input shaped the assessment's findings.

The Partnership on AI provides convening services and templates for several of these formats; see https://partnershiponai.org/. The World Economic Forum has published case studies on participatory AI governance; see https://www.weforum.org/topics/artificial-intelligence-and-machine-learning.

Maturity Indicators

  • Level 1: No structured stakeholder engagement; ethics review is internal-only.
  • Level 2: Ad-hoc engagement on selected projects, typically Level 1–2 on the engagement ladder.
  • Level 3: Standing engagement structures (community advisory bodies, citizen panels) for high-stakes systems; engagement begins at problem framing.
  • Level 4: Engagement is funded, documented, and demonstrably influences decisions; specific decision changes are attributable to specific stakeholder input.
  • Level 5: Engagement quality is reported externally; the organization shares engagement methodologies with peers; affected communities are explicit about their relationship to the organization’s AI program.

Practical Application

Three first actions. First, for the highest-stakes deployed AI system, map its stakeholders into the five categories listed at the start of this article and assess current engagement with each. The gaps in that assessment are the engagement program's first targets. Second, fund a single community advisory body with an explicit charter, compensation, and a connection to the ethics review board. Pilot it with one system and learn before scaling. Third, instrument the use-case intake form (Article 14) so that every AI proposal must identify affected stakeholders, the proposed engagement level for each, and the rationale for that level.
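The third action, gating intake on stakeholder analysis, can be sketched as a validation check. The field names and form shape below are assumptions for illustration, not the Article 14 intake form itself.

```python
# Hypothetical intake-form gate; category keys and field names are assumptions.
CATEGORIES = [
    "direct_users", "decision_subjects", "affected_non_targets",
    "indirect_economic", "civil_society",
]

def validate_intake(proposal: dict) -> list[str]:
    """Return the reasons a proposal fails the stakeholder-analysis gate."""
    problems = []
    engagement = proposal.get("engagement", {})
    for category in CATEGORIES:
        entry = engagement.get(category)
        if entry is None:
            problems.append(f"no stakeholder analysis for {category}")
        elif not entry.get("level") or not entry.get("rationale"):
            problems.append(f"{category}: engagement level and rationale are required")
    return problems
```

A proposal passes only when every category carries both a proposed engagement level and a rationale, which is exactly the documentation burden the third action places on intake.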

Looking Ahead

Article 9 turns to the highest-stakes domains where AI ethics is most contested — hiring, lending, healthcare, and justice — and examines the recurring patterns and specific safeguards that distinguish responsible from irresponsible deployment in each.


© FlowRidge.io — COMPEL AI Transformation Methodology. All rights reserved.