This article describes the canonical human-AI collaboration patterns, the conditions under which each pattern is appropriate, the design considerations that make each pattern work, and the operational practices that prevent the most common collaboration failures.
The Pattern Spectrum
Six patterns recur across AI deployments, ordered roughly by increasing AI autonomy.
Pattern 1: AI as Reference
The AI provides background information that the human consults at their discretion. The human does the work; the AI is a smart reference. Examples: a GenAI tool that summarises company policies on demand, or an AI search system the human queries when needed. Lowest stakes; lowest implementation complexity.
Pattern 2: AI as Suggestion
The AI proposes actions, content, or recommendations; the human decides whether to accept, modify, or reject. Examples: GenAI suggesting email drafts, recommendation systems suggesting products, code completion suggesting next lines. The vast majority of current Generative AI deployments operate in this pattern.
Pattern 3: AI as Filter
The AI handles routine cases autonomously; humans handle the cases the AI flags as uncertain or as outside its scope. Examples: spam filtering, fraud detection systems that auto-approve clear cases and route ambiguous ones to humans, medical AI that triages cases for radiologist attention.
Pattern 4: AI as Reviewer
The human acts; the AI reviews and surfaces concerns. Examples: AI checking documents for compliance issues, AI checking code for security vulnerabilities, AI reviewing job descriptions for biased language.
Pattern 5: AI as Actor with Human Approval
The AI proposes specific actions and may execute them after human approval, often through pre-defined approval rules. Examples: agentic AI that proposes a customer refund and executes after human approval, infrastructure AI that proposes a configuration change pending review.
Pattern 6: AI as Autonomous Actor
The AI acts independently within defined boundaries; humans monitor aggregate behaviour and intervene only on exception. Examples: algorithmic trading within risk limits, fully automated content moderation in defined categories, autonomous vehicles in defined operational design domains.
The U.S. Department of Defense Directive 3000.09 on Autonomy in Weapon Systems at https://www.esd.whs.mil/Portals/54/Documents/DD/issuances/dodd/300009p.pdf provides one of the most rigorous public articulations of the autonomy spectrum, with terminology that has influenced civilian AI policy.
Choosing the Pattern
The choice of pattern depends on several factors.
Stakes per Decision
Higher per-decision stakes warrant patterns with more human involvement. A loan denial that materially affects a person’s life calls for Pattern 2 or 3 (AI suggestion or filter with human review of consequential decisions); a spam classification of a marketing email can safely run in Pattern 6 (autonomous).
Reversibility
Reversible decisions tolerate more autonomy than irreversible ones. A reversible recommendation that a customer can ignore tolerates Pattern 6. An irreversible action (destroying data, terminating an account, sending a public communication) warrants higher human involvement.
Volume and Latency
Very high volume or very low latency requirements push toward more autonomous patterns simply because human review cannot scale to the workload. Algorithmic trading and content moderation operate in Pattern 6 partly for this reason.
Regulatory Constraint
Several jurisdictions and use cases mandate specific patterns. The EU AI Act Article 14 at https://artificialintelligenceact.eu/article/14/ requires human oversight for high-risk systems, with characteristics that effectively mandate Pattern 2 or 3 in most cases. The General Data Protection Regulation’s Article 22 right not to be subject to decisions based solely on automated processing pushes many consequential use cases toward human-in-the-loop patterns.
Trust and Maturity
New deployments typically start with more human-involved patterns and migrate toward more autonomous patterns as evidence of reliable operation accumulates. The pattern progression should be deliberate, with explicit criteria for advancement.
Design Considerations Per Pattern
Each pattern has design considerations that determine whether it works.
Pattern 2 (AI as Suggestion) Design
The suggestion must be presented in a way that supports critical evaluation: confidence indication, source citation, alternative options, and an obvious path to ignore or modify. Suggestions presented as defaults with high friction to override slide toward Pattern 5 or 6 in practice.
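As a purely illustrative sketch, the snippet below shows one way a suggestion payload could carry these elements; the Suggestion dataclass, its field names, and the equal-prominence action list are hypothetical assumptions, not the interface of any particular product.

```python
# Illustrative sketch only: a hypothetical payload for an AI suggestion that
# supports critical evaluation. Field names are assumptions, not a standard.
from dataclasses import dataclass, field

@dataclass
class Suggestion:
    content: str                                            # the proposed draft, action, or recommendation
    confidence: float                                        # calibrated probability that the suggestion is correct
    sources: list[str] = field(default_factory=list)        # citations the human can check
    alternatives: list[str] = field(default_factory=list)   # other options, so the default is not the only path

    def render(self) -> dict:
        """Expose accept / modify / dismiss with equal prominence, so overriding carries no extra friction."""
        return {
            "suggestion": self.content,
            "confidence": round(self.confidence, 2),
            "sources": self.sources,
            "alternatives": self.alternatives,
            "actions": ["accept", "modify", "dismiss"],      # dismiss is one click, the same as accept
        }
```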
Pattern 3 (AI as Filter) Design
The flagging logic must be calibrated: too sensitive and humans drown in false positives; too lax and important cases pass through silently. The flagging logic itself requires monitoring and adjustment.
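A minimal sketch of such routing and adjustment appears below, assuming a simple confidence threshold and a notional daily review capacity; the threshold, capacity figure, and adjustment step are illustrative assumptions, and real calibration would also track the rate of missed cases, not just reviewer workload.

```python
# Illustrative sketch: confidence-threshold routing for Pattern 3 (AI as Filter).
# The threshold, capacity figure, and adjustment step are assumptions for illustration.

REVIEW_THRESHOLD = 0.85        # below this confidence, the case is routed to a human
DAILY_REVIEW_CAPACITY = 400    # cases the human team can meaningfully review per day

def route(confidence: float, in_scope: bool) -> str:
    """Return 'auto' for clear in-scope cases, 'human' for uncertain or out-of-scope cases."""
    if not in_scope or confidence < REVIEW_THRESHOLD:
        return "human"
    return "auto"

def adjust_threshold(threshold: float, flagged_yesterday: int) -> float:
    """Nudge the threshold so the flagged volume stays within human capacity.

    Too sensitive and humans drown in false positives; too lax and cases pass silently.
    """
    if flagged_yesterday > DAILY_REVIEW_CAPACITY:
        return max(0.50, threshold - 0.01)   # lower threshold -> fewer cases flagged
    if flagged_yesterday < DAILY_REVIEW_CAPACITY // 2:
        return min(0.99, threshold + 0.01)   # raise threshold -> more cases flagged
    return threshold
```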
Pattern 5 (AI as Actor with Approval) Design
The approval interface must enable meaningful review. Approval interfaces that present 50 actions per screen for batch approval rapidly degrade into rubber stamping. Interfaces that surface single actions with full context, reserving batch approval for clearly safe categories, work better.
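The sketch below illustrates one way such routing could be expressed; the SAFE_BATCH_CATEGORIES set, the amount threshold, and the presentation labels are hypothetical assumptions, not a prescribed interface.

```python
# Illustrative sketch: deciding how a proposed action is presented to the approver.
# SAFE_BATCH_CATEGORIES and the amount threshold are hypothetical assumptions.

SAFE_BATCH_CATEGORIES = {"resend_receipt", "update_mailing_preference"}

def review_mode(action: dict) -> str:
    """Route clearly safe, low-value actions to batch approval; everything else gets single review."""
    if action["category"] in SAFE_BATCH_CATEGORIES and action["amount"] < 10:
        return "batch"                  # batch approval acceptable for this narrow category
    return "single_with_full_context"   # one action per screen, with the context needed to judge it
```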
Pattern 6 (Autonomous Actor) Design
The boundaries must be enforceable, the monitoring must be effective, and the intervention path must be timely. A purportedly autonomous AI without these is actually an AI without oversight, which is unacceptable for any consequential application.
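As an illustration under assumed limits, the sketch below shows a simple pre-execution guard that enforces a per-action boundary and an aggregate boundary and pauses autonomous execution when either is breached; the figures, class name, and alerting hook are hypothetical.

```python
# Illustrative sketch: a pre-execution boundary guard for Pattern 6 (Autonomous Actor).
# The limits and the alerting behaviour are assumptions for illustration only.

class AutonomyGuard:
    """Enforces per-action and aggregate boundaries; pauses the system when either is breached."""

    def __init__(self, max_action_value: float = 1_000, max_actions_per_hour: int = 500):
        self.max_action_value = max_action_value        # hard per-action boundary (assumed figure)
        self.max_actions_per_hour = max_actions_per_hour
        self.actions_this_hour = 0
        self.paused = False

    def pause(self, reason: str) -> None:
        """Intervention path: halt autonomous execution and alert the accountable owner."""
        self.paused = True
        print(f"AUTONOMY PAUSED: {reason}")   # in practice: paging plus an incident record

    def allow(self, action_value: float) -> bool:
        """Check boundaries before the AI acts; refuse and escalate when outside them."""
        if self.paused:
            return False
        if action_value > self.max_action_value:
            self.pause("action exceeds per-action limit")
            return False
        if self.actions_this_hour >= self.max_actions_per_hour:
            self.pause("hourly volume exceeds aggregate limit")
            return False
        self.actions_this_hour += 1
        return True   # boundary checks passed; the autonomous action may proceed
```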
The Automation Bias Problem
A persistent failure mode across human-AI collaboration is automation bias: humans defer to AI recommendations even when their own judgement should override. The phenomenon is well-documented in aviation, healthcare, and increasingly across AI deployments. The U.S. Federal Aviation Administration human factors literature at https://www.faa.gov/regulations_policies/handbooks_manuals/aviation/ describes the dynamics in safety-critical domains.
Several design and operational practices reduce automation bias.
Confidence calibration. AI systems that are well-calibrated (their confidence matches their actual accuracy) help humans appropriately trust or distrust outputs. Poorly calibrated systems produce overconfident wrong outputs that humans accept.
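One common way to check calibration is to compare stated confidence with observed accuracy over logged decisions, for example with a binned expected calibration error; the sketch below assumes a simple log of (confidence, was_correct) pairs and is illustrative only.

```python
# Illustrative sketch: checking whether an AI system's confidence is calibrated,
# using a binned expected calibration error (ECE) over logged decisions.
# The log format (confidence, was_correct) is an assumption for illustration.

def expected_calibration_error(log: list[tuple[float, bool]], bins: int = 10) -> float:
    """Average gap between stated confidence and observed accuracy, weighted by bin size."""
    total = len(log)
    if total == 0:
        return 0.0
    ece = 0.0
    for b in range(bins):
        lo, hi = b / bins, (b + 1) / bins
        bucket = [(c, ok) for c, ok in log if lo <= c < hi or (b == bins - 1 and c == 1.0)]
        if not bucket:
            continue
        avg_conf = sum(c for c, _ in bucket) / len(bucket)
        accuracy = sum(ok for _, ok in bucket) / len(bucket)
        ece += (len(bucket) / total) * abs(avg_conf - accuracy)
    return ece

# A well-calibrated system has a low ECE; a high ECE signals overconfidence (or
# underconfidence) that humans will struggle to compensate for.
decisions = [(0.95, True), (0.92, True), (0.90, False), (0.60, True), (0.55, False)]
print(f"ECE = {expected_calibration_error(decisions):.3f}")
```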
Disagreement surfacing. When the AI’s recommendation conflicts with a likely human judgement (based on prior decisions, business rules, or anomalous inputs), the system should surface the disagreement explicitly.
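A minimal sketch of one such check appears below; the refund-versus-lifetime-spend rule and the field names are hypothetical, standing in for whatever business rules or prior-decision baselines a deployment actually holds.

```python
# Illustrative sketch: surfacing a disagreement between the AI recommendation and
# the judgement a human would likely make. The rule and field names are hypothetical.

def surface_disagreement(case: dict, ai_recommendation: str) -> str | None:
    """Return an explicit disagreement notice when the AI conflicts with an expected judgement."""
    # Assumed business rule: refunds above the customer's lifetime spend are rarely approved by humans.
    if ai_recommendation == "approve_refund" and case["refund_amount"] > case["lifetime_spend"]:
        return ("AI recommends approval, but the refund exceeds the customer's lifetime spend; "
                "human reviewers have historically rejected similar cases.")
    return None
```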
Override-easy design. Overriding the AI should be as easy as accepting it; it should not require additional clicks, justifications, or workflow steps.
Override audit and feedback. Override patterns should be analysed: when humans override, why, and were they right? The feedback informs both AI improvement and human training.
Periodic AI-free practice. Periodic exercises in which humans complete the work without AI assistance keep the human skill alive and reveal where AI has masked skill atrophy.
Accountability Allocation
Different collaboration patterns produce different accountability allocations.
In Patterns 1 and 2, accountability is clearly with the human; the AI is a tool.
In Patterns 3 and 4, accountability is shared; the AI’s contribution to the outcome must be evaluable.
In Patterns 5 and 6, accountability becomes more complex. Even when the AI acts, the humans who designed, deployed, and oversee the AI bear accountability. The deploying organisation typically bears overall accountability regardless of pattern.
The European Commission’s proposed AI Liability Directive at https://commission.europa.eu/business-economy-euro/doing-business-eu/contract-rules/digital-contracts/liability-rules-artificial-intelligence_en attempts to codify accountability frameworks for AI-influenced decisions.
Operational Practices
Pattern Documentation
Each AI deployment documents its collaboration pattern explicitly: which pattern, what triggers movement between patterns (for example, low-confidence cases moving from Pattern 6 to Pattern 3), and what the boundaries are.
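A hypothetical pattern record might look like the sketch below; every field name and value is illustrative, shown only to make the documentation requirement concrete.

```python
# Illustrative sketch: a collaboration-pattern record for one deployment.
# All names and values are hypothetical assumptions, not a prescribed schema.

PATTERN_RECORD = {
    "deployment": "customer-refund-agent",
    "pattern": 5,                       # AI as Actor with Human Approval
    "boundaries": {
        "max_refund_without_escalation": 200,
        "allowed_action_types": ["refund", "credit_note"],
    },
    "pattern_transitions": {
        # low-confidence cases drop to Pattern 3 style routing: a human handles them
        "confidence_below": {"threshold": 0.80, "route_to": "human_queue"},
    },
    "review_cadence": "annual",
    "approved_by": "ai-governance-board",
}
```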
Periodic Pattern Review
The chosen pattern is reviewed at least annually. Patterns that worked at deployment may no longer be appropriate as data, regulation, or organisational maturity changes.
Override Analytics
Where humans override the AI, the patterns are analysed. Overrides often cluster by user, by case type, or by time of day in ways that reveal AI improvement opportunities or human training needs.
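As an illustrative sketch, assuming an override log with user, case type, and hour fields, the snippet below groups overrides to surface clusters worth investigating.

```python
# Illustrative sketch: grouping human overrides to find clusters worth investigating.
# The override log schema (user, case_type, hour) is an assumption for illustration.

from collections import Counter

def override_clusters(overrides: list[dict], min_count: int = 20) -> dict[str, list]:
    """Return the users, case types, and hours that account for unusually many overrides."""
    by_user = Counter(o["user"] for o in overrides)
    by_case_type = Counter(o["case_type"] for o in overrides)
    by_hour = Counter(o["hour"] for o in overrides)
    return {
        "users": [u for u, n in by_user.most_common() if n >= min_count],
        "case_types": [t for t, n in by_case_type.most_common() if n >= min_count],
        "hours": [h for h, n in by_hour.most_common() if n >= min_count],
    }

# Clusters by case type often point at AI improvement opportunities; clusters by user
# or by time of day often point at training needs or fatigue effects.
```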
Onboarding for the Specific Pattern
Users of an AI system are trained on the specific pattern of their deployment, not generic AI literacy. The human’s role in Pattern 2 is materially different from their role in Pattern 5.
Pattern Migration Discipline
Movement from one pattern to another (typically toward more AI autonomy) is treated as a significant change requiring re-evaluation, re-training, and re-approval.
Common Failure Modes
The first is pattern drift — the deployment is documented as Pattern 2 but operates as Pattern 5 because users habitually accept all AI suggestions. Counter with override rate monitoring.
The second is false autonomy — the deployment is documented as Pattern 6 but cannot be effectively monitored, so the autonomy is unsupervised. Counter with monitoring infrastructure verification.
The third is unclear authority in shared patterns — Patterns 3 and 4 with ambiguous human authority that produces inconsistent decisions across humans handling similar cases. Counter with explicit decision authority and calibration.
The fourth is over-investment in oversight that doesn’t catch the error type that occurs — heavy human review of cases the AI usually handles well, while the catastrophic AI failures slip through unnoticed. Counter with risk-tier-aware oversight design.
Looking Forward
Module 1.30 closes here. The next M2 modules will turn to advanced topics in agent governance, generative AI patterns, and cross-cutting capabilities. The collaboration patterns of this article underpin every subsequent deployment decision; choosing them deliberately is the foundation of credible AI operation.
© FlowRidge.io — COMPEL AI Transformation Methodology. All rights reserved.