AITF M2.21-Art04 v1.0 Reviewed 2026-04-06 Open Access

AI Agents: Beyond Single-Turn Interactions


8 min read · Article 4 of 5

This article describes the architectural elements of AI agents, the distinctive risks they introduce, the governance patterns that have emerged for credible agent deployment, and the relationship between agentic AI and the human-AI collaboration patterns of Module 1.30.

Architectural Elements

A typical agent architecture combines several components.

Reasoning Engine

A foundation model that serves as the agent’s reasoning core. The model interprets the goal, plans steps, decides which tools to invoke, interprets tool results, and decides when the goal is accomplished or when escalation is needed.

Tool Set

The actions the agent can take. Tools may include API calls, database queries, code execution sandboxes, web search, file operations, or interactions with downstream systems. The tool set defines the agent’s action surface: the agent cannot perform any action for which it has no tool.

Memory

State that persists across interactions. Short-term memory (within a conversation), long-term memory (across conversations), and shared memory (across agent instances) each serve different purposes and introduce different governance considerations.
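
A minimal sketch of the three memory tiers, assuming a simple in-process representation (the names and structure here are illustrative; production systems typically back long-term and shared memory with external stores):

```python
from dataclasses import dataclass, field

@dataclass
class AgentMemory:
    """Illustrative memory tiers; the structure is an assumption, not a standard."""
    short_term: list = field(default_factory=list)   # turns within the current conversation
    long_term: dict = field(default_factory=dict)    # facts persisted across conversations
    shared: dict = field(default_factory=dict)       # state visible to other agent instances

    def remember_turn(self, turn: str) -> None:
        self.short_term.append(turn)

    def end_conversation(self, key: str, summary: str) -> None:
        # Promote a summary to long-term memory, then clear the working buffer.
        self.long_term[key] = summary
        self.short_term.clear()
```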

Planning Mechanism

The strategy the agent uses to decompose goals into tool calls. Patterns include reactive (decide next action after seeing previous result), planning-first (decompose the full plan upfront), and hybrid approaches.

Orchestration

The infrastructure that runs the agent loop: invoking the model, executing tools, managing state, handling errors, and stopping when appropriate. Frameworks include LangGraph at https://langchain-ai.github.io/langgraph/, AutoGen, CrewAI, and LlamaIndex Agents.
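
To make the loop concrete, here is a minimal sketch of a reactive agent loop, independent of any of the frameworks above; `call_model` and the decision format are hypothetical stand-ins for a real model client:

```python
import json

def run_agent(goal: str, tools: dict, call_model, max_steps: int = 10) -> str:
    """Reactive loop: the model sees the transcript so far and picks the next action."""
    transcript = [{"role": "user", "content": goal}]
    for _ in range(max_steps):
        # call_model is assumed to return a dict such as
        # {"action": "tool_name", "args": {...}} or {"action": "finish", "answer": "..."}.
        decision = call_model(transcript)
        if decision["action"] == "finish":
            return decision["answer"]
        tool = tools.get(decision["action"])
        if tool is None:
            transcript.append({"role": "system", "content": "unknown tool requested"})
            continue
        result = tool(**decision.get("args", {}))
        transcript.append({"role": "tool", "content": json.dumps(result)})
    raise RuntimeError("step budget exhausted before the goal was reached")
```

The planning-first variant would ask the model for a full plan before entering the loop; a hybrid variant replans when a tool result invalidates the current plan.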

Observability

Logging of the agent’s reasoning, tool invocations, tool results, and decisions. Observability is the governance precondition for agentic deployment.
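
One common realisation is a structured log record per step, written as JSON lines so a run can be replayed and audited; the field names here are assumptions:

```python
import json
import time

def log_step(logfile, step: int, reasoning: str, tool: str, args: dict, result: str) -> None:
    """Append one agent step as a JSON line capturing the decision and its evidence."""
    record = {
        "ts": time.time(),
        "step": step,
        "reasoning": reasoning,          # the model's stated rationale for this action
        "tool": tool,
        "args": args,
        "result_preview": result[:500],  # truncate large tool outputs for the log
    }
    logfile.write(json.dumps(record) + "\n")
```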

Why Agents Are Different from Single-Turn LLMs

Three properties distinguish agentic AI from single-turn LLM use.

Compounding Decisions

A single LLM response is one decision. An agent makes many decisions in sequence — which tool to use, with what parameters, when to stop, when to escalate. Errors compound; the failure surface is much larger than the per-decision quality suggests.

Action in the World

Single-turn LLMs produce text that humans then act on. Agents take actions directly: spending money, modifying records, sending communications, changing configurations. The reversibility considerations of Module 1.30’s collaboration patterns apply with much greater force.

Operating Outside Direct Supervision

Agents often operate over time horizons longer than human supervision can practically cover. A 10-minute agent run that touches a dozen systems is hard to supervise in real time; a long-running agent that operates for hours or days requires asynchronous oversight patterns.

Distinctive Risks

Compounding Hallucination

If the agent hallucinates at one step, subsequent steps reason on the false premise. The errors propagate and amplify.

Unauthorised Action

Tool sets that are too permissive enable the agent to take actions it should not. An instruction to “look up customer information” can become the action “delete customer record” if the tool set permits it and the agent’s reasoning misfires.

Resource Consumption

Agents can consume significant resources: API calls, compute time, downstream system load. A poorly bounded agent can run up substantial cost before anyone notices.

Side Effect Accumulation

Agents that take many small actions can produce side effects whose aggregate is significant even when each action is innocuous. Auditing and reversing the cumulative effect is harder than auditing a single decision.

Adversarial Manipulation

Agents that take inputs from users or external sources can be manipulated through prompt injection in those inputs. The agent’s tool access amplifies the consequences. The OWASP Top 10 for Large Language Model Applications at https://owasp.org/www-project-top-10-for-large-language-model-applications/ includes specific guidance on agentic risks.

Goal Misalignment

The agent pursues an objective that approximates but does not match the true intent. The classical AI alignment problem manifests at much smaller scales in everyday agents.

The U.S. National Institute of Standards and Technology AI RMF Generative AI Profile at https://airc.nist.gov/AI_RMF_Knowledge_Base/Playbook/GenAI_Profile and emerging agent-specific guidance from the AI Safety Institutes (UK AISI at https://www.aisi.gov.uk/, U.S. AISI at https://www.nist.gov/aisi) catalogue these risks formally.

Governance Patterns

Bounded Autonomy

Agents operate within explicit boundaries: tool allowlists, action value limits, scope restrictions, time limits. The COMPEL Module 1.30 collaboration patterns map directly onto agent autonomy; most production agents operate in Patterns 4 (human reviews after the fact), 5 (human approves before consequential actions), or 3 (agent handles routine cases; human handles edge cases).
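
A minimal sketch of boundary enforcement, assuming a hypothetical customer-service tool set; the allowlist and value limit are illustrative, not recommendations:

```python
ALLOWED_TOOLS = {"lookup_customer", "send_email", "issue_refund"}  # example allowlist
MAX_REFUND = 100.0  # example per-action value limit

class PolicyViolation(Exception):
    """Raised when a proposed action falls outside the agent's explicit boundaries."""

def check_action(tool: str, args: dict) -> None:
    """Reject any out-of-bounds action before execution, not after."""
    if tool not in ALLOWED_TOOLS:
        raise PolicyViolation(f"tool {tool!r} is not on the allowlist")
    if tool == "issue_refund" and args.get("amount", 0) > MAX_REFUND:
        raise PolicyViolation("refund exceeds the autonomous value limit; escalate to a human")
```

The important design choice is that the check runs in the orchestration layer, outside the model’s reach, so the agent cannot reason its way past it.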

Tool Sandboxing

Tools that take consequential actions operate in sandboxed environments where the impact is contained or rehearsed before commitment. Database operations might run in transactions that require explicit commit; API calls might run in test environments before production.
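
A minimal sketch of the transaction pattern, using SQLite as a stand-in for the real database; the default is rehearsal, and commitment is a separate, explicit decision:

```python
import sqlite3

def rehearse_update(db_path: str, sql: str, params: tuple, commit: bool = False) -> int:
    """Run a write inside a transaction; roll back unless commit is explicitly requested."""
    conn = sqlite3.connect(db_path)
    try:
        cursor = conn.execute(sql, params)
        affected = cursor.rowcount
        if commit:
            conn.commit()    # the consequential step, taken only on explicit request
        else:
            conn.rollback()  # default: rehearse the change, leave the database untouched
        return affected
    finally:
        conn.close()
```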

Action Approval Workflows

Consequential actions trigger human approval rather than executing autonomously. Approval can be synchronous (the agent waits for human approval before proceeding) or asynchronous (the agent queues actions for batch human review).
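
A minimal sketch of the asynchronous variant, with hypothetical `approve` and `execute` callbacks standing in for the human review interface and the real tool dispatcher:

```python
import queue
from dataclasses import dataclass

@dataclass
class PendingAction:
    tool: str
    args: dict
    rationale: str  # the agent's stated reason, shown to the reviewer

approval_queue: queue.Queue = queue.Queue()

def propose(action: PendingAction) -> None:
    """Asynchronous pattern: the agent queues the action and moves on to other work."""
    approval_queue.put(action)

def review_batch(approve, execute) -> None:
    """A human (via approve) drains the queue; only approved actions are dispatched."""
    while not approval_queue.empty():
        action = approval_queue.get()
        if approve(action):  # approve: human-in-the-loop decision callback
            execute(action)  # execute: stand-in for the real tool dispatcher
```

The synchronous variant simply blocks the agent loop on `approve` before each consequential call.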

Reasoning Transparency

The agent’s reasoning at each step is logged and inspectable. When an outcome is questioned, the audit trail (per Module 1.21) supports investigation.

Resource and Time Bounds

Agents operate with explicit budgets: maximum tool calls, maximum compute time, maximum API spend. Hitting a limit terminates the agent gracefully and surfaces the situation for review.
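
A minimal budget tracker, charged once per tool call from the orchestration loop; the actual limit values are deployment-specific:

```python
import time

class BudgetExceeded(Exception):
    """Raised when any hard limit is hit; the orchestrator should terminate gracefully."""

class RunBudget:
    def __init__(self, max_tool_calls: int, max_seconds: float, max_spend: float):
        self.max_tool_calls = max_tool_calls
        self.max_seconds = max_seconds
        self.max_spend = max_spend
        self.tool_calls = 0
        self.spend = 0.0
        self.started = time.monotonic()

    def charge(self, cost: float = 0.0) -> None:
        """Call before each tool invocation with its estimated cost."""
        self.tool_calls += 1
        self.spend += cost
        if self.tool_calls > self.max_tool_calls:
            raise BudgetExceeded("tool-call limit reached")
        if time.monotonic() - self.started > self.max_seconds:
            raise BudgetExceeded("time limit reached")
        if self.spend > self.max_spend:
            raise BudgetExceeded("spend limit reached")
```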

Termination and Rollback

The agent has well-defined termination conditions and the actions it took are reversible (or at least logged in a form that supports reversal).
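
One way to keep actions reversible is a compensation journal: every action is logged with its inverse, and unwinding a run replays the inverses in reverse order. A sketch, with illustrative names:

```python
from dataclasses import dataclass

@dataclass
class JournalEntry:
    action: str        # what the agent did
    compensation: str  # the inverse action, or "none" if irreversible
    reversible: bool

journal: list = []

def record(action: str, compensation: str = "") -> None:
    """Log every action with its compensating action so the run can be unwound."""
    journal.append(JournalEntry(action, compensation or "none", bool(compensation)))

def rollback_plan() -> list:
    """Compensations in reverse order; irreversible steps are flagged for human review."""
    plan = []
    for entry in reversed(journal):
        if entry.reversible:
            plan.append(entry.compensation)
        else:
            plan.append(f"MANUAL REVIEW: cannot reverse {entry.action!r}")
    return plan
```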

Adversarial Testing

Agents undergo red-team testing specifically for prompt injection through their inputs, manipulation of their tool results, and social engineering of their reasoning.

Specific Agent Use Cases

Customer Service Agents

Multi-turn customer service that takes actions (issuing refunds, updating accounts, scheduling appointments). The customer service patterns of Module 1.29 apply with the additional considerations of agent autonomy.

Software Development Agents

Code-writing agents that operate over multiple steps to implement features, fix bugs, or refactor code. Tools such as Cursor, Devin, and Claude Code illustrate the pattern; the code generation considerations of Module 1.30 apply.

Research and Analysis Agents

Agents that gather information from multiple sources, synthesise it, and produce reports. The risks centre on factual accuracy, source credibility, and citation discipline.

Operations Agents

Agents that operate within enterprise systems for tasks like incident triage, data quality investigation, or routine administrative work.

Trading and Allocation Agents

Agents in financial markets that execute trades within risk limits. The financial services patterns of Module 1.28 apply with intense scrutiny on the autonomy boundary.

The Multi-Agent Question

Some architectures deploy multiple specialised agents that coordinate to accomplish tasks. The governance questions multiply: how do agents authenticate to each other, what evidence is produced, how is overall outcome attributed, and how do failures propagate?

Multi-agent governance is an active research area. Current best practice is to treat multi-agent systems as compositions of single-agent systems, with explicit coordination protocols, audit trails that trace the full multi-agent decision path, and human oversight at the system boundary even if individual agents operate autonomously within.

Common Failure Modes

The first is over-permissive tool sets — the agent has access to tools whose misuse can cause significant harm. Counter with explicit tool allowlists scoped to the minimum necessary.

The second is under-monitored long-running agents — agents that operate for extended periods without checkpoints. Counter with periodic review checkpoints and explicit termination conditions.

The third is invisible cost accumulation — agents that consume resources at rates not anticipated. Counter with hard budget limits and real-time cost monitoring.

The fourth is prompt injection through tool results — adversarial content in retrieved web pages, customer messages, or document content that hijacks the agent. Counter with input sanitisation, isolation, and adversarial testing; a minimal isolation sketch follows this list.

The fifth is agent hallucination cascades — early hallucinations that compound through subsequent steps. Counter with intermediate verification and termination on confidence drop.
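
The isolation sketch referenced in the fourth failure mode: wrap untrusted tool output in explicit markers so the reasoning prompt can treat it as data rather than instructions. The marker strings are assumptions, and delimiters are one layer of defence, not a complete answer to prompt injection:

```python
UNTRUSTED_OPEN = "<<untrusted-tool-result>>"
UNTRUSTED_CLOSE = "<</untrusted-tool-result>>"

def isolate(tool_result: str) -> str:
    """Wrap tool output in markers the reasoning prompt is told never to obey.

    Anything resembling the markers is stripped from the content first, so
    injected text cannot fake a closing tag. This reduces, but does not
    eliminate, the risk; combine with adversarial testing per the text above.
    """
    cleaned = tool_result.replace(UNTRUSTED_OPEN, "").replace(UNTRUSTED_CLOSE, "")
    return f"{UNTRUSTED_OPEN}\n{cleaned}\n{UNTRUSTED_CLOSE}"
```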

Looking Forward

The final article in Module 2.21 turns to agent orchestration frameworks — the platform layer that manages agent execution, observability, and governance at scale.


© FlowRidge.io — COMPEL AI Transformation Methodology. All rights reserved.