TL;DR: Agency isn’t an on/off switch or a high score; it’s a multi-dimensional profile that emerges from relationships, not circuits.
AI handbooks often define ‘agency’ as a neat five-step loop: perceive, plan, act, learn, repeat. It’s tidy. It’s useful. But is that all agency is?
A Google engineer, Antonio Gulli, recently released a preview of his book “Agentic Design Patterns: Building Intelligent Systems”.1 (Or it might have sneaked out somehow.) There’s some interesting stuff in there, pretty much what you’d expect in terms of design patterns and so on, but what caught my attention was how Gulli defines an AI agent and what that says about current AI capabilities.
An AI agent is a system designed to perceive its environment and take actions to achieve a specific goal […] It follows a simple, five-step loop to get things done: Get the Mission […], Scan the Scene […], Think It Through […], Take Action, Learn […] and Get Better […].2
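To make that loop concrete, here’s a minimal sketch of what such an agent might look like in code. The perceive, plan, act, and learn functions are hypothetical placeholders of my own, not an API from Gulli’s book; the only point is the shape of the loop.

```python
# A minimal sketch of the five-step agent loop quoted above.
# perceive/plan/act/learn are hypothetical placeholders, not Gulli's API.
from typing import Any, Callable


def run_agent(
    goal: str,                          # 1. Get the Mission
    perceive: Callable[[], Any],        # 2. Scan the Scene
    plan: Callable[[str, Any], Any],    # 3. Think It Through
    act: Callable[[Any], Any],          # 4. Take Action
    learn: Callable[[Any, Any], None],  # 5. Learn and Get Better
    max_steps: int = 10,
) -> None:
    """Run the perceive-plan-act-learn loop for a fixed number of steps."""
    for _ in range(max_steps):
        observation = perceive()
        action = plan(goal, observation)
        outcome = act(action)
        learn(action, outcome)
```

Notice how little the loop itself constrains: everything interesting hides inside those four callables.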
This definition isn’t wrong—it describes a real and useful class of AI systems. But as Mark Liberman notes, “agentic” computer systems have existed essentially as long as computers themselves3—from inventory management systems that automatically reorder stock to the agent-based models that, 50 years ago, could simulate cultural emergence using entirely unintelligent components. By reducing agency to a five-step optimisation loop, we’re not just creating a limited and brittle definition. We’re falling into conceptual traps that may prevent us from understanding what agency actually is. This is especially true as we move from thinking about individual AI systems to understanding how they interact with us.
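To see just how unintelligent those components can be, here’s a toy, Schelling-style agent-based model (my own illustration, not something from Gulli’s book or Liberman’s post). Each agent follows a single dumb rule (move if too few neighbours share my type), yet recognisable neighbourhood-scale patterns emerge from the interactions.

```python
# Toy Schelling-style agent-based model: each agent follows one local rule,
# yet neighbourhood-scale patterns emerge from the interactions.
# Purely illustrative; not taken from Gulli's book or Liberman's post.
import random

SIZE = 20        # grid is SIZE x SIZE
EMPTY = 0.1      # fraction of vacant cells
THRESHOLD = 0.4  # minimum fraction of like neighbours an agent tolerates

grid = [[0 if random.random() < EMPTY else random.choice([1, 2])
         for _ in range(SIZE)] for _ in range(SIZE)]


def unhappy(r: int, c: int) -> bool:
    """True if too few of this agent's occupied neighbours share its type."""
    me = grid[r][c]
    neighbours = [grid[(r + dr) % SIZE][(c + dc) % SIZE]
                  for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                  if (dr, dc) != (0, 0)]
    occupied = [n for n in neighbours if n != 0]
    if not occupied:
        return False
    return sum(n == me for n in occupied) / len(occupied) < THRESHOLD


for _ in range(50):  # a few dozen sweeps are enough for clusters to appear
    movers = [(r, c) for r in range(SIZE) for c in range(SIZE)
              if grid[r][c] != 0 and unhappy(r, c)]
    empties = [(r, c) for r in range(SIZE) for c in range(SIZE) if grid[r][c] == 0]
    random.shuffle(empties)
    for (r, c), (er, ec) in zip(movers, empties):
        grid[er][ec], grid[r][c] = grid[r][c], 0
```

No component here plans, learns, or “thinks”; whatever pattern appears lives entirely in the interaction.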
The False Dichotomy Problem
Here’s the first trap: we think of agency as an on/off switch. The most immediate issue is how this approach reinforces a false dichotomy: either a system follows this loop (and “has agency”) or it doesn’t (and is just a “tool”). This binary thinking forces us into unproductive debates: Is GPT-4 an agent? What about a thermostat? A forest ecosystem? A human child who can’t yet plan effectively?
Where do you draw the line in your own mind—between ‘just a tool’ and ‘something with agency’?
These questions feel important but lead nowhere, because they assume agency is something you either possess or lack, like having a driver’s license. But what if it isn’t? What if agency is more like intelligence or creativity—a property that manifests in multiple, distinct ways rather than as a single binary state?
The Single-Score Trap
Even when we move beyond binary thinking, we often fall into what I call the “single-score trap”—treating agency as a scalar quantity where more is always better. This leads to ranking systems: humans have “higher agency” than animals, which have “higher agency” than plants, which have “higher agency” than machines.
But this scalar view misses crucial distinctions. A chess algorithm might excel at long-term planning within its domain while being completely inflexible about goals. A child might be highly creative and adaptive while struggling with impulse control, meaning it would score ‘high’ on one dimension of agency and ‘low’ on another, rendering a single score meaningless. A forest ecosystem might exhibit sophisticated collective decision-making across decades while having no individual locus of control at all.
If we insist on a single agency score, we lose these nuanced differences. It’s like trying to rate both a jazz musician and a chess engine on the same ‘skill scale’—it collapses two very different kinds of ability into one meaningless number. Worse, we typically end up using human adult cognition as the implicit gold standard, making everything else seem like a deficient approximation.
Toward Multidimensional Agency
So how else might we think about it? One option is to treat agency as a profile rather than a scorecard: something that emerges from multiple, semi-independent dimensions. Rather than asking “how much agency does X have?”, we could ask “what’s X’s agency profile across different dimensions?”
Consider a few potential dimensions.
Creativity: Can the system generate genuinely novel solutions, or does it recombine existing patterns? Margaret Boden’s distinction between combinatorial, exploratory, and transformational creativity4 might map onto different forms of agency. In other words, some systems remix what’s already there, while others can break entirely new ground. Gulli’s scheduling AI operates in combinatorial mode—finding new arrangements of existing elements—but couldn’t invent entirely new ways of thinking about time management.
Intentionality: Following Michael Bratman’s work on planning agency,5 we might ask how the system forms and maintains intentions over time. This is the difference between reacting in the moment and sticking to a plan. Can it make commitments that structure future behaviour? Can it reconsider and revise its plans? When will it reconsider? The five-step loop captures basic planning but misses the deeper questions of goal ownership and temporal commitment.
Affordances: This dimension points toward something especially interesting. Rather than just responding to existing environmental possibilities, can the system actively enhance the affordances available to itself and others? Think of the reciprocity between a community and its river: a community that fulfils its obligations to the river—by not polluting it and managing its banks with care—finds that the river reciprocates, providing abundance like healthy fish stocks and clean water. Most current AI systems consume affordances without creating new ones.
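To make the “profile, not scorecard” idea concrete, here’s a rough sketch of what an agency profile might look like as a data structure. The dimensions and scores are illustrative placeholders rather than a proposed measurement scheme; the point is that collapsing the profile to a single number throws away exactly the distinctions described above.

```python
# A rough sketch of "profile, not scorecard": dimensions and scores here
# are illustrative placeholders, not a real measurement scheme.
from dataclasses import dataclass
from statistics import mean


@dataclass
class AgencyProfile:
    creativity: float      # combinatorial vs. transformational novelty
    intentionality: float  # forming and revising commitments over time
    affordances: float     # enhancing possibilities for self and others

    def as_single_score(self) -> float:
        """The single-score trap: averaging hides the shape of the profile."""
        return mean((self.creativity, self.intentionality, self.affordances))


chess_engine = AgencyProfile(creativity=0.3, intentionality=0.9, affordances=0.1)
young_child = AgencyProfile(creativity=0.9, intentionality=0.3, affordances=0.1)

# Very different profiles, identical scalar scores:
print(chess_engine.as_single_score(), young_child.as_single_score())
```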
The Affordances Bridge
The affordances dimension suggests something intriguing: we might need two complementary ways of thinking about agency altogether.
One approach looks from the outside-in: observing an agent and mapping its capabilities across various dimensions. This is useful for analysis, comparison, and improvement—understanding what different systems can do and how they might be enhanced.
But another approach might be inside-out: understanding agency as emerging from webs of relationship and mutual influence. Think of a beehive: no single bee ‘has’ agency in the human sense, but together their patterns of coordination create a collective agency. From this perspective, agency isn’t something an individual possesses but something that emerges from patterns of interaction within larger systems. The community’s agency includes its relationship with the river, surrounding geology, and the life it sustains.
Two Models, Multiple Questions
I’m still exploring how these two approaches relate. They seem both complementary and conflicting. The outside-in view helps us understand and improve individual capabilities. The inside-out view helps us understand how agency emerges from relationships and contexts.
But they also pull against each other. If agency is fundamentally relational, does it make sense to map individual capabilities at all? If we focus on individual dimensions, do we miss the emergent properties that arise from interaction?
Some questions I keep tripping over:
Could an AI system have high scores on individual dimensions while being embedded in contexts too impoverished to support meaningful agency?
Conversely, could systems with seemingly low individual capabilities participate in forms of collective agency we don’t yet recognise?
What happens to the concept of agency when it’s distributed between a human and an AI? Does it make sense to talk about “hybrid human-AI agency,” and if so, what does success look like?
How do we design AI systems that enhance rather than degrade the relational contexts from which agency emerges?
Opening the Discussion
Gulli’s five-step definition points toward something real and important. But I suspect it’s just one narrow slice of a much richer phenomenon. If we’re going to build AI systems that enhance rather than diminish human and ecological flourishing, we need more sophisticated ways of thinking about agency itself.
If we get this wrong, we risk designing systems that look agentic on paper but hollow out the contexts that actually make agency meaningful.
What dimensions of agency do you think matter most? Have you observed forms of agency that don’t fit the standard cognitive science models? How do you think about the relationship between individual capabilities and collective emergence?
Flame, praise, or remix below—comments are open. The stakes feel high enough that we need to have these conversations now, before our definitions become too entrenched to revisit.
1. Gulli, Antonio. Agentic Design Patterns: Building Intelligent Systems. n.d. (NotebookLM). https://notebooklm.google.com/notebook/44bc8819-958d-4050-8431-e7efe2dbd16e.
2. Gulli, Antonio. “Chapter 4: What makes an AI system an agent?” In Agentic Design Patterns: Building Intelligent Systems.
3. Liberman, Mark. “What Makes an AI System an Agent?” Language Log, September 4, 2024. https://languagelog.ldc.upenn.edu/nll/?p=70892.
4. Boden, Margaret A. The Creative Mind: Myths and Mechanisms. 2nd ed., reprint. Routledge, 2005, p. 3 (“Introduction”).
5. Bratman, Michael. Intention, Plans, and Practical Reason. Harvard University Press, 1987.