Behind the Paper: Why I Wrote From Assistants to Agents

When we talk about artificial intelligence, much of the debate still oscillates between two familiar images. On one side, AI is treated as a sophisticated tool: something that supports human action but remains external to responsibility. On the other side, AI is sometimes described in near-anthropomorphic terms, as if it were becoming an autonomous actor in its own right. Both views, in different ways, seemed incomplete to me.

What increasingly interested me was a more difficult question: what happens when AI is neither merely assistive nor fully autonomous, but instead operates within structured systems of delegation, supervision, and institutional responsibility? In other words, what happens when action is distributed across humans, artificial systems, and governance structures?

That question became the starting point for my article, From Assistants to Agents: A Relational Framework for Human–AI Co-Agency, now published in AI and Ethics.

The central intuition behind the article is simple: the rise of agentic AI requires us to rethink agency relationally. In many real-world settings, AI does not act in isolation. Its apparent initiative is enabled, bounded, and interpreted through human decisions, institutional goals, technical design, and governance mechanisms. This means that responsibility cannot be understood by looking only at the machine, or only at the individual user. It has to be understood through the structure of the broader sociotechnical system.

To make that argument more operational, I proposed a four-dimensional framework built around initiative, decision scope, oversight, and responsibility attribution. These dimensions are intended to help clarify where delegation begins, how far it extends, how supervision is preserved, and how accountability should remain anchored even when systems take on greater initiative.
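To give a rough sense of how the four dimensions might be operationalized, here is a minimal sketch in Python. The field names, the 0–3 scales, and the "oversight gap" heuristic are all illustrative assumptions of mine, not definitions from the article; the point is only to show how a delegation arrangement could be described along the four dimensions and checked for imbalance.

```python
from dataclasses import dataclass

# Hypothetical, simplified encoding of the four dimensions as a
# "delegation profile". The scales (0-3) and the gap rule below are
# illustrative assumptions, not the article's own definitions.

@dataclass
class DelegationProfile:
    initiative: int        # 0 = none, 3 = system proposes and initiates actions
    decision_scope: int    # 0 = narrow tasks, 3 = broad, open-ended decisions
    oversight: int         # 0 = none, 3 = continuous human supervision
    responsibility: int    # 0 = unassigned, 3 = clearly anchored accountability

    def oversight_gap(self) -> int:
        """How far delegated action outruns supervision and accountability.

        Positive values flag arrangements where initiative and decision
        scope have grown faster than the oversight and responsibility
        structures meant to contain them.
        """
        delegated = self.initiative + self.decision_scope
        anchored = self.oversight + self.responsibility
        return delegated - anchored


# A highly agentic system under weak governance shows a clear gap,
# while a balanced arrangement does not.
risky = DelegationProfile(initiative=3, decision_scope=3,
                          oversight=1, responsibility=1)
balanced = DelegationProfile(initiative=2, decision_scope=2,
                             oversight=2, responsibility=2)
```

In this toy reading, `risky.oversight_gap()` is 4 and `balanced.oversight_gap()` is 0: the comparison matters more than the absolute numbers, since the framework's claim is relational, about the structure of the whole arrangement rather than any single score.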

One reason I felt this mattered is that discussions about AI governance often remain either too abstract or too procedural. We speak in large terms about ethics, trust, regulation, and policy, but we often lack a practical conceptual bridge between theory and institutional design. I wanted this work to contribute to that middle layer: not only asking what AI is, but how human–AI arrangements should be evaluated when decisions, actions, and consequences are distributed.

I was also concerned that the vocabulary of “autonomy” can sometimes obscure more than it reveals. The issue is not simply whether a system appears autonomous. The more important question is whether institutions remain capable of structuring delegation responsibly. A system may appear highly capable while operating under weak governance. Conversely, human accountability can be eroded not because humans disappear, but because roles, boundaries, and responsibility structures become blurred.

In that sense, the challenge posed by agentic AI is not machine agency in isolation. It is the alignment of delegated action with meaningful human oversight and institutional responsibility. More broadly, I hope this work contributes to a more grounded and rigorous conversation about how societies, organizations, and governance systems should respond as AI becomes more agentic.

Read the article
SharedIt: https://rdcu.be/fgHrc
DOI: https://doi.org/10.1007/s43681-026-01111-5