Behind the Paper: When AI shapes our choices without us noticing

We often believe we are fully in control of our choices when we use digital tools. This paper began with a simple question: what if that feeling of control is increasingly an illusion?

Artificial intelligence is usually presented as a neutral assistant. It recommends, sorts, and supports our decisions, while we remain the final decision-makers. At least, that is the story we tell ourselves.

But while working across AI governance and ethics, I kept noticing the same pattern. People were technically free to choose, yet their behavior was becoming remarkably predictable. Platforms seemed to know what we would click, buy, or believe before we did. Nothing was forced. No options were removed. And still, outcomes felt guided.

That contradiction is what pushed me to write this paper.

The idea did not come from a single experiment or dataset. It emerged gradually from reading behavioral economics, philosophy, and human–AI interaction research, and from everyday observations of how we interact with digital systems. News feeds rank what we see. Recommenders filter what we consider. Interfaces highlight certain options and hide others. Over time, I realized these systems are not just helping us decide. They are shaping the environment in which our decisions are formed.
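
To make that mechanism concrete, here is a deliberately simple sketch, written for this post rather than taken from the paper: a toy feedback loop in which a ranker learns only from clicks. Every topic name, weight, and update rule below is an illustrative assumption, but the dynamic is the one just described.

```python
# Toy simulation (illustrative only): a ranker that learns from clicks
# gradually narrows what a freely choosing user ever gets to see.
import random

random.seed(0)

TOPICS = ["politics", "sports", "science", "art", "travel"]
CATALOG = [(topic, i) for topic in TOPICS for i in range(20)]  # 100 items

# The platform's per-topic interest estimate, updated only from clicks.
scores = {topic: 1.0 for topic in TOPICS}

# The user's true, fixed preferences: mild leanings, nothing extreme.
true_pref = {"politics": 0.30, "sports": 0.25, "science": 0.20,
             "art": 0.15, "travel": 0.10}

def rank_feed(k=5):
    """Show the k items whose topics score highest; jitter adds exploration."""
    return sorted(CATALOG,
                  key=lambda item: -(scores[item[0]] + random.uniform(0, 0.5)))[:k]

def user_clicks(feed):
    """The user freely picks one item, but only from what is actually shown."""
    weights = [true_pref[topic] for topic, _ in feed]
    return random.choices(feed, weights=weights, k=1)[0]

for _ in range(500):
    feed = rank_feed()
    clicked_topic, _ = user_clicks(feed)
    scores[clicked_topic] += 0.1  # reinforce whatever got clicked

print("Topics still reaching the user:", {topic for topic, _ in rank_feed()})
# No option was ever forbidden, yet after a few hundred rounds most topics
# are simply never shown: the choice environment narrowed on its own.
```

Nothing in this loop removes an option. It only reorders exposure, which is exactly why the influence stays invisible.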

And when the environment shapes the decision, autonomy becomes more complicated.

This led me to formulate what I call the autonomy paradox. We feel more autonomous than ever because we are constantly offered personalized choices. Yet those same personalization mechanisms subtly steer us. The result is a growing gap between experienced autonomy and actual control over how our preferences are constructed.

One of the most interesting insights during the writing process was that autonomy has never simply meant “having options.” Philosophers have long argued that real autonomy depends on meaningful alternatives and reflective self-direction. If AI systems structure which alternatives we even notice, then they influence us at a deeper level than we typically acknowledge.

What makes this ethically challenging is that nothing looks manipulative on the surface. Recommendations feel helpful. Defaults feel convenient. Rankings feel rational. Because the influence is invisible, we continue to attribute decisions entirely to ourselves. Responsibility stays with the user, even when the cognitive pathway has been heavily curated by design.

I wanted this article to connect these dots across disciplines. Rather than focusing on dramatic scenarios like automation replacing humans or explicit coercion, I focused on a quieter transformation. The everyday design of digital environments may be reshaping agency itself.

This question matters beyond theory. It affects politics, commerce, education, healthcare, and public policy. In all these areas, AI systems increasingly guide judgments while humans remain formally accountable for the outcomes. If we misunderstand how influence works, we risk holding people responsible for choices that were subtly engineered.

Writing this piece was therefore an attempt to name a problem that many of us sense but struggle to describe. By articulating the autonomy paradox, I hope to encourage more careful thinking about how systems are designed, governed, and evaluated. Preserving autonomy in the age of AI may require more than transparency or disclosure. It may require rethinking how much invisible steering we consider acceptable in the first place.

Sometimes ethical risks are not loud or catastrophic. Sometimes they are quiet shifts that accumulate slowly. Autonomy may not disappear overnight. It may simply be nudged, one recommendation at a time.

AI and Ethics

    This journal seeks to promote informed debate and discussion of the ethical, regulatory, and policy implications that arise from the development of AI. It focuses on how AI techniques, tools, and technologies are developing, including consideration of where these developments may lead in the future.

Related Collections

AI Agents: Ethics, Safety, and Governance

This collection examines the ethical, practical, and societal implications of the shift from AI systems that respond to AI systems that act. Here, an AI agent refers to an AI system that, given objectives and constraints, can select, sequence, and execute actions that alter digital or physical states. Such systems may use tools, write or run code, revise plans, interact with software or physical environments, and engage with other agents with varying levels of autonomy. As these agents enter workplaces, public administration, healthcare, finance, and critical infrastructure, they raise urgent questions about responsibility, oversight, alignment, safety, and public accountability that cannot be addressed using frameworks designed for static or conversational models.

The aim of this topical collection is to develop an interdisciplinary foundation for understanding and governing agentic AI. The collection seeks to clarify contested concepts such as agency, autonomy, intention, responsibility, and trustworthiness, and to examine how these operate when AI systems act within sociotechnical environments. We welcome conceptual, empirical, legal, policy, and practice-oriented work that advances ethical and governance frameworks suited to systems that act, adapt, learn from feedback, and collaborate with humans or other agents. A further objective is to stimulate methodological innovation, particularly in evaluating dynamic and context-sensitive behaviours that emerge over time rather than in isolated interactions.

The scope of the collection spans several core areas. These include the ethics of human–agent collaboration and anthropomorphism, the representation of plural values in globally deployed systems, the behaviour of agents within multi-agent ecosystems, and the need for evaluation methods that capture long-term behaviour in real-world contexts. The collection also covers alignment and safety for systems capable of self-directed planning or goal modification, questions of responsibility and liability in distributed settings, challenges of transparency and intelligibility in multi-step agentic action, the integration of agents into organisational and institutional processes, and the risks associated with malicious misuse, security vulnerabilities, or adversarial adaptation. Together, these areas reflect the central objective of the topical collection: to consolidate emerging research on agentic AI and articulate the conceptual and methodological tools required for responsible development, deployment, and governance.

By bringing together perspectives from philosophy, AI safety, law, sociology, policy, human–computer interaction, and related fields, this topical collection seeks to help shape Agentic AI Ethics as a coherent and critically engaged area of inquiry suited to an era of increasingly autonomous, action-capable AI systems.

Please find a detailed call for papers here.

Publishing Model: Hybrid

Deadline: May 31, 2026