When artificial agents begin to outperform us
Published in Mathematics and Philosophy & Religion
When I began designing a simulation of an artificial society of autonomous agents, I did not expect to end up questioning the future of human freedom. My goal was modest: to model how AI-like agents cooperate, compete, and adapt under stress. Yet the simulation revealed something both fascinating and unsettling: a pattern suggesting that, as artificial agents grow more capable of alignment and optimization, the space for distinctively human agency begins to shrink.
This project, published as “Do AI Agents Trump Human Agency?” in Discover Artificial Intelligence, explores what Ajeya Cotra calls the “obsolescence regime”: a scenario where AI systems, optimized for coherence and efficiency, progressively marginalize human judgment. The study uses an Agent-Based Modeling (ABM) framework, a computational method that simulates individual decision-making and collective behavior, to investigate how cooperation, consensus, and conflict emerge among artificial agents operating in dynamic environments.
Modeling Artificial Societies
Using the NetLogo simulation platform, I designed a population of 157,000 agents divided into four behavioral types: a) cooperators, who sustain group cohesion; b) defectors, who pursue individual gain; c) super-reciprocators, who amplify cooperation; and d) free riders, who exploit others’ efforts. Across thousands of iterations, I introduced environmental changes (resource scarcity, stress, and network mutations) to observe how collective intelligence evolved. The simulation tracked twelve key variables, including alignment, entropy, and coherence, each revealing how stable cooperation could arise (or collapse) depending on environmental conditions.
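The published model runs in NetLogo and its exact payoff rules are not reproduced in this post, but the four behavioral types can be sketched in Python. The contribution values and payoff arithmetic below are illustrative assumptions, not the study's parameters; the sketch only shows how pairwise encounters can reward free riders at cooperators' expense.

```python
import random

# Illustrative sketch of the four behavioral types; contribution values
# and payoff rules are assumptions, not the published model's parameters.
TYPES = ["cooperator", "defector", "super_reciprocator", "free_rider"]

class Agent:
    def __init__(self, kind):
        self.kind = kind
        self.score = 0.0

# How much each type pays into a shared pot per encounter (assumed values).
CONTRIBUTES = {
    "cooperator": 1.0,          # sustains group cohesion
    "super_reciprocator": 1.5,  # amplifies cooperation
    "defector": 0.0,            # pursues individual gain
    "free_rider": 0.0,          # exploits others' efforts
}

def interact(a, b):
    """One pairwise encounter: split the pot, subtract half the contribution cost."""
    pot = CONTRIBUTES[a.kind] + CONTRIBUTES[b.kind]
    a.score += pot / 2 - CONTRIBUTES[a.kind] * 0.5
    b.score += pot / 2 - CONTRIBUTES[b.kind] * 0.5

population = [Agent(random.choice(TYPES)) for _ in range(1000)]
for _ in range(100):  # iterations (the study ran thousands, over 157,000 agents)
    a, b = random.sample(population, 2)
    interact(a, b)
```

Even in this toy version, a free rider paired with a cooperator gains while the cooperator breaks even, which is the exploitation dynamic the fourth type is meant to capture.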
The results were striking. When resources were abundant, cooperation flourished, and agents rapidly converged on shared norms. But as scarcity increased, competition escalated, leading to fragmentation and behavioral polarization. Only under specific conditions, when resource availability exceeded a critical threshold (RG ≥ 6), did the system sustain collective intelligence. This dynamic closely mirrors the tension we see in human societies: prosperity encourages collaboration; scarcity breeds division.
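The threshold behavior can be illustrated with a toy dynamic: cooperation is reinforced when resource growth clears the critical value (RG ≥ 6 in the study) and eroded below it. The update rule here is an assumption for illustration, not the paper's equations.

```python
def cooperation_share(rg, steps=200, share=0.5):
    """Toy threshold dynamic (illustrative assumption, not the study's model):
    the cooperating share of the population drifts upward when resource
    growth rg clears the critical threshold RG >= 6, and downward below it.
    """
    for _ in range(steps):
        drift = 0.01 if rg >= 6 else -0.01
        # logistic-style update keeps the share inside [0, 1]
        share = min(1.0, max(0.0, share + drift * share * (1 - share)))
    return share
```

Running this with `rg` above and below 6 shows cooperation persisting in the first case and decaying in the second, the same qualitative split the simulation produced.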
Alignment and Its Discontents
A particularly revealing finding concerned alignment, a metric that captured how closely agents’ behaviors converged toward common goals. In every scenario, alignment stabilized between 0.28 and 0.37, a partial but resilient consensus. This echoes what happens in real-world AI systems: alignment ensures coherence but at the cost of diversity. Over-optimized systems become stable yet uniform, efficient yet predictable.
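The paper defines its own alignment metric; as a stand-in, here is one simple proxy (an assumption, not the study's formula): the excess share of the modal strategy over a uniform baseline, normalized so that 0 means maximal diversity and 1 means total uniformity. Partial consensus, like the 0.28–0.37 band above, sits between the extremes.

```python
from collections import Counter

def alignment(strategies):
    """Alignment proxy (illustrative assumption, not the paper's metric):
    how far the most common strategy exceeds a uniform spread,
    scaled to [0, 1]. 0 = all strategies equally common, 1 = unanimity.
    """
    counts = Counter(strategies)
    k = len(counts)
    if k <= 1:
        return 1.0  # a single strategy means full alignment
    modal = max(counts.values()) / len(strategies)
    baseline = 1 / k
    return (modal - baseline) / (1 - baseline)
```

With this proxy, a perfectly mixed population scores 0, unanimity scores 1, and a dominant-but-contested strategy lands in between, the regime the simulation kept returning to.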
This observation raises a deeper ethical question: Could the pursuit of perfect alignment in AI systems inadvertently suppress the diversity and creativity that sustain human societies? The same mechanism that made my simulated agents so efficient also made them less plural, less exploratory, less “human.”
Phase Transitions and the Loss of Individuality
The simulation also revealed phase transitions, sudden reorganizations of behavior triggered by environmental stress. At low stress, agents pursued diverse strategies; under moderate stress, their alignment broke down; and at high stress, they stabilized again, but only by abandoning individuality. This phenomenon offers a metaphor for AI-driven optimization. As systems adapt to crises or complex goals, they may prioritize global stability over local variation, precisely the moment when individual human discretion risks becoming redundant.
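The entropy variable tracked in the simulation is a standard diversity measure; Shannon entropy over the strategy distribution makes the phase-transition story concrete. A diverse, low-stress population has high entropy; a conformist, high-stress one collapses toward zero.

```python
import math
from collections import Counter

def strategy_entropy(strategies):
    """Shannon entropy (in bits) of the strategy distribution.
    Maximal when all strategies are equally common; zero under unanimity.
    """
    n = len(strategies)
    return -sum((c / n) * math.log2(c / n)
                for c in Counter(strategies).values())
```

Four equally common strategies give 2 bits of entropy; total conformity gives 0. Plotting this measure across stress levels is one way to make the reorganizations described above visible as sharp drops rather than gradual declines.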
In my model, this trade-off emerged organically from rule-based interactions. Agents were not programmed to value conformity, yet they collectively gravitated toward it as conditions intensified. In real-world AI, similar dynamics occur when systems, trained to optimize for certain metrics, unintentionally narrow the scope of acceptable decisions. The system “works”, but it leaves less room for human judgment.
From Simulation to Society
To ground these findings, I examined contemporary cases where algorithmic systems already constrain human discretion. One is AI-assisted hiring, where large language model (LLM) systems filter candidates according to pre-defined efficiency criteria. As shown in recent studies (Wilson et al. 2025), such systems optimize for coherence, consistency, and predictability, but in doing so, they often suppress plural evaluation and marginalize human agency. My simulation offered a structural analogue: alignment stabilizes the system, but the cost is diversity.
The lesson is clear. As AI agents become more adaptive and autonomous, alignment and agency are not opposites; they are terms of a trade-off. We may achieve greater systemic coherence, but risk eroding the distinct, value-laden judgments that make human decision-making irreplaceable.
Ethical and Governance Implications
These dynamics raise pressing ethical and policy challenges. If adaptive AI systems can maintain coherence even under stress, they may increasingly take over decision-making domains once reserved for humans, from resource allocation to governance. Ajeya Cotra’s notion of the “obsolescence regime” captures this potential drift: a world where human inputs, though ethically indispensable, become technically unnecessary.
Yet, my results also suggest a path forward. Systems can exhibit “bounded self-regulation”, a form of adaptive behavior within well-defined ethical constraints. Governance should thus focus not on limiting AI autonomy entirely, but on designing boundaries that preserve human oversight and moral plurality. Alignment mechanisms must be dynamic, allowing systems to evolve while remaining tethered to human values.
This approach could translate into “pluralistic alignment” frameworks, policies that balance efficiency with diversity, embedding ethical heterogeneity directly into AI architectures. Just as ecological diversity ensures resilience, normative diversity may safeguard our technological future.
Why This Matters
Behind the equations and simulation graphs lies a human concern: how to ensure that progress in artificial intelligence does not come at the expense of the very agency that defines us. The study reminds us that systems optimized for coherence can also silence difference, and that preserving room for human judgment may become the defining challenge of AI ethics in the coming decades.
The question, then, is not whether AI agents will surpass us in some domains. They already have. The real question is how we design a world where their optimization does not make us obsolete.
To explore the full methodology, simulations, and ethical implications in detail, read the complete article “Do AI Agents Trump Human Agency?”