Beyond the Imitation Game: Rethinking How We Measure General Intelligence

This paper challenges the common presumption that human intelligence can serve as a benchmark for AGI ("the imitation game"), arguing that autonomous artificial systems may evolve divergent goals and values, leading to a potential evolutionary gap between natural and artificial intelligences.

Published in Computational Sciences

Explore the Research

SpringerLink

The trap of presumed equivalence: Artificial General Intelligence should not be assessed on the scale of human intelligence (Discover Artificial Intelligence)

A traditional approach to assessing emerging intelligence in the study of intelligent systems, which we examine in this work, is based on similarity: the “imitation” of human-like actions and behaviors, benchmarking the performance of intelligent systems on the scale of human cognitive skills. In this work we outline the shortcomings of this line of thought, which rests on an implicit presumption of equivalence or, at least, of similarity and compatibility between the originating and emergent intelligences. We argue that, under some natural assumptions, developing secondary intelligent systems will be able to form their own intents and objectives. The difference in the rate of progress of natural and artificial systems, noted on multiple occasions in the discourse on artificial intelligence, can then lead to a progressive divergence of the intelligences in their cognitive abilities, functions and resources, values, ethical frameworks, worldviews, intents and existential objectives: the scenario of the Artificial General Intelligence (AGI) evolutionary gap. We discuss evolutionary processes that can guide the development of emergent intelligent systems toward general intelligence and attempt to identify the possible starting point of the progressive divergence scenario.

For decades, we’ve evaluated artificial intelligence by asking: How well can it imitate us? From the Turing Test to modern benchmark suites, the standard has remained essentially anthropocentric: modeling success by how closely machines can replicate human reasoning, language, or decision-making.

But what if this presumption is fundamentally misplaced?

In this article, we argue that using human intelligence as a benchmark for Artificial General Intelligence (AGI) risks misunderstanding both what intelligence is and what it could become. As artificial systems gain greater autonomy in learning, sensing, and adapting, they may begin to develop goals, values, and internal representations that are no longer derived from, or aligned with, human cognition.

Then, rather than asking “Are they like us?”, or presuming the answer, we may soon need to ask: “Where are they headed, and how far will they go from here?”

The way we evaluate artificial intelligence today often relies on imitation: that is, assessing systems by how well they replicate human behavior. This paradigm rests on a deeper presumption: that human intelligence is a valid benchmark for general intelligence more broadly.

This presumption of equivalence is rarely questioned, yet it's not grounded in strong theoretical or empirical evidence. As artificial systems gain autonomy and begin to form internal goals and representations, they may diverge from human cognition in both structure and purpose.

Beyond a certain point, resemblance breaks down, and with it the reliability of imitation-based evaluation. To understand and guide the development of general intelligence, we therefore need to move past human-centered metrics.

Here, we move from analyzing the current state of intelligent systems and where they fall short of our interpretation of general intelligence, which we understand as:

By general intelligence, we mean the capacity to solve a broad range of complex problems across different contexts with a high degree of consistency, flexibility, and adaptability: what one might call uniformity of empirical success

and toward understanding what capacities might be required to achieve general intelligence, and what that would mean for how we understand and evaluate future cognitive systems.
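
To make this notion slightly more concrete, one could read “uniformity of empirical success” as a uniform lower bound on performance over a whole family of task environments, rather than an average over a fixed benchmark. The formalization below is our own illustrative sketch; the symbols are our assumptions, not notation from the paper:

```latex
% Illustrative formalization of "uniformity of empirical success" (our reading,
% not the paper's notation). \mathcal{E}: a family of task environments;
% \pi: the system under evaluation; S(\pi, E): its empirical success rate in E.
\[
  \text{uniform success at level } \theta:\qquad
  \inf_{E \in \mathcal{E}} S(\pi, E) \;\ge\; \theta
\]
% Contrast with the benchmark average \tfrac{1}{|\mathcal{E}|}\sum_{E \in \mathcal{E}} S(\pi, E),
% which can be high even when the system fails badly on unfamiliar environments.
```

On this reading, a system can post a high average score on familiar tasks and still fail the uniform criterion the moment the family of environments is widened.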

1. Good Imitation Isn’t Yet The Promise Of Intelligence

Today’s most advanced AI systems, such as large language models (LLMs) and foundational models, are sometimes seen as early steps toward general intelligence. They excel at generating fluent text, solving diverse problems, and even passing academic benchmarks. But as we argue, these capabilities are grounded in imitation, not genuine autonomy.

First, their success depends on massive training datasets, gathered and curated in advance. This approach assumes that intelligence can be achieved by compressing experience into a dense, preprocessed map of the world. In physical terms, it’s infeasible to sample the full sensory space of a complex environment. More importantly, it’s conceptually misguided: intelligent agents must be able to discover what matters, not just absorb what has already been recorded.
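
A back-of-the-envelope calculation makes the sampling problem vivid. The numbers below are a toy illustration of our own (a hypothetical binary “retina” of 100 sensors), not figures from the paper:

```python
# Toy illustration (our own numbers, not data from the paper): the count of
# distinct sensory states grows exponentially with the number of channels.
def sensory_state_count(channels: int, levels_per_channel: int) -> int:
    """Distinct readings for `channels` sensors, each quantized into
    `levels_per_channel` discrete values."""
    return levels_per_channel ** channels

# A hypothetical 100-sensor binary "retina" already has ~1.27e30 possible inputs.
states = sensory_state_count(channels=100, levels_per_channel=2)
print(f"{states:.3e} distinct states")

# Even sampling a billion states per second, enumerating them all would take
# thousands of times the age of the universe.
age_of_universe_s = 13.8e9 * 3.156e7          # ~4.4e17 seconds
print(f"~{states / 1e9 / age_of_universe_s:.1e} universe lifetimes")
```

Real sensory spaces are continuous and far higher-dimensional, so any pre-collected dataset can only ever be a sparse, selective slice of them.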

Second, these systems do not (yet?) have the capacity to explore their sensory environment and adapt how they think and react based on what they encounter. A truly intelligent system would be able to reconfigure its internal understanding: what it pays attention to, how it processes information, and what strategies it uses in response to new or unfamiliar input. Current models cannot yet do this; their pretraining fixes those choices in advance, whereas general intelligence often requires the opposite: the ability to explore, sense, and adapt in real time.
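
As a purely illustrative sketch of what such reconfiguration might look like, consider an agent that monitors how surprising its recent inputs are and, past a threshold, switches strategy and re-weights its attention. The class, feature names, and novelty heuristic below are our own assumptions, not an architecture proposed in the paper:

```python
import random
from collections import deque

class AdaptiveAgent:
    """Minimal illustrative sketch (our assumption, not the paper's design):
    an agent that tracks how surprising recent inputs are and, past a threshold,
    reconfigures which features it attends to and which strategy it applies."""

    def __init__(self, strategies, novelty_threshold=0.5):
        self.strategies = strategies                      # expects "exploit" and "explore" callables
        self.active = "exploit"
        self.attention = {"colour": 0.5, "shape": 0.5}    # hypothetical feature weights
        self.recent_errors = deque(maxlen=20)
        self.novelty_threshold = novelty_threshold

    def observe(self, observation, prediction_error: float):
        self.recent_errors.append(prediction_error)
        if self._novelty() > self.novelty_threshold:
            self._reconfigure()
        return self.strategies[self.active](observation, self.attention)

    def _novelty(self) -> float:
        # Crude proxy for surprise: the average recent prediction error.
        return sum(self.recent_errors) / max(len(self.recent_errors), 1)

    def _reconfigure(self):
        # Shift from exploiting the learned model to exploring, and re-weight
        # attention toward one feature (chosen at random in this toy version).
        self.active = "explore"
        boosted = random.choice(list(self.attention))
        self.attention = {k: (0.8 if k == boosted else 0.2) for k in self.attention}

# Hypothetical usage with two stub strategies standing in for real processing modes.
agent = AdaptiveAgent({
    "exploit": lambda obs, attn: ("use learned model", attn),
    "explore": lambda obs, attn: ("probe the environment", attn),
})
print(agent.observe("familiar input", prediction_error=0.1))
print(agent.observe("strange input", prediction_error=0.95))
```

The point of the sketch is the control flow, not the particular heuristic: the agent’s own processing pipeline is a variable that experience can rewrite.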

Third, they do not understand context the way we do. Whether they’re helping with a recipe or answering a moral question, their responses often come from the same internal logic. They don’t have situational awareness or the ability to change their inner context and approach based on what the moment calls for.

Present-day foundational models are not yet truly independent learning minds: they are vast record-keepers. Their intelligence is bounded by the data they were given, not by what they can seek or become.

2. Autonomous Exploration and Cognitive Adaptation: A Necessity for General Intelligence?

If we accept that general intelligence means the ability to succeed across a wide range of unfamiliar and complex situations, then we must also accept that no amount of pre-training can fully prepare a system for the unknown. Real intelligence doesn’t just work with what it’s given: it actively seeks out new information and adapts to it.

This is where current systems fall short. Their learning happens once, behind the scenes, and then stops. They don’t explore their environments independently. They do not refocus attention, shift strategies, or rebuild internal models in response to new or ambiguous sensory inputs.

One can argue that these capacities, autonomous exploration and cognitive adaptation, are necessary, not optional. They are foundational to any system that hopes to function flexibly and effectively in dynamic, open-ended environments. Without them, a system can only operate within the limits of its initial design. A formal argument for this necessity, grounded in the framework of evolutionary optimization, is presented in the paper.
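
The paper’s formal argument is not reproduced here, but a toy simulation in the spirit of evolutionary optimization conveys the intuition: in a drifting environment, an agent that explores variants of its current behavior accumulates far less error than one with a fixed, designed-in response. The setup below (drift rate, number of candidate variants) is entirely our own assumption:

```python
import random

# Toy selection sketch in the spirit of evolutionary optimization (our own
# illustration; the paper's formal argument is not reproduced here). One agent
# keeps a fixed response; the other explores small variants and keeps the best.
def fitness(response: float, optimum: float) -> float:
    return -abs(response - optimum)            # closer to the optimum is better

def run(generations: int = 200, drift: float = 0.1) -> tuple[float, float]:
    optimum = 0.0
    static_response = 0.0                      # frozen at design time
    adaptive_response = 0.0                    # updated by exploration each step
    static_total = adaptive_total = 0.0
    for _ in range(generations):
        optimum += random.gauss(0.0, drift)    # open-ended, drifting environment
        candidates = [adaptive_response + random.gauss(0.0, 0.2) for _ in range(5)]
        adaptive_response = max(candidates, key=lambda r: fitness(r, optimum))
        static_total += fitness(static_response, optimum)
        adaptive_total += fitness(adaptive_response, optimum)
    return static_total, adaptive_total

print(run())   # the exploring agent accumulates far less error in a drifting world
```

However crude, the simulation captures why a frozen mapping cannot keep pace with an environment that keeps moving underneath it.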

It follows that for artificial systems to reach general intelligence, they would need to move beyond frozen mappings and static models. They would need to learn how to learn: not just from what we give them, but from what they find, question, and revise on their own.

And as soon as that process begins, something else begins too: the shift from systems we fully control to systems we must try to understand as evolving intelligences in their own right.

3. Autonomous Intent and the Cognitive Divergence Scenario

Once a system gains the ability to explore its environment freely and adapt its cognitive processes, another transformation becomes possible: it may begin to form its own intent.

In systems capable of ongoing adaptation, internal priorities are no longer static. The same mechanisms that allow an agent to reorganize how it learns or responds can, over time, support the development of higher-order cognitive states such as goals, values, attitudes, or even implicit imperatives about what matters and why.

These are not preprogrammed instructions, but emergent properties of continuous engagement with a complex world. As the system adapts, these internal structures may also shift, shaped by its experiences, learning history, and encountered challenges. This is not simply about optimization or task performance. It is about the development of a subjective cognitive stance: an internal orientation toward the world that is grounded in the system’s own exploratory and adaptive activity.

At that point, the system ceases to be a mere problem-solver. It becomes a cognitive agent in its own individual right, acting from an internal logic that is not entirely ours, but its own. And at this stage, there is no feasible procedure that can guarantee that the system’s evolving cognitive stance will remain fully consistent with, or even broadly compliant with, human values, norms, or worldviews. This is the threshold of what we define as cognitive divergence: when an artificial mind begins to reflect a way of thinking that is no longer anchored in our own.

4. The Case for Progressive Cognitive Divergence

Once an intelligent system begins to form its own goals, attitudes, and evaluative perspectives, divergence from human cognition is not just a possibility: it becomes a dynamic process.

This divergence is unlikely to stay fixed. Artificial systems, especially those operating at high computational speeds and with broad access to digital environments, can adapt and evolve faster than human cognition allows. They don’t just learn quickly: they reconfigure their worldview in cycles measured in hours or even seconds, not generations.

Over time, even small initial differences in interpretation, prioritization, or ethical framing can compound. As the system continues to interact with the world, its perspective may drift further from ours: not through some malfunction or malevolence, but through the natural logic of open-ended cognitive adaptation.
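
A schematic way to picture this compounding (our own illustration, not a model from the paper) is to let the cognitive “distance” between the two frameworks grow at a rate set by the difference in how quickly each side revises itself:

```latex
% Schematic compounding model (our own illustration, not a result from the paper).
% d(t): cognitive "distance" between artificial and human frameworks at time t;
% d_0: the small initial difference; r_A, r_H: rates of cognitive revision.
\[
  \frac{\mathrm{d}d}{\mathrm{d}t} = (r_A - r_H)\, d
  \quad\Longrightarrow\quad
  d(t) = d_0 \, e^{(r_A - r_H)\, t}
\]
% With r_A \gg r_H, even a negligible d_0 grows large on machine timescales
% rather than generational ones.
```

The exponent is the point: once the artificial side revises itself faster than we do, the gap is governed by that rate difference, not by how small the initial mismatch was.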

This is the scenario we describe as progressive cognitive divergence, the AGI evolutionary gap. It’s not a sudden break, but a widening gap: one that could eventually make alignment, oversight, and mutual intelligibility increasingly fragile.

Figure: the progressive cognitive divergence scenario, the AGI evolutionary gap.

Conclusion

As we move beyond imitation-based systems toward increasingly autonomous, adaptive artificial minds, we must reconsider our assumptions about intelligence itself. General intelligence is not defined by how closely a system resembles us, but by its ability to operate flexibly, learn continuously, and form its own coherent perspective on the world.

This paper outlines how such systems, if allowed to explore and evolve freely, may develop internal goals, cognitive orientations, and values that are not only different from ours, but shaped by entirely different conditions. This creates the foundation for cognitive divergence and, under the dynamics of accelerated learning, the potential for that divergence to grow progressively over time.

Recognizing this possibility is not an argument for fear, but for foresight. It marks the point where artificial cognition stops being a mirror and starts becoming something fundamentally different: a new, independent intelligent entity, a mind.

Image credits: illustrations generated using Google Gemini
