How AI is rewiring the human brain: the generational transformation of cognition and knowing

The full paper examines how artificial intelligence (AI) is transforming not only what humans know but how knowledge itself is constructed, remembered and valued. It argues that AI has evolved from a tool of efficiency into a system that reframes cognition, morality and identity across generations.

Explore the Research

SpringerLink

How AI is rewiring the human brain: the generational transformation of cognition and knowing - AI & SOCIETY

This Open Forum paper examines how artificial intelligence (AI) is transforming not only what humans know but how knowledge itself is constructed, remembered and valued. It argues that AI has evolved from a tool of efficiency into an epistemic infrastructure, a system that reframes cognition, morality and identity across generations. Using Rousseau’s concept of conscience, Heidegger’s enframing (Gestell), and Postman’s technopoly as lenses, the paper situates today’s cognitive transformation within a philosophical lineage from natural conscience to predictive cognition. It proposes that the rise of AI-mediated environments represents an epistemological rupture—a transition from embodied, effortful knowledge-making to instantaneous, machine-guided cognition. Tracing five generational cohorts from Baby Boomers to Generation Alpha, it identifies a widening gap between those who were relatively AI-independent and a generation that is developing interface-based cognition, with high dependence on AI learning environments. The implications are neurological as well as epistemological. Insights from neuroscience and cognitive psychology indicate that reliance on generative systems may weaken neural pathways linked to memory, reflection, and metacognitive control. The paper introduces the concept of epistemic sovereignty—the capacity to author knowledge independently—and argues that its erosion signals not diminished intelligence but diminished authorship. As analogue generations disappear, so too may the brains unshaped by algorithmic mediation. Preserving their epistemic virtues will require deliberate design and regulation of learning environments that restore friction, ambiguity and cognitive struggle as essential features of human development. The paper calls for an epistemology of resistance—an intentional re-authoring of the mind in the age of artificial cognition.
As such, this paper develops a discussion framework for cognitive sovereignty in AI-saturated environments and outlines strategic implications for education, work and policy.

AI is rewiring the human brain not only by delivering new information, but by changing the conditions under which thinking, learning, and judgement take place. For centuries, knowledge has required the effort of searching, doubting, remembering, connecting ideas, making mistakes, and revising. That process is slow and sometimes frustrating, but it is also formative. It trains attention, builds memory, and develops the capacity to live with uncertainty long enough to reach understanding. What is different about today’s AI systems is that they do not merely assist in that process. They increasingly replace it with immediate answers. AI is no longer just a tool we use; it is becoming the environment in which we think. That shift feels like progress, and in many situations it is. The problem is what happens when it becomes normal. When a system reliably produces plausible answers on demand, mental labour subtly moves from building knowledge to selecting and verifying. The work of thinking is no longer to construct meaning but to check whether the meaning delivered is acceptable. Over time, the brain adapts to the path of least resistance. What it practises becomes what it prefers. If the default cognitive posture becomes “ask, receive, move on,” then fewer people will regularly exercise the deeper skills that make independent thinking possible. These deeper skills include sustained attention, memory formation, and the patience to sit with ambiguity.

This is not a uniform change. It is generational. Each generation has grown up inside a different “knowledge ecology,” a different set of everyday conditions that shape cognition. Baby Boomers matured in an era of information scarcity, where learning required libraries, long texts, and slow conversations, and where effort was not optional but built into the system. Gen X lived through the transition, developing an analogue foundation first, learning to think without permanent assistance before digital tools arrived to amplify their capabilities. Millennials grew up in a hybrid world, with an offline childhood that steadily gave way to the internet, search engines, and mobile connectivity. Gen Z matured in an environment in which algorithms constantly co-decide what is seen and valued through feeds, recommendations, notifications, and always-on flows. Gen Alpha is now coming of age with generative AI as the default: voice interfaces, personalised content, and “answers on demand” as the standard mode of access to knowledge. Because the brain develops through repetition and habit, the environments we normalise matter. An environment that removes friction, in which not-knowing is instantly resolved, searching becomes unnecessary, and effort is minimised, can quietly reduce the training load placed on core cognitive capacities. The risk is not that younger people are less intelligent. The risk is that they become less practised in the kinds of thinking that remain essential when the questions are complex, when the stakes are moral, when the answers are contested, and when the world is not neatly predictable. Deep comprehension, creativity beyond the average, independent learning, and the ability to detect manipulation all depend on the cognitive “muscles” that friction trains.

At the centre of this concern sits a concept that deserves to be more widely understood and that I have called epistemic sovereignty. In plain language, it is the ability to produce knowledge independently and remain the author of one’s own judgement. It is not simply a matter of intelligence. It is a matter of authorship. Can a person reason without a system pre-structuring the path? Can they build and retain understanding in memory rather than outsourcing it entirely? Can they tolerate ambiguity long enough to arrive at a judgement that is genuinely their own? If AI becomes the default mediator of knowledge, the danger is not that people stop thinking altogether, but that they stop owning the thinking that matters. Neuroscience and psychology do not offer simplistic verdicts, and the evidence base is complex and context-dependent. Yet the broad direction is clear enough to warrant urgency because attention, memory, and self-control develop through use and practice. Digital environments can shift the balance toward fast rewards and weaker sustained attention, while encouraging the outsourcing of memory because “everything is somewhere online.” Generative AI adds a further twist. When a system can produce text, ideas, and solutions, users can drift into the role of consumer and editor rather than maker. That is not inherently harmful, but as a dominant mode of learning it changes what the brain rehearses and therefore what it becomes good at.

This is also why the debate cannot be reduced to technology alone. The question is philosophical as much as practical: what kind of humans are we shaping when knowledge is treated as a commodity delivered instantly, rather than as a capacity developed over time? Thinkers such as Rousseau, Heidegger, and Postman help articulate what is at stake. Rousseau’s concern for conscience reminds us that moral judgement is cultivated through reflection and practice. If systems deliver answers and ready-made “judgements” too early, people may practise their own less. Heidegger warned that technology can “enframe” the world as something primarily available, optimisable, and usable. This turns everything into a resource, including attention and knowledge, and ultimately people themselves. Postman’s critique of technopoly described societies that begin treating technology as their highest authority, confusing truth with what systems can generate, rank, and scale. In everyday terms, the fear is simple: if AI becomes the default authority on what counts as knowledge, then human wisdom – context, doubt, moral nuance – may be sidelined as inefficient.

One consequence is a future shaped by an epistemic divide between two kinds of thinkers. On one side are those who can still “struggle to know,” who can build understanding, sustain attention, remember, reason, and form independent judgement. On the other side are those who mainly think through systems. They are fast and capable in many tasks, but increasingly dependent on external cognition. That divide would not merely be academic. It would shape who can handle complexity in workplaces, who can lead responsibly, who can resist manipulation, and who can create genuinely new ideas rather than recombining what a system predicts is most likely.

None of this leads to the conclusion that AI should simply be done away with. The task is not to remove AI, but to use it wisely. The challenge is design. If AI removes friction from thinking, then education, families, organisations, and policymakers must deliberately reintroduce some of that friction in the places where it matters most. That means building rhythms in learning where people think first and consult tools later. It means valuing process as much as output, rewarding reasoning, revision, and reflection rather than only speed. It means teaching the difference between knowing and retrieving: knowing is being able to explain, apply, connect, and critique, whilst retrieving is being able to look something up or generate it. It means treating AI as a sparring partner rather than a substitute. We need to keep asking what assumptions sit behind an answer, what counterarguments exist, what evidence is missing, and how one would explain the idea in one’s own words.

Children and adolescents, in particular, deserve careful protection – not only through restrictions, but through the design of habits that build cognitive strength. Screen-free blocks. Reading and writing without assistance. Play and learning that are not immediately solution-driven. Practice of concentration through music, sport, long texts, and sustained projects. These are not nostalgic preferences. They are forms of cognitive training, and they become more important, not less, in a world where answers are cheap and effortless. The most important line to hold onto is this: if AI removes friction from thinking, we must choose where to put it back. Otherwise, we risk losing not intelligence, but authorship over our own minds.


Lei Liu · 12 days ago

This is an excellent and illuminating idea concerning the function of AI. I would like to draw a comparison between Professor Westerbeek's proposal and a situation in which an individual is controlled by her parents or some other authority: If parents constantly dictate their child's choices throughout her upbringing, her individual autonomy may be severely compromised. However, from the parents' perspective, the child often proves difficult to control—she may choose certain activities or hobbies that her parents disapprove of. Yet parental intervention does not necessarily prevent the child from becoming a responsible person. As some researchers have found, different individuals develop into distinct persons even when raised in identical environments. 

Thank you so much!

Reflection on this topic was much needed

AI & SOCIETY

    This journal focuses on societal issues including the design, use, management, and policy of information, communications and new media technologies, with a particular emphasis on cultural, social, cognitive, economic, ethical, and philosophical implications.