Can Self-Awareness Exist Without a Self? What Indian Philosophy Asks of Consciousness Science and AI
Published in Neuroscience, Computational Sciences, and Philosophy & Religion
Last Tuesday, at the Collège de France, Isabelle Ratié delivered the sixth lecture in her 2025-2026 series on the Indian quarrel over the self. The title, “Conscience de soi et conscience du soi,” drew a line between two things that consciousness research often treats as a single package: being aware of oneself, and being aware of a self. The lecture reconstructed, with considerable precision, a set of ancient Indian arguments showing that these two can come apart. The implications for current debates in consciousness science and AI are, I believe, substantial and underexplored.
I work at the intersection of psychology, neuroscience, and AI ethics. My research focuses on how emotions are interpreted, suppressed, and computationally modeled, with particular attention to what happens when algorithmic systems claim authority over affective states. Attending Ratié’s lecture, I found that several problems I have been framing in the language of affective science and machine ethics had already been dissected, with remarkable formal clarity, in traditions dating back over two millennia.

Self-luminosity and the higher-order problem
One of the central disputes in consciousness studies concerns whether a mental state needs to be represented by a higher-order state in order to be conscious. Rosenthal’s higher-order thought theory holds that a first-order perception becomes conscious only when a second-order thought represents it. Critics have long pointed to the regress problem: if C1 requires C2 to be conscious, and C2 requires C3, the chain never terminates.
Buddhist epistemologists identified this problem centuries ago. Dignāga and, more explicitly, Dharmakīrti argued that every cognition illuminates both its object and itself simultaneously. Kellner (2011) provides a detailed comparison of their regress arguments. The solution they proposed, termed svasaṃvedana (self-awareness or self-luminosity), holds that awareness of a mental state is not a separate higher-order event but a structural feature of the state itself. On this account, consciousness does not require an observer hovering above it. It is, in its very occurrence, transparent to itself.
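The contrast between the two models can be made concrete in code. The sketch below is purely illustrative: the function and class names (`conscious_via_higher_order`, `SelfLuminousState`) are my own inventions, not a published formalism. Modeled literally, the higher-order requirement never bottoms out, while the self-luminous model needs no further state.

```python
from dataclasses import dataclass


def conscious_via_higher_order(state, depth=0, max_depth=5):
    """Higher-order model: a state is conscious only if a further
    state represents it. Taken literally, the requirement recurses:
    each monitoring thought needs its own monitor."""
    if depth >= max_depth:
        raise RecursionError("regress: every monitor needs a monitor")
    monitor = ("thought about", state)
    return conscious_via_higher_order(monitor, depth + 1, max_depth)


@dataclass
class SelfLuminousState:
    """svasamvedana-style model: awareness of the state is a structural
    feature of the state itself, not a second cognitive event."""
    content: str

    @property
    def aware_of_itself(self) -> bool:
        return True  # no further state required, so no regress starts
```

The point of the toy is only structural: the regress is a property of the higher-order architecture itself, and Dharmakīrti's move dissolves it by changing the architecture rather than capping the recursion.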
What makes the Indian debate instructive is not simply this solution but the divergence that follows from it. Dharmakīrti and the Buddhist tradition conclude that what is self-luminous is the momentary cognitive event, not an enduring subject behind it. Self-awareness exists; a self does not. The Mīmāṃsā school of Prabhākara accepts virtually the same phenomenological structure (the simultaneous manifestation of object, cognition, and cognizing subject) but reads the subject as already given within each moment of awareness. Identical experiential data, opposite ontological conclusions.
This split maps onto a tension within contemporary consciousness studies that has not been sufficiently articulated. The minimal self of Zahavi and Gallagher (the pre-reflective sense of ownership accompanying experience) resembles the Buddhist position: self-awareness without commitment to a substantial self. Narrative and robust conceptions of selfhood push toward the Mīmāṃsā reading: the subject is already there in the structure. The Indian debate suggests that the choice between these positions cannot be settled by phenomenological description alone; it requires explicit ontological argument. Without such argument, researchers risk smuggling metaphysical commitments into what appears to be neutral description.
The inferential gap in affective computing
A second line of argument from Ratié’s lecture bears directly on AI and emotion. The Nyāya-Vaiśeṣika tradition, particularly the thinker Praśastapāda, concedes that the self is never directly perceived. Instead, it is inferred through a two-step process: from perceptual contents, one infers the sense organs; from the unified activity of the sense organs, one infers the subject that makes such unity possible.
Current affective computing systems perform an operation structurally analogous to the first inference. They read behavioral and physiological signs (facial muscle movements, vocal prosody, text sentiment, galvanic skin response) and infer an emotional state. Barrett (2017) and others have critiqued the ontological assumptions underlying this inference, questioning whether discrete emotion categories correspond to consistent biological signatures. But there is a deeper structural issue that the Indian debate makes visible.
The algorithm stops at the first inference. It does not proceed to infer the interpreting subject, the locus where emotional states cohere into a first-person life. And what it does not infer, it operationally treats as nonexistent. Porębski and Figura (2025) have argued in this journal’s sibling publication that AI cannot possess consciousness. Their argument is ontological. The point I am raising is different: it concerns how AI systems make implicit ontological judgments about the humans they assess. When the gap between sign and subject is treated as noise rather than as a structural feature of first-person experience, a technical limitation converts into an ontological verdict about the people being measured.
In my own work, I have described this conversion as Algorithmic Affective Blunting: the systematic flattening of emotional meaning that occurs when computational systems replace first-person interpretation with third-person inference. The Indian debate provides a formal articulation of why this flattening occurs. Nyāya's concept of negative signs, what Ratié termed signes apophatiques (apophatic signs), is particularly clarifying. The inferential markers (pleasure, pain, breath, cognitive activity) each point toward the self without ever fully capturing it. They are structurally insufficient. An algorithm that reads these markers and stops has mistaken the indicator for the indicated.
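The stopping point of the first inference can be shown in a few lines. This is a deliberately crude sketch, not a real classifier: every threshold, marker name, and label below is invented for illustration.

```python
def infer_emotion(markers: dict) -> str:
    """First inference only: from observable signs (smile intensity,
    vocal pitch, skin conductance, each a made-up 0-1 score) to a
    discrete emotion label."""
    if markers.get("smile", 0.0) > 0.6:
        return "joy"
    if markers.get("skin_conductance", 0.0) > 0.7 and markers.get("pitch", 0.0) > 0.5:
        return "fear"
    return "neutral"


# The pipeline's output type is a label, nothing more. The second
# inference -- from the unified signs to the interpreting subject in
# whom they cohere -- has no counterpart anywhere in the code, and what
# is not modeled is operationally treated as absent.
label = infer_emotion({"smile": 0.8})
```

The structural point is visible in the return type: nothing in the pipeline represents, or could represent, the locus where the inferred states cohere into a first-person life.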
Self-reference and its formal limits
The most formally provocative moment in the lecture concerned metacognition. The Nyāya commentator Vātsyāyana proposed a sequential model: a first cognition grasps an object, then a second cognition takes the first as its object. Dharmakīrti’s response was that this model generates an infinite regress, and that the regress is resolved only if every cognition is self-intimating from the start. Lo (2018) has further analyzed the structure of this regress argument.
A parallel position, attributed to the Mīmāṃsā thinker Kumārila, holds that cognition cannot know itself. Ratié’s reconstruction made the structural resemblance to Gödelian incompleteness difficult to ignore: a system that operates on its own states may be constitutively unable to fully capture itself from within. Dharmakīrti’s counter, that self-manifestation is not self-description, offers what may be the most radical alternative to this limitation. If awareness of a mental state is not a representational act but a non-representational feature of the state’s occurrence, then the Gödelian constraint, which applies to representational systems, may not apply.
For AI research, the distinction matters. When we ask whether a machine is self-aware, we typically mean: can the system represent its own internal states? This is representational self-reference, and it is subject to familiar formal constraints. The Indian alternative asks whether there is a mode of self-relation that is not representation at all. If such a mode exists in biological consciousness but is absent from computational architectures, then the absence is not a gap to be engineered away. It is a categorical difference, and confusing the two has consequences for how we assess both machine cognition and the authority of machines over human emotional life.
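What machine "self-awareness" amounts to in practice can be sketched directly: a system that builds a description of its own internal state. The class and its fields below are hypothetical, chosen only to make the representational structure explicit.

```python
class IntrospectiveAgent:
    """Representational self-reference: the only mode of self-relation
    current architectures implement. All names here are illustrative."""

    def __init__(self):
        self.state = {"goal": "classify", "confidence": 0.9}

    def self_report(self) -> dict:
        # The report is a representation OF the state, and a distinct
        # object from it. Any report can itself become the target of a
        # further report, which is exactly the regress structure; the
        # non-representational self-manifestation of the Buddhist
        # account has no analogue in this architecture.
        return {"description_of": dict(self.state)}
```

Because `self_report` returns a second object rather than being a feature of the state's own occurrence, it stays on the representational side of the distinction, and the formal constraints on representational systems apply to it in full.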
Toward better questions
The Indian philosophical tradition does not hand us answers to the problems of consciousness science or AI ethics. What it offers is a higher-resolution set of questions. "Is the system self-aware?" is too coarse. "Does self-awareness entail a self?" is more productive. "Can self-manifestation be reduced to self-representation?" is more productive still. When we skip these distinctions, we risk building systems that make ontological judgments about persons under the guise of technical measurement, and we risk theoretical frameworks in consciousness science that conflate phenomenological description with metaphysical commitment.
Ratié’s lecture series continues at the Collège de France through the spring. The full recordings are publicly available. For those working on consciousness, affect, or machine intelligence, the detour through classical Indian philosophy is not a detour at all. It is a return to questions that were asked with a precision we have not yet matched.
References
Kellner, B. (2011). Self-awareness (svasaṃvedana) and Infinite Regresses: A Comparison of Arguments by Dignāga and Dharmakīrti. Journal of Indian Philosophy, 39, 411-426.
Lo, K.C. (2018). On the Argument of Infinite Regress in Proving Self-awareness. Journal of Indian Philosophy, 46, 553-576.
Barrett, L. F. (2017). The theory of constructed emotion: an active inference account of interoception and categorization. Social Cognitive and Affective Neuroscience, 12(1), 1-23.
Porębski, A., & Figura, J. (2025). There is no such thing as conscious artificial intelligence. Humanities and Social Sciences Communications, 12(1).
Ratié, I. (2025-2026). Conscience et identité: la querelle indienne du soi. Cours, Collège de France.