Planetary Health in the Age of Artificial Intelligence: A Structural–Ethical Inquiry into Sustainable AI in Healthcare

Bringing together perspectives from medicine and philosophy, I explore where the ideas of sustainable AI and planetary health meet. Inspired by the structural turn in AI ethics, this reflection asks how healthcare technologies can promote wellbeing without undermining the systems that sustain it.

Explore the Research


Planetary health in the age of artificial intelligence: a structural–ethical inquiry into sustainable AI in healthcare - AI & SOCIETY

The growing planetary poly-crisis—marked by climate change, ecological degradation, and planetary health inequities—necessitates a fundamental transformation in healthcare systems. This paper examines the historical progression of health systems from public health to planetary health, where human well-being is understood within the limits of planetary boundaries. Planetary health provides a framework for sustainable health systems that reduce disease burden, optimize care delivery, and decarbonize healthcare operations. Within this context, artificial intelligence (AI) emerges as a transformative tool for addressing healthcare sustainability. AI applications offer significant potential to optimize resource use, improve preventive care, and enable circular economies in healthcare, thereby facilitating the transition toward planetary health. However, AI poses environmental challenges, including high energy consumption and resource-intensive development processes. To ensure its long-term viability, AI must align with sustainability principles by minimizing its ecological footprint while enhancing ecological integrity and social justice. This paper contributes to the literature in three key ways: (1) it briefly outlines the evolution of health systems toward planetary health, contextualizing the role of sustainability; (2) it presents emerging AI applications that support healthcare sustainability within the planetary health paradigm; and (3) it applies a structural–ethical approach—building on the work of Bolte and van Wynsberghe—to situate AI within the broader socio-technical, institutional, and ecological systems of healthcare, moving beyond second-wave AI ethics’ focus on isolated artifacts and design-level principles. By positioning AI as both a solution and a subject of scrutiny, this study advances the discourse on AI’s potential for planetary health while critically examining its alignment with long-term sustainability goals.

From Medicine to Philosophy: The Beginning of a Question

My journey toward this paper began in the clinic. As a medical doctor, I was fascinated early in my studies by how differently health could be understood—sometimes as the absence of disease, sometimes as balance, sometimes as the flourishing of a whole person in relation to their environment. That diversity of meanings stayed with me, especially as I witnessed how health systems often reduced care to measurable outcomes and technologies.

During my time at the University of Vienna’s Department of Philosophy, I helped organize a conference titled “AI and the Planet in Crisis.” It was at that event that Aimee van Wynsberghe presented her and Bolte’s paper on the third wave of AI ethics and the structural turn. Their argument—that ethics must move beyond principles like fairness and transparency to examine the socio-technical and ecological structures sustaining AI—deeply resonated with me.

Listening to her, I began to see a bridge between two worlds I knew well: medicine and philosophy. Sustainability, which had become central in environmental and health discourses, could also serve as the link between planetary health and structural AI ethics. I realized that if AI was becoming integral to healthcare, then the ethics of AI could not remain detached from the ecological and systemic conditions of health itself. That realization became the seed of this paper.

Rethinking Health in a Time of Polycrisis

The writing began with an unease about the contradictions of our time. While healthcare systems aim to protect human well-being, they contribute significantly to greenhouse-gas emissions and resource depletion. Meanwhile, AI technologies—celebrated as tools for efficiency and progress—depend on energy-hungry computation and extractive supply chains. I wanted to ask: What does it mean to promote health when our tools of care harm the planet that sustains life?

The idea of planetary health provided the conceptual foundation to explore this question. It reframes human well-being as inseparable from ecological stability. Unlike global health, which focuses on cross-border disease control, planetary health emphasizes operating within planetary boundaries—the climatic and biological limits necessary for life to flourish.

This framework transformed my perspective on AI in medicine. It was no longer enough to evaluate AI as an instrument that improves diagnosis or efficiency. We also needed to examine its environmental cost, its labor implications, and the power structures embedded in its infrastructures.

From Public Health to Planetary Health

Tracing the history of health systems helped clarify this trajectory. Public health arose to manage local epidemics and sanitation; global health addressed inequalities between nations; One Health recognized the interconnectedness of humans, animals, and ecosystems. Planetary health extended these ideas by embedding them within ecological limits and justice concerns.

What distinguishes planetary health, for me, is its insistence that sustainability is not an optional goal but a moral and operational prerequisite for any healthcare system. This insight reframed my understanding of AI: if the health sector is responsible for planetary well-being, then the digital infrastructures supporting it must also meet ecological and ethical standards.

The Structural–Ethical Perspective

The encounter with van Wynsberghe and Bolte’s work gave me a vocabulary to articulate this insight. Their notion of a structural turn in AI ethics argues that ethical evaluation must include the material, political, and ecological systems in which technologies are embedded.

Applying that perspective to healthcare revealed that AI is not a neutral instrument but a structural actor—it shapes clinical priorities, redistributes resources, and influences who receives care and who does not. Moreover, its material existence—servers, data centers, rare-earth minerals—binds it to planetary processes.

This realization turned the paper into a dialogue between planetary health and sustainable AI. I began to see AI as both a potential solution and a source of unsustainability, demanding a more systemic and justice-oriented approach.

Between Promise and Paradox

AI offers extraordinary possibilities for sustainability in healthcare. Predictive models can forecast disease outbreaks and improve prevention. Smart hospital systems can reduce energy consumption. Circular supply chains, aided by AI analytics, can minimize medical waste.

Yet behind these successes lie paradoxes. Training large models consumes enormous amounts of electricity. Data centers rely on carbon-intensive grids. Rare minerals used in AI hardware are mined under exploitative conditions. The result is an ethical contradiction: we use energy-hungry algorithms to solve the consequences of an energy-hungry world.
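To get a feel for the scale of that footprint, a rough back-of-envelope sketch helps; the figures below are illustrative assumptions of mine, not results from the paper. Emissions from a training run are roughly the energy drawn by the hardware, scaled by data-center overhead (PUE) and by the carbon intensity of the local grid.

```python
# Back-of-envelope estimate of training emissions.
# All parameter values are illustrative assumptions, not measurements.
def training_emissions_kg_co2e(
    gpu_count: int,
    avg_power_kw_per_gpu: float,             # average draw per accelerator, in kW
    training_hours: float,
    pue: float = 1.5,                        # data-center power usage effectiveness
    grid_intensity_kg_per_kwh: float = 0.4,  # kg CO2e per kWh of the local grid
) -> float:
    energy_kwh = gpu_count * avg_power_kw_per_gpu * training_hours * pue
    return energy_kwh * grid_intensity_kg_per_kwh

# Example: 512 GPUs averaging 0.3 kW each, trained for three weeks
emissions_kg = training_emissions_kg_co2e(512, 0.3, training_hours=21 * 24)
print(f"{emissions_kg / 1000:.1f} tonnes CO2e")  # roughly 46 tonnes under these assumptions
```

Even with these modest assumptions, a single multi-week training run lands in the tens of tonnes of CO2e, which gives a sense of the order of magnitude behind the paradox.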

This contradiction became the central ethical tension of the paper: Can AI genuinely advance planetary health without reproducing the very harms it aims to address?

Justice at the Core of Sustainability

To answer, I adopted what I called a structural–ethical approach to sustainable AI. It moves beyond counting emissions to ask whose environments, bodies, and futures bear the costs of AI innovation.

For example, telemedicine may reduce travel emissions, yet it increases digital energy use and may exclude populations without reliable internet access. Efficiency gains in one context may create new inequities in another. True sustainability, I argue, must therefore be justice-centered—addressing distributive, procedural, and recognition justice alike.
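To illustrate how such a trade-off might be weighed per encounter (the numbers are my own assumptions, not findings from the paper), one can compare the travel emissions a video consultation avoids with the digital energy it adds:

```python
# Illustrative comparison of a telemedicine visit and an in-person visit.
# All parameter values are assumptions made for the sake of the example.
def avoided_travel_kg_co2e(round_trip_km: float, car_kg_per_km: float = 0.17) -> float:
    """Emissions avoided by not driving to the clinic."""
    return round_trip_km * car_kg_per_km

def video_call_kg_co2e(minutes: float,
                       device_and_network_kwh_per_hour: float = 0.15,
                       grid_intensity_kg_per_kwh: float = 0.4) -> float:
    """Emissions added by devices, network, and servers during the call."""
    return (minutes / 60) * device_and_network_kwh_per_hour * grid_intensity_kg_per_kwh

saved = avoided_travel_kg_co2e(round_trip_km=30)   # about 5.1 kg CO2e avoided
added = video_call_kg_co2e(minutes=20)             # about 0.02 kg CO2e added
print(f"net saving per visit: {saved - added:.2f} kg CO2e")
```

The per-visit arithmetic tends to favor telemedicine, but it says nothing about who can access the service in the first place; the justice dimension cannot be reduced to a single subtraction.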

This means ensuring equitable access to low-energy AI tools, fair labor conditions in data supply chains, and participatory governance where communities have a voice in technological decisions that affect their health.

Governance for a Planetary Future

Sustainable AI cannot rely on voluntary ethics or corporate responsibility alone. It requires governance architectures that align innovation with ecological limits. Most current frameworks, including the EU AI Act, still prioritize innovation and safety over environmental impact.

Drawing on planetary health ethics, I proposed that AI governance incorporate lifecycle accountability—from resource extraction to e-waste disposal—and embed sustainability metrics similar to environmental, social, and governance (ESG) criteria. Moreover, governance must become participatory: clinicians, engineers, ethicists, and affected communities should co-shape how AI is deployed in healthcare.
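As a minimal sketch of what lifecycle accountability could mean in reporting terms, one might aggregate emissions across the stages named above; the stage names and figures here are purely illustrative assumptions, not data from the paper.

```python
# Minimal sketch of a lifecycle emissions ledger for a clinical AI system.
# Stage names and figures are illustrative assumptions only.
from dataclasses import dataclass

@dataclass
class LifecycleStage:
    name: str
    kg_co2e: float

def total_footprint(stages: list[LifecycleStage]) -> float:
    return sum(stage.kg_co2e for stage in stages)

deployment = [
    LifecycleStage("hardware manufacturing and minerals", 12_000),
    LifecycleStage("model training", 46_000),
    LifecycleStage("clinical inference, one year", 8_000),
    LifecycleStage("e-waste disposal", 1_500),
]
print(f"{total_footprint(deployment):,.0f} kg CO2e")  # 67,500 kg CO2e
```

Turning such a ledger into policy would require standardized measurement and verification, which is what the call for ESG-style sustainability metrics points toward.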

Such reforms would turn sustainability from a compliance target into a normative lens guiding the design and regulation of digital health systems.

Writing Between Two Worlds

Writing this paper felt like moving between two intellectual homes. In medicine, I had learned to value precision, outcomes, and practicality. In philosophy, I learned to question assumptions, expose structures, and seek meaning. Bridging these traditions allowed me to treat AI not merely as a technical artifact but as part of the moral and ecological fabric of healthcare.

The process also brought a personal realization. I began to see the practice of medicine itself as an ecological act—one that should heal without harming. Planetary health and structural AI ethics together offer a way to reimagine what it means to care: not only for patients but for the planet that makes healing possible.

Looking Ahead

This research is only a beginning. Future work must develop lifecycle metrics for AI, justice-oriented governance models, and ways to embed planetary health principles directly into AI design.

Ultimately, the question that guided me as a physician remains the same as the one I now ask as a philosopher: How can care endure? To ensure that both humanity and the planet thrive, our technologies must learn to care, too—sustainably, justly, and within the limits of the world that gives them life.


David Haney, about 2 months ago:

The notion of AI as a structural actor, as opposed to a para-human agent, changes the conversation in a very meaningful way. We should worry less about whether AI will out-think humans and worry more about how to balance its benefits with how it (and the corporate structure behind it) affects the planet, and the (non-)regulation of AI, at least here in the US, is certainly not centered on justice.

Follow the Topic

Sustainability · Ethics of Technology · Bioethics · Public Health · Artificial Intelligence · Philosophy of Artificial Intelligence
  • AI & SOCIETY

    This journal focuses on societal issues including the design, use, management, and policy of information, communications and new media technologies, with a particular emphasis on cultural, social, cognitive, economic, ethical, and philosophical implications.
