Planetary Health in the Age of Artificial Intelligence: A Structural–Ethical Inquiry into Sustainable AI in Healthcare

Bringing together perspectives from medicine and philosophy, I explore where the ideas of sustainable AI and planetary health intersect. Inspired by the structural turn in AI ethics, this reflection asks how healthcare technologies can promote wellbeing without undermining the systems that sustain it.

Explore the Research

Planetary health in the age of artificial intelligence: a structural–ethical inquiry into sustainable AI in healthcare - AI & SOCIETY

The growing planetary poly-crisis—marked by climate change, ecological degradation, and planetary health inequities—necessitates a fundamental transformation in healthcare systems. This paper examines the historical progression of health systems from public health to planetary health, where human well-being is understood within the limits of planetary boundaries. Planetary health provides a framework for sustainable health systems that reduce disease burden, optimize care delivery, and decarbonize healthcare operations. Within this context, artificial intelligence (AI) emerges as a transformative tool for addressing healthcare sustainability. AI applications offer significant potential to optimize resource use, improve preventive care, and enable circular economies in healthcare, thereby facilitating the transition toward planetary health. However, AI poses environmental challenges, including high energy consumption and resource-intensive development processes. To ensure its long-term viability, AI must align with sustainability principles by minimizing its ecological footprint while enhancing ecological integrity and social justice. This paper contributes to the literature in three key ways: (1) it briefly outlines the evolution of health systems toward planetary health, contextualizing the role of sustainability; (2) it presents emerging AI applications that support healthcare sustainability within the planetary health paradigm; and (3) it applies a structural–ethical approach—building on the work of Bolte and van Wynsberghe—to situate AI within the broader socio-technical, institutional, and ecological systems of healthcare, moving beyond second-wave AI ethics’ focus on isolated artifacts and design-level principles. By positioning AI as both a solution and a subject of scrutiny, this study advances the discourse on AI’s potential for planetary health while critically examining its alignment with long-term sustainability goals.

From Medicine to Philosophy: The Beginning of a Question

My journey toward this paper began in the clinic. As a medical doctor, I was fascinated early in my studies by how differently health could be understood—sometimes as the absence of disease, sometimes as balance, sometimes as the flourishing of a whole person in relation to their environment. That diversity of meanings stayed with me, especially as I witnessed how health systems often reduced care to measurable outcomes and technologies.

During my time at the University of Vienna’s Department of Philosophy, I helped organize a conference titled “AI and the Planet in Crisis.” It was at that event that Aimee van Wynsberghe presented the paper she had co-authored with Bolte on the third wave of AI ethics and the structural turn. Their argument—that ethics must move beyond principles like fairness and transparency to examine the socio-technical and ecological structures sustaining AI—resonated deeply with me.

Listening to her, I began to see a bridge between two worlds I knew well: medicine and philosophy. Sustainability, which had become central in environmental and health discourses, could also serve as the link between planetary health and structural AI ethics. I realized that if AI was becoming integral to healthcare, then the ethics of AI could not remain detached from the ecological and systemic conditions of health itself. That realization became the seed of this paper.

Rethinking Health in a Time of Polycrisis

The writing began with an unease about the contradictions of our time. While healthcare systems aim to protect human well-being, they contribute significantly to greenhouse-gas emissions and resource depletion. Meanwhile, AI technologies—celebrated as tools for efficiency and progress—depend on energy-hungry computation and extractive supply chains. I wanted to ask: What does it mean to promote health when our tools of care harm the planet that sustains life?

The idea of planetary health provided the conceptual foundation to explore this question. It reframes human well-being as inseparable from ecological stability. Unlike global health, which focuses on cross-border disease control, planetary health emphasizes operating within planetary boundaries—the climatic and biological limits necessary for life to flourish.

This framework transformed my perspective on AI in medicine. It was no longer enough to evaluate AI as an instrument that improves diagnosis or efficiency. We also needed to examine its environmental cost, its labor implications, and the power structures embedded in its infrastructures.

From Public Health to Planetary Health

Tracing the history of health systems helped clarify this trajectory. Public health arose to manage local epidemics and sanitation; global health addressed inequalities between nations; One Health recognized the interconnectedness of humans, animals, and ecosystems. Planetary health extended these ideas by embedding them within ecological limits and justice concerns.

What distinguishes planetary health, for me, is its insistence that sustainability is not an optional goal but a moral and operational prerequisite for any healthcare system. This insight reframed my understanding of AI: if the health sector is responsible for planetary well-being, then the digital infrastructures supporting it must also meet ecological and ethical standards.

The Structural–Ethical Perspective

The encounter with van Wynsberghe and Bolte’s work gave me a vocabulary to articulate this insight. Their notion of a structural turn in AI ethics holds that ethical evaluation must include the material, political, and ecological systems in which technologies are embedded.

Applying that perspective to healthcare revealed that AI is not a neutral instrument but a structural actor—it shapes clinical priorities, redistributes resources, and influences who receives care and who does not. Moreover, its material existence—servers, data centers, rare-earth minerals—binds it to planetary processes.

This realization turned the paper into a dialogue between planetary health and sustainable AI. I began to see AI as both a potential solution and a source of unsustainability, demanding a more systemic and justice-oriented approach.

Between Promise and Paradox

AI offers extraordinary possibilities for sustainability in healthcare. Predictive models can forecast disease outbreaks and improve prevention. Smart hospital systems can reduce energy consumption. Circular supply chains, aided by AI analytics, can minimize medical waste.

Yet, behind these successes lie paradoxes. Training large models consumes massive amounts of electricity. Data centers rely on carbon-intensive grids. Rare minerals used in AI hardware are mined under exploitative conditions. The result is an ethical contradiction: we use energy-hungry algorithms to solve the consequences of an energy-hungry world.

This contradiction became the central ethical tension of the paper: Can AI genuinely advance planetary health without reproducing the very harms it aims to address?

Justice at the Core of Sustainability

To answer, I adopted what I called a structural–ethical approach to sustainable AI. It moves beyond counting emissions to ask whose environments, bodies, and futures bear the costs of AI innovation.

For example, telemedicine may reduce travel emissions, yet it increases digital energy use and may exclude populations without reliable internet access. Efficiency gains in one context may create new inequities in another. True sustainability, I argue, must therefore be justice-centered—addressing distributive, procedural, and recognition justice alike.

This means ensuring equitable access to low-energy AI tools, fair labor conditions in data supply chains, and participatory governance where communities have a voice in technological decisions that affect their health.
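
To make the telemedicine trade-off concrete, the kind of back-of-envelope accounting involved can be sketched in a few lines of code. The following is my own illustrative sketch, not an analysis from the paper; every parameter and figure is a placeholder standing in for context-specific data.

```python
# Illustrative (hypothetical) comparison of a telemedicine consultation's footprint
# against the in-person visit it replaces. All numbers are placeholders, not data.

def travel_emissions_kg(round_trip_km: float, kg_co2e_per_km: float) -> float:
    """Emissions avoided when a trip to the clinic is replaced by a video call."""
    return round_trip_km * kg_co2e_per_km

def video_call_emissions_kg(duration_h: float, kwh_per_hour: float,
                            grid_kg_co2e_per_kwh: float) -> float:
    """Emissions of devices, network, and data centers during the consultation."""
    return duration_h * kwh_per_hour * grid_kg_co2e_per_kwh

if __name__ == "__main__":
    # Placeholder scenario: a 30 km round trip by car vs. a one-hour video call.
    avoided = travel_emissions_kg(round_trip_km=30.0, kg_co2e_per_km=0.17)
    added = video_call_emissions_kg(duration_h=1.0, kwh_per_hour=0.2,
                                    grid_kg_co2e_per_kwh=0.4)
    print(f"avoided travel: {avoided:.2f} kg CO2e, added digital: {added:.2f} kg CO2e")
```

The point of such a sketch is not the particular numbers, which vary widely with vehicle, grid mix, and infrastructure, but that the net balance is an empirical, context-dependent question, and that a favorable balance says nothing by itself about who is excluded from the service.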

Governance for a Planetary Future

Sustainable AI cannot rely on voluntary ethics or corporate responsibility alone. It requires governance architectures that align innovation with ecological limits. Most current frameworks, including the EU AI Act, still prioritize innovation and safety over environmental impact.

Drawing on planetary health ethics, I proposed that AI governance incorporate lifecycle accountability—from resource extraction to e-waste disposal—and embed sustainability metrics similar to environmental, social, and governance (ESG) criteria. Moreover, governance must become participatory: clinicians, engineers, ethicists, and affected communities should co-shape how AI is deployed in healthcare.

Such reforms would turn sustainability from a compliance target into a normative lens guiding the design and regulation of digital health systems.
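
As a purely hypothetical illustration of what lifecycle accountability might look like when operationalized, one could require every clinical AI procurement to carry a ledger spanning the stages named above. The sketch below is mine, not a framework from the paper or any existing standard, and the stage names and single CO2e metric are deliberate simplifications.

```python
# Hypothetical lifecycle ledger for a clinical AI system, illustrating accountability
# from resource extraction to e-waste. Not an existing standard or reporting scheme.

from dataclasses import dataclass

@dataclass
class LifecycleStage:
    name: str        # e.g. "hardware manufacturing" or "model training"
    co2e_kg: float   # estimated greenhouse-gas emissions attributed to this stage

def total_footprint(stages: list[LifecycleStage]) -> float:
    """Sum emissions over the whole lifecycle rather than the deployed model alone."""
    return sum(stage.co2e_kg for stage in stages)

# Placeholder entries: values are left at zero because real accounting would need
# audited, supplier-specific data for each stage.
ledger = [
    LifecycleStage("raw-material extraction and hardware manufacturing", 0.0),
    LifecycleStage("model training", 0.0),
    LifecycleStage("clinical deployment and inference", 0.0),
    LifecycleStage("end-of-life and e-waste handling", 0.0),
]
print(f"total lifecycle footprint: {total_footprint(ledger):.1f} kg CO2e")
```

A credible scheme would of course need far richer categories (water use, mineral sourcing, labor conditions) and independent verification; the sketch only marks the shift in the unit of evaluation from the deployed model to its whole material lifecycle.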

Writing Between Two Worlds

Writing this paper felt like moving between two intellectual homes. In medicine, I had learned to value precision, outcomes, and practicality. In philosophy, I learned to question assumptions, expose structures, and seek meaning. Bridging these traditions allowed me to treat AI not merely as a technical artifact but as part of the moral and ecological fabric of healthcare.

The process also brought a personal realization. I began to see the practice of medicine itself as an ecological act—one that should heal without harming. Planetary health and structural AI ethics together offer a way to reimagine what it means to care: not only for patients but for the planet that makes healing possible.

Looking Ahead

This research is only a beginning. Future work must develop lifecycle metrics for AI, justice-oriented governance models, and ways to embed planetary health principles directly into AI design.

Ultimately, the question that guided me as a physician remains the same as the one I now ask as a philosopher: How can care endure? To ensure that both humanity and the planet thrive, our technologies must learn to care, too—sustainably, justly, and within the limits of the world that gives them life.

David Haney commented 2 months ago:

The notion of AI as a structural actor, as opposed to a para-human agent, changes the conversation in a very meaningful way. We should worry less about whether AI will out-think humans and worry more about how to balance its benefits with how it (and the corporate structure behind it) affects the planet. And the (non-)regulation of AI, at least here in the US, is certainly not centered on justice.

