Why AI Fails at Lawmaking: A Journey Through Judgment, Justice, and Human Limits

Can a machine make law without understanding justice? In this reflection on my recent article, I explore why AI fails at lawmaking—not due to technical limits, but because it cannot judge, imagine, or be accountable. Law demands context. And context demands humanity.

[Image: A hand holding Montesquieu’s "The Spirit of the Laws" in front of blooming coral-pink flowers on a green campus lawn.]

Reading Montesquieu on campus — a reminder that law must be rooted in context, not code.

What happens when we ask a machine to make law?

That question lingered at the periphery of my academic life for years, quiet but persistent. It surfaced in seminars, resurfaced in courtrooms, and returned, uninvited, in moments of stillness. It asked not merely about the role of AI in governance but about something far deeper: can a system without conscience ever deliver justice?

The paper, Context Matters: Why AI Fails at Lawmaking, was born from that question — and from the slow, necessary realization that law is not an algorithm, and never has been.

The Moral Imagination of Governance

Artificial intelligence, with its dazzling promise of speed and scale, has become an unlikely legislator. From automated policy review to predictive judgments, algorithmic systems are being folded into the spine of statecraft. The appeal is understandable: lower costs, faster decisions, cleaner workflows.

But what do we lose when the language of justice is rewritten by systems that do not — and cannot — understand its moral weight?

The law is not a dataset. It is a living record of conflict, negotiation, grief, and repair. It holds within it the sediment of social struggle, the hesitation of ethical doubt, and the memory of dissent. Governance, if it is to retain its legitimacy, must be morally interpretable. Yet AI, built on probabilistic reasoning, offers only reflection — not interpretation. It replicates the past; it does not respond to the present.

Law Is Not a Pattern — It Is a Struggle

One cannot study law for long without confronting its essential paradox: it must be stable enough to protect, and fluid enough to adapt. That paradox is precisely where AI fails.

Take the COMPAS algorithm, a risk-assessment tool used in the U.S. criminal justice system. Its outputs, praised for precision, were shown by ProPublica's investigation (Angwin et al., 2016) to falsely flag Black defendants as high-risk at nearly twice the rate of white defendants. The issue wasn't technical. It was moral. The algorithm had inherited the biases buried in historical data and returned them with statistical confidence. AI does not err as humans do; it calcifies what history has already failed to correct.
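Because this part of the argument is mechanistic as well as moral, a deliberately toy sketch may help. The Python below is a minimal illustration, assuming synthetic data and a generic classifier; it has no connection to the actual COMPAS system. Two groups are given identical underlying risk, but the historical labels flag one group more often, and a model trained faithfully on that record reproduces the disparity.

```python
# A toy sketch, not the real COMPAS model: synthetic data in which one group
# was historically flagged more often at the same underlying risk.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)          # hypothetical protected attribute (0 or 1)
risk = rng.normal(0.0, 1.0, n)         # identical "true risk" in both groups

# Historical labels carry the skew: group 1 was flagged more at equal risk.
p_flag = 1.0 / (1.0 + np.exp(-(risk + 1.0 * group)))
flagged_historically = rng.random(n) < p_flag

# Train on the skewed record; the model learns the skew as if it were signal.
X = np.column_stack([risk, group])
model = LogisticRegression().fit(X, flagged_historically)

# The learned rule reproduces the disparity with statistical confidence.
pred = model.predict(X)
for g in (0, 1):
    print(f"group {g}: predicted high-risk rate = {pred[group == g].mean():.2f}")
```

The point of the sketch is structural rather than statistical: nothing in the training step is "broken", yet the output hardens a past injustice into a present rule.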

This failure is not incidental. It is structural. AI does not discern. It does not interpret. And interpretation is the heart of law.

A Glimpse of Judgment: Garrow’s Law

It was during the late stretch of this inquiry that I returned to Garrow’s Law, the BBC courtroom drama based on the life of the 18th-century barrister William Garrow. I say “returned” because it had first been introduced to me by my supervisor, Professor Suma Athreye, whose keen sense of intellectual storytelling saw in it a mirror to the very questions I was pursuing. Watching Garrow advocate not just within the law but against it, challenging custom, cross-examining power, and reclaiming the voice of the accused, I came to understand something essential: law is never neutral performance. It is ethical risk, narrated in public.

AI, for all its linguistic fluency, cannot pause to listen, hesitate in doubt, or act in defiance of pattern. It cannot advocate as Garrow did — with moral imagination, strategic improvisation, and the courage to speak in the face of institutional silence. It can emulate legal syntax. But it cannot perform judgment.

Why Context Is the First Principle

In this paper, I draw from thinkers like Aristotle, whose notion of phronesis (practical wisdom) reminds us that equity lies in context-sensitive judgment, not in universal rule. I revisit Dworkin, who saw law as a coherent narrative of moral principles, not as an inert system of commands. I engage with Montesquieu, whose vision of law as a reflection of the “spirit” of a people, not just a structure of control, has long guided my thinking, even on quiet afternoons reading The Spirit of the Laws on campus. And I return to H. L. A. Hart, whose distinction between primary and secondary rules helps frame the problem: AI can replicate the structure of law, but not its legitimacy.

It became clear that AI’s problem is not just that it lacks transparency — it lacks telos. It has no purpose beyond pattern. And that is not enough to govern human lives.

Writing the Paper: A Personal Reckoning

This paper emerged from a long arc of inquiry across law, philosophy, and the critical study of technology. It was shaped decisively by my academic journey, and by the rare intellectual generosity of Professor Suma Athreye, whose rigorous mentorship urged me to pursue not the convenient argument but the necessary one. What began as a critique of computational governance evolved into an inquiry into the moral architecture of law itself.

In every sentence, I sought to balance accessibility with gravity. I wanted the piece to be readable to the non-specialist — but I also wanted it to demand something: a pause, a question, perhaps even a refusal.

The Challenge Ahead

The question, ultimately, is not whether AI should have a role in governance. It already does. The challenge is to limit its domain — to ensure it informs, but never substitutes. The risk is not that machines will replace lawyers or judges, but that we will let them replace our struggle with judgment.

Governance is not about procedural mimicry. It is about ethical reckoning. It is the patient, painful act of deciding what kind of world we wish to live in — and who we are willing to be responsible to.

No machine can do that. And no machine should be asked to.

References

Angwin, J., Larson, J., Mattu, S., & Kirchner, L. (2016). Machine Bias. ProPublica. https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing

Aristotle (2009). Nicomachean Ethics (D. Ross, Trans.). Oxford University Press.

Dworkin, R. (1986). Law’s Empire. Harvard University Press.

Garrow’s Law (2009–2011). Television series, created by Tony Marchant. BBC One.

Hart, H. L. A. (1961). The Concept of Law. Oxford University Press.

Hobbes, T. (1651/1996). Leviathan (R. Tuck, Ed.). Cambridge University Press.

Montesquieu (1989). The Spirit of the Laws (A. M. Cohler, B. C. Miller, & H. S. Stone, Eds.). Cambridge University Press.

Ruhil, O. (2025). Context Matters: Why AI Fails at Lawmaking. AI & Society. https://link.springer.com/article/10.1007/s00146-025-02357-z

Read the full article:

Context Matters: Why AI Fails at Lawmaking

AI & Society (Springer Nature), 2025

Author:

Olivia Ruhil

School of Public Policy,

Indian Institute of Technology Delhi

olivia.ruhil21@nludelhi.ac.in

 
