Why AI Fails at Lawmaking: A Journey Through Judgment, Justice, and Human Limits

Reading Montesquieu on campus — a reminder that law must be rooted in context, not code.
What happens when we ask a machine to make law?
That question lingered at the periphery of my academic life for years, quiet but persistent. It surfaced in seminars, resurfaced in courtrooms, and returned, uninvited, in moments of stillness. It asked not merely about the role of AI in governance, but about something far deeper: can a system without conscience ever deliver justice?
The paper, Context Matters: Why AI Fails at Lawmaking, was born from that question — and from the slow, necessary realization that law is not an algorithm, and never has been.
The Moral Imagination of Governance
Artificial intelligence, with its dazzling promise of speed and scale, has become an unlikely legislator. From the automation of policy review to predictive judgments, algorithmic systems are being folded into the spine of statecraft. The appeal is understandable: lower costs, faster decisions, cleaner workflows.
But what do we lose when the language of justice is rewritten by systems that do not — and cannot — understand its moral weight?
The law is not a dataset. It is a living record of conflict, negotiation, grief, and repair. It holds within it the sediment of social struggle, the hesitation of ethical doubt, and the memory of dissent. Governance, if it is to retain its legitimacy, must be morally interpretable. Yet AI, built on probabilistic reasoning, offers only reflection — not interpretation. It replicates the past; it does not respond to the present.
Law Is Not a Pattern — It Is a Struggle
One cannot study law for long without confronting its essential paradox: it must be stable enough to protect, and fluid enough to adapt. That paradox is precisely where AI fails.
Take the COMPAS algorithm, a risk-assessment tool used in the U.S. criminal justice system. Its outputs, praised for precision, were found by ProPublica's investigation to disproportionately flag Black defendants as high-risk (Angwin et al., 2016). The issue wasn't technical. It was moral. The algorithm had inherited the biases buried in historical data and returned them with statistical confidence. AI does not err as humans do; it calcifies what history has already failed to correct.
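To make that mechanism concrete, consider a minimal, hypothetical Python sketch. This is not COMPAS, whose model is proprietary; the variables, numbers, and data below are all invented for illustration. It trains an ordinary classifier on outcome labels distorted by over-enforcement against one group, then shows the model returning that distortion as a confident score:

```python
# Hypothetical illustration only: not the COMPAS model (proprietary).
# A classifier trained on historically biased labels reproduces the bias.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

group = rng.integers(0, 2, n)        # protected attribute (invented)
risk_factor = rng.normal(0, 1, n)    # a genuinely predictive signal (invented)

# The recorded label mixes real risk with biased enforcement that
# over-polices group 1, so the dataset itself encodes the injustice.
true_risk = risk_factor > 0.5
recorded_label = true_risk | ((group == 1) & (rng.random(n) < 0.3))

X = np.column_stack([group, risk_factor])
model = LogisticRegression().fit(X, recorded_label)

# Identical risk_factor, different group: the model assigns group 1
# a higher "risk" score, returning the historical bias with confidence.
probe = np.array([[0, 0.0], [1, 0.0]])
print(model.predict_proba(probe)[:, 1])
```

Nothing in the fitting step "errs": the optimizer faithfully learns the pattern the labels contain. The bias enters as history and leaves as statistical confidence, which is exactly the structural failure the COMPAS case exposed.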
This failure is not incidental. It is structural. AI does not discern. It does not interpret. And interpretation is the heart of law.
A Glimpse of Judgment: Garrow’s Law
It was during the late stretch of this inquiry that I returned to Garrow's Law, the BBC courtroom drama based on the life of the 18th-century barrister William Garrow. I say "returned" because it had first been introduced to me by my supervisor, Professor Suma Athreye, whose keen sense of intellectual storytelling saw in it a mirror to the very questions I was pursuing. Watching Garrow advocate not just within the law but against it, challenging custom, cross-examining power, and reclaiming the voice of the accused, I came to understand something essential: law is never neutral performance. It is ethical risk, narrated in public.
AI, for all its linguistic fluency, cannot pause to listen, hesitate in doubt, or act in defiance of pattern. It cannot advocate as Garrow did — with moral imagination, strategic improvisation, and the courage to speak in the face of institutional silence. It can emulate legal syntax. But it cannot perform judgment.
Why Context Is the First Principle
In this paper, I draw from thinkers like Aristotle, whose notion of phronesis (practical wisdom) reminds us that equity lies in context-sensitive judgment, not in universal rule. I revisit Dworkin, who saw law as a coherent narrative of moral principles, not as an inert system of commands. I engage with Montesquieu, whose vision of law as a reflection of the “spirit” of a people — not just a structure of control — has long guided my thinking, even on quiet afternoons reading The Spirit of the Laws on campus. And I return to H.L.A. Hart, whose distinction between primary and secondary rules helps frame the problem: AI can replicate the structure of law, but not the legitimacy of it.
It became clear that AI’s problem is not just that it lacks transparency — it lacks telos. It has no purpose beyond pattern. And that is not enough to govern human lives.
Writing the Paper: A Personal Reckoning
This paper emerged from a long arc of inquiry across law, philosophy, and the critical study of technology. It was shaped decisively by my academic journey, and by the rare intellectual generosity of Professor Suma Athreye, whose rigorous mentorship urged me to pursue not the convenient argument, but the necessary one. What began as a critique of computational governance evolved into an inquiry into the moral architecture of law itself.
In every sentence, I sought to balance accessibility with gravity. I wanted the piece to be readable to the non-specialist — but I also wanted it to demand something: a pause, a question, perhaps even a refusal.
The Challenge Ahead
The question, ultimately, is not whether AI should have a role in governance. It already does. The challenge is to limit its domain — to ensure it informs, but never substitutes. The risk is not that machines will replace lawyers or judges, but that we will let them replace our struggle with judgment.
Governance is not about procedural mimicry. It is about ethical reckoning. It is the patient, painful act of deciding what kind of world we wish to live in — and who we are willing to be responsible to.
No machine can do that. And no machine should be asked to.
References
Angwin, J., Larson, J., Mattu, S., & Kirchner, L. (2016). Machine Bias. ProPublica. https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing
Aristotle (2009). Nicomachean Ethics (Trans. David Ross). Oxford University Press.
Dworkin, R. (1986). Law's Empire. Harvard University Press.
Garrow's Law (2009–2011). BBC television series. Created by Tony Marchant. BBC One.
Hart, H. L. A. (1961). The Concept of Law. Oxford University Press.
Hobbes, T. (1651/1996). Leviathan (Ed. Richard Tuck). Cambridge University Press.
Montesquieu (1989). The Spirit of the Laws (Eds. Anne M. Cohler, Basia C. Miller, & Harold S. Stone). Cambridge University Press.
Ruhil, O. (2025). Context Matters: Why AI Fails at Lawmaking. AI & Society. Springer Nature. https://link.springer.com/article/10.1007/s00146-025-02357-z
Read the full article:
Context Matters: Why AI Fails at Lawmaking
AI & Society (Springer Nature), 2025
Author:
Olivia Ruhil
School of Public Policy,
Indian Institute of Technology, Delhi
olivia.ruhil21@nludelhi.ac.in