Escaping the “Interpreter’s Trap”: When Explainable AI Fails to Protect Justice
Published in Computational Sciences, Law, Politics & International Studies, and Philosophy & Religion

My investigation led me to the concept of "The Interpreter’s Trap." In studying risk assessment tools like COMPAS, I realized that the problem wasn't merely technical opacity, but a structural "institutional double bind." Decision-makers are often caught between "contaminated objectivity"—data carrying hidden historical biases and proxy variables—and "eroded subjectivity," where their own professional discretion is devalued against the machine’s "evidence-based" output.
Crucially, my research suggests that this trap is sustained by a mechanism of "liability shielding." I found that decision-makers face an asymmetry of risk: aligning with a high-risk score creates a safe harbor of "scientific" justification, whereas deviating from the algorithm imposes a heavy personal burden of proof. In this high-stakes environment, post-hoc explainable AI (XAI) often fails to empower human oversight. Instead, it creates "convenient narratives"—simplified rationales like "prior arrests"—that anchor the judge’s decision to the algorithm’s output. This effectively converts uncertainty into institutionally defensible justifications, facilitating what I term "accountability washing."
As this was my first peer-reviewed article, the research process itself was a steep learning curve. While the initial drafts were unpolished, rigorous feedback from reviewers helped me refine my critique from simple technological skepticism into a robust socio-legal framework. This process taught me that the "trap" is not just about the algorithm, but about the specific legal and organizational structures that incentivize deference to machines.
Ultimately, I argue that we should aspire to a higher standard of system design. We must pierce the "Complexity Illusion"—the false assumption that opaque, complex models are inherently more accurate. For high-stakes normative decisions, "explaining" a black box is often insufficient. I advocate for a paradigm shift toward inherently interpretable "glass-box" models—systems designed to be transparent from the ground up, where the logic is visible and contestable by default.
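To make the contrast concrete, here is a minimal sketch (not drawn from the paper) of what "visible and contestable by default" can look like in practice: a shallow decision tree trained on synthetic, purely illustrative data, whose complete decision logic can be printed as plain rules rather than reconstructed by a post-hoc explainer. The feature names below are hypothetical placeholders, not real COMPAS inputs.

# Minimal, hypothetical sketch of a "glass-box" model whose full logic is printable.
# Feature names and data are synthetic placeholders, not real risk-assessment inputs.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
feature_names = ["age", "prior_arrests", "employment_years"]  # illustrative only
X = rng.integers(0, 10, size=(200, 3))
y = (X[:, 1] > 4).astype(int)  # synthetic label standing in for the outcome of interest

# A shallow tree is inherently interpretable: every root-to-leaf path is an explicit,
# human-readable rule that a defendant or judge could inspect and contest directly.
glass_box = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
print(export_text(glass_box, feature_names=feature_names))

Nothing about such a sketch resolves the institutional double bind, of course; it only illustrates the kind of transparency the argument above asks us to design in from the start.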
However, this is not just a technical challenge—it is a legal and normative one too. Transparency alone is meaningless if the human in the loop cannot act upon it. My paper argues that true "Human Oversight," as envisioned in emerging frameworks like the EU AI Act, requires more than just receiving an explanation; it requires the effective power to disagree. We must couple interpretable models with a robust "Right to Contest," creating a socio-legal infrastructure where algorithmic outputs are treated not as objective verdicts, but as contestable evidence. Without this normative shift, even the most transparent model risks becoming another tool for bureaucratic validation rather than justice.
As an early-career researcher, I hope this paper contributes to the ongoing conversation about how we can build AI systems that support, rather than supplant, human ethical judgment.
Read the full paper: https://rdcu.be/e2MXo