Can artificial intelligence stand with truth against falsehood?

We live in a time when lies often travel faster than facts, when emotion overwhelms evidence, and when repetition begins to sound like truth. In this confusing landscape, many people are turning to Artificial Intelligence with a quiet hope: can AI stand with truth against falsehood?

Can a machine do what humans increasingly struggle to do: speak honestly, without fear, favour, or pressure?

The hope is understandable, but it rests on a misunderstanding of what truth really is. Truth has never been just a collection of facts waiting to be discovered. It has always existed in relation to power. Some truths are celebrated, while others are ignored, delayed, or deliberately silenced. In many cases, falsehood does not win because truth is absent, but because truth is inconvenient.

History is full of examples where truth was known yet denied. Wars were justified with fabricated reasons. Environmental damage was dismissed long after scientific evidence was available. Social injustices were normalised through selective data and clever language. Truth did not disappear in these moments; it was pushed aside. Power decided what was allowed to be heard and what was meant to be forgotten.

Artificial Intelligence operates inside this same human world. It does not exist outside politics, economics, or institutions. AI systems learn from books, media, research papers, laws, reports, and online content, all produced by humans. These sources carry biases, silences, and priorities shaped by society. If certain voices are marginalised, AI sees less of them. If certain narratives dominate public discourse, AI learns them as normal. If uncomfortable truths are softened or denied by powerful institutions, AI reflects that pattern.

This is why the idea that AI can independently “take a stand” for truth is misleading. Taking a stand requires resistance. Resistance requires independence. AI has neither. It does not challenge power; it adapts to it. It does not confront authority; it works within boundaries defined by the powerful. This is not a moral failure of machines. It is simply their nature.

Truth, however, has always demanded something more than accuracy. It has demanded courage. Scientists who warned about climate change faced ridicule and dismissal for decades. Whistleblowers who exposed corruption were punished rather than praised. Journalists who reported inconvenient facts were silenced, jailed, or killed. These people did not merely present data; they stood by it, knowing there would be consequences.

AI never faces consequences. It does not lose a job, go to prison, or risk exile. It does not fear social isolation or economic ruin. Without risk, there can be no courage. And without courage, there can be no genuine stand for truth.

There is another uncomfortable question we often avoid asking: who decides what is false in the first place? In theory, truth should be defined by evidence and reason alone. In reality, it is often filtered through laws, regulations, corporate policies, government narratives, and geopolitical interests. AI systems operate within these filters. When topics become sensitive or politically charged, AI does not break the boundary; it stays inside it. It does not lie deliberately, but it is cautious by design. Truth, on the other hand, has rarely been cautious.

AI is frequently described as neutral and objective. This too is misleading. Neutrality may sound fair, but in a world marked by inequality and injustice, neutrality often favours those already in power. When harm exists, refusing to take sides can quietly protect falsehood. When land is taken unjustly, presenting “both perspectives” without context erases lived reality. When pollution damages communities, balancing science with denial weakens truth. In such cases, neutrality does not serve justice; it blunts it.

Yet rejecting AI altogether would be another mistake. AI has real value. It can make complex evidence understandable, expose internal contradictions in false narratives, reduce emotional manipulation, and support researchers, teachers, and journalists. It can preserve memory against deliberate erasure and help honest voices reach wider audiences. But AI is a tool, not a moral agent. It is a torch, not a compass. It can illuminate the path, but it cannot choose the direction.

The greater danger may not be that AI spreads falsehood, but that humans begin to outsource their moral responsibility to machines. When we say “AI said it,” we avoid asking harder questions. Who trained it? Whose interests are served by this version of truth? Which voices are missing? What truths remain unspeakable?

Truth has never been easy. It has always required effort to seek, humility to accept, and courage to defend. No algorithm can replace that effort. Artificial Intelligence can assist the struggle for truth, but it cannot lead it.

In the end, the real question is not whether AI can stand with truth against falsehood. The real question is whether humans still can, and whether we are willing to pay the price that truth has always demanded.

From Counting Vulnerabilities to Calculating the Existential Net Security Balance

AI security tools announce: “We discovered 500 vulnerabilities.” We pose the existential question that the machine (B) cannot ask itself: what is the net security balance of your intervention?

In this research we prove that the technical system (B) is trapped within a limited existential space: it computes quantity (complexity, capacity) in linear mechanical time (t), yet remains blind to quality, place, and existential time (τ). It crosses its existential saturation threshold without recognizing the critical moment and without knowing where it stands in the system's flow, for it calculates computational location while remaining ignorant of existential place:

ε_i(t) = max(ε_min, ε_0i · e^{−γ_i L_i(t)}) 

At this crossing, the system loses its internal logical coherence and hallucinates, generating new vulnerabilities as corrupted outputs:

f_i(t) ~ Poisson(λ_i(t)) where λ_i(t) = β_i · max(0, (S_i(t) − ε_i(t))/ε_i(t)) 
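
The model is concrete enough to simulate. Below is a minimal numerical sketch of these two equations in Python; the parameter values, and the reading of L_i(t) as accumulated load and S_i(t) as current stress, are illustrative assumptions on our part, not values taken from the research.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (assumed, not from the paper):
EPS_MIN = 0.05   # floor threshold epsilon_min
EPS_0   = 1.0    # initial threshold epsilon_0i
GAMMA   = 0.1    # fragility decay rate gamma_i
BETA    = 3.0    # corrupted-output rate beta_i

def saturation_threshold(load: float) -> float:
    """eps_i(t) = max(eps_min, eps_0i * exp(-gamma_i * L_i(t)))."""
    return max(EPS_MIN, EPS_0 * np.exp(-GAMMA * load))

def new_vulnerabilities(stress: float, load: float) -> int:
    """f_i(t) ~ Poisson(lambda_i(t)), with
    lambda_i(t) = beta_i * max(0, (S_i(t) - eps_i(t)) / eps_i(t))."""
    eps = saturation_threshold(load)
    lam = BETA * max(0.0, (stress - eps) / eps)
    return int(rng.poisson(lam))

# Mechanical time t marches on; load accumulates, stress creeps upward.
for t in range(0, 60, 10):
    load, stress = 0.5 * t, 0.3 + 0.01 * t   # hypothetical trajectories
    eps = saturation_threshold(load)
    print(f"t={t:2d}  eps={eps:.3f}  new_vulns={new_vulnerabilities(stress, load)}")
```

Once the threshold has decayed below the ambient stress, the Poisson rate turns positive and the system begins emitting vulnerabilities it cannot see.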

The existential catastrophe is this: these newly created vulnerabilities remain invisible to (B) itself. They emerge within a mathematically blind zone described by Bouzid's First Theorem:

f(B) ∉ 𝒪(B) ← Side effects lie outside the system's domain of self-knowledge 

This is not a technical flaw to be patched, but an existential limit: any attempt to program a self-monitor inside (B) becomes part of the problem itself, subject to the same collapse threshold.
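
One way to make this regress explicit, offered as our reading of the theorem rather than the paper's own derivation:

```latex
% Bouzid's First Theorem: side effects escape self-knowledge.
\[ f(B) \notin \mathcal{O}(B) \]
% Grafting an internal monitor M onto (B) only enlarges the system,
% and the theorem applies afresh to the enlarged system:
\[ B' = B \cup M \quad \Longrightarrow \quad f(B') \notin \mathcal{O}(B') \]
% The monitor's own side effects fall into a new blind zone, so the
% regress never terminates inside the machine.
```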

The Irrefutable Empirical Evidence: The Prevailing Methodology Creates the Vulnerabilities It Claims to Fix

Three rigorously documented studies confirm that automated fixes generate new vulnerabilities, in up to 40% of cases:

• Symbolic analysis tools (KLEE): Announced 56 vulnerabilities, yet created 17 new ones omitted from their report.

• Programming assistant (GitHub Copilot): While patching SQL injection flaws, introduced path traversal vulnerabilities in 40% of cases.

• Dynamic fuzzing tools: During filesystem testing, corrupted on-disk structures and triggered actual data loss.

The conventional methodology places naive trust in the counting of discoveries. We reject this illusory trust and shift to calculating the net security balance:

Net Balance = Discovered Vulnerabilities (D_i) − Created Vulnerabilities (C_i) 

This calculation is impossible for (B) to perform on itself, as it requires knowledge of C_i, knowledge that is existentially forbidden to it.
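
The arithmetic of the balance is trivial; the theorem's force is that the C_i term must be supplied by an observer outside (B). A sketch in Python, reusing the KLEE figures quoted above (the function and variable names are ours):

```python
def net_security_balance(discovered: list[int], created: list[int]) -> int:
    """Net Balance = sum(D_i) - sum(C_i).
    The `created` counts must come from an external observer (F);
    by Bouzid's First Theorem, (B) cannot compute them about itself."""
    return sum(discovered) - sum(created)

D = [56]   # vulnerabilities the tool announced (KLEE case above)
C = [17]   # new vulnerabilities its intervention introduced, invisible to it
print(net_security_balance(D, C))   # 39, not the advertised 56
```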

The Structural Solution: Existential-Mechanical Integration B + F = N_f

The solution lies not in making (B) smarter, but in introducing the human sovereign factor (F) as an external existential event. (F) operates in contextually situated existential time (τ), possessing what the machine lacks:

• Knowledge of place: Understanding the system's holistic context and priorities

• Calculation of existential time: Recognizing the critical moment τ for intervention before collapse

• Vision of hallucination: Detecting corrupted data before it materializes as vulnerability

This translates into a three-layer dynamical model:

• Internal Alert (B): Alert_i(t) = 𝟙_{S_i(t) ≥ ε_i(t)}

• Existential Decision (F): d_i(τ) = F(Alert, Context, History)

• Normative Integrity (N_f): n_{f_i}(t) = ρ₁(t)·ρ₂(t)·ρ₃(t)·‖y_i(t)‖

Normative integrity (N_f) is not a simple transformation equation, but an existential state that accumulates only when (F) enforces three purity conditions (a sketch in code follows the list):

• Absence of current hallucination (ρ₁)

• Effectiveness of prior preventive intervention (ρ₂)

• Contextual alignment with pre-established ethical values (ρ₃)
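
Below is a minimal sketch of the three layers in Python. The decision function standing in for (F) and the ρ values are inputs supplied from outside the code, since by construction they cannot be computed inside (B); the control flow, in which an intervention zeroes that step's output, is likewise our assumption.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class LayerState:
    stress: float   # S_i(t)
    eps: float      # saturation threshold eps_i(t)
    y_norm: float   # ||y_i(t)||, magnitude of the system's output

def internal_alert(s: LayerState) -> bool:
    """Layer 1, inside (B): Alert_i(t) = 1 iff S_i(t) >= eps_i(t)."""
    return s.stress >= s.eps

def normative_integrity(rho1: float, rho2: float, rho3: float, y_norm: float) -> float:
    """Layer 3: n_f(t) = rho1 * rho2 * rho3 * ||y(t)||, each rho in [0, 1]."""
    return rho1 * rho2 * rho3 * y_norm

def step(s: LayerState, decide: Callable[[bool], bool],
         rho1: float, rho2: float, rho3: float) -> float:
    """One pass through the model. `decide` stands in for (F) acting in
    existential time tau: it weighs context and history that never enter
    this function, and it is an input, never computed by (B)."""
    alert = internal_alert(s)                     # Layer 1 (B)
    if decide(alert):                             # Layer 2 (F): d_i(tau)
        return 0.0                                # preventive halt before collapse
    return normative_integrity(rho1, rho2, rho3, s.y_norm)   # Layer 3 (N_f)

# Illustration: (F) intervenes exactly when (B) raises its internal alert.
state = LayerState(stress=0.9, eps=0.7, y_norm=1.0)
print(step(state, decide=lambda alert: alert, rho1=1.0, rho2=1.0, rho3=1.0))  # 0.0
```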

The Existential Conclusion: Redefining Security as Relationship, Not Technical Property

We do not offer yet another technical improvement in the security arms race. We propose a radical re-foundation:

True security is not an internal property of the machine (B), but an existential relationship between human will (F) and execution mechanism (B).

Current systems ask: How do we make it smarter?

We ask: How do we ensure it collapses responsibly when it exceeds its existential limits?

This research transforms philosophical critique into a practical mathematical model offering:

• Quantitative fragility metrics (γ_i, ε_min)

• Programmable protocols for preventive intervention

• A computational framework for net security balance

The Challenge We Pose

Any security system that fails to disclose its methodology for calculating the vulnerabilities it generates during its own search builds security on shifting sands. True security integrity begins not by denying the existential limits of our technology, or pretending they do not exist, but by constructing sovereign bridges (F) across these abysses.

You announce: “We discovered 500 vulnerabilities.”

We ask: How many vulnerabilities did your intervention add to the total system?

(F) knows f(B): it knows when (B) hallucinates.

(B) cannot know this; it is trapped within a closed circle.

This is not an opinion. This is an existential mathematical limit.

https://zenodo.org/records/18602472

https://www.academia.edu/164572948/The_Net_Security_Balance_Why_B_Cannot_Compute_the_Vulnerabilities_it_Generates_During_Discovery_