When AI Reshapes Technology-Facilitated Gender-Based Violence: From Visibility to Synthetic Reality and Epistemic Uncertainty
A phenomenon that did not start with AI
Technology-Facilitated Gender-Based Violence (TF-GBV) is not a new phenomenon. It has evolved alongside digital communication systems, social media platforms, and mobile technologies that progressively reshaped interpersonal interactions. Online harassment, non-consensual sharing of intimate images, identity theft, stalking, and grooming have long been documented within the digital violence ecosystem.
These forms of violence share a common characteristic: they depend on connectivity and visibility, emerging in environments where interaction, exposure, and accessibility are structurally amplified. However, while TF-GBV is not new, its conditions of production are undergoing a profound transformation with the rise of generative artificial intelligence. This shift is not incremental but structural.
Understanding generative AI without technical language
At a functional level, generative AI systems can be understood through three core capacities:
- Generation: producing new content (text, images, audio, video) that did not previously exist
- Amplification: enabling large-scale, rapid, and repeated content production with minimal effort
- Obfuscation: making the origin, authenticity, and attribution of content increasingly difficult to verify
These three mechanisms, namely generation, amplification, and obfuscation, form the foundation for understanding how AI reshapes TF-GBV.
From recorded reality to manufactured reality: Before AI vs. After AI
Historically, digital violence relied on the manipulation, extraction, or redistribution of content rooted in reality: captured images, recorded videos, stolen identities, or real interpersonal interactions. Even when harmful, such content remained anchored in something that had existed.

Generative AI disrupts this logic. Harmful content no longer needs to be derived from reality; it can be entirely constructed. Through advances in image, text, and voice generation, AI systems enable the creation of highly realistic yet fully synthetic content: intimate images of individuals who were never photographed in such contexts, cloned voices used for deception or coercion, fabricated conversations simulating interpersonal exchanges, and AI-generated identities deployed for manipulation or grooming. At the same time, algorithmic amplification systems ensure that such content can be disseminated rapidly, at scale, and with limited control once released into digital environments.

Together, these capabilities do not simply extend existing forms of violence; they redefine the conditions under which violence is produced and circulated. Violence is no longer constrained by what exists, but by what can be generated.
1. From constrained production to scalable fabrication
Before AI, harmful content required access to real material, technical skills, or physical proximity. This imposed natural constraints on scale, speed, and dissemination. Traceability, although imperfect, was sometimes possible through digital footprints. With generative AI, these constraints collapse. Harmful content can now be produced instantly, without real-world referents and with minimal expertise. Identity, voice, and imagery can be convincingly synthesized. As a result, harmful practices become automated, scalable, and difficult to attribute. This shift transforms the logic of violence from effort-intensive production to low-cost, high-scale fabrication.
2. The erosion of evidentiary boundaries
A major consequence of this transformation is the weakening of epistemic certainty in digital environments. When synthetic media becomes indistinguishable from authentic content:
- visual evidence loses reliability,
- audio content loses evidentiary authority,
- digital interactions no longer guarantee authenticity.
This produces a structural paradox: the more realistic synthetic content becomes, the less stable our perception of reality becomes. For practitioners addressing TF-GBV, the challenge is no longer only verifying whether harm occurred, but also navigating situations where harm is plausible, simulated, or uncertain. This has direct implications for legal frameworks, psychological assessment, and institutional response systems.
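To make the evidentiary problem concrete, consider a minimal sketch in Python, using only the standard library's hashlib. The byte strings below are hypothetical stand-ins for media files; nothing here refers to any real case or system. The sketch illustrates one structural limit: technical integrity checks certify that bytes are unchanged, not that a depicted event ever occurred.

```python
# A minimal sketch of what integrity checks can and cannot establish.
# The byte strings are hypothetical stand-ins for media files.
import hashlib

def fingerprint(data: bytes) -> str:
    """Return the SHA-256 hex digest of a byte sequence."""
    return hashlib.sha256(data).hexdigest()

captured  = b"bytes of a photograph taken by a camera"   # hypothetical
synthetic = b"bytes of an image generated by a model"    # hypothetical

# Both hash equally well: a digest certifies that the bytes are unchanged,
# not that the depicted scene occurred.
print(fingerprint(captured))
print(fingerprint(synthetic))

# Worse for evidentiary use: any benign transformation (a screenshot,
# a resize, a platform re-encode) yields a completely different digest.
print(fingerprint(captured + b" "))  # one byte appended, entirely new fingerprint
```

Provenance schemes that cryptographically sign content at capture attempt to go further, but they attest only to which device or party signed a file, not to the truth of what it shows; the gap between integrity and authenticity remains.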
A turning point in understanding TF-GBV: systemic violence and psychological uncertainty
Generative AI lowers the entry barrier for producing abusive content, enabling a broader range of actors to engage in harmful behavior. This leads to systemic transformations: an increase in potential perpetrators, diversification of abusive content, reduced visibility of origin and accountability, and faster circulation before detection or intervention. As a result, TF-GBV becomes more diffuse, less attributable, and increasingly embedded within socio-technical systems rather than linked to identifiable individuals. Importantly, AI does not create gender-based violence. However, it reshapes its operational conditions across three interrelated dimensions:
- Scale: increased volume, actors, and reach,
- Speed: near-instant generation and dissemination,
- Plausibility: blurred boundaries between real and synthetic content.
Within this transformation, a crucial psychological dimension emerges. Traditional TF-GBV frameworks focus on exposure: being targeted, surveilled, or having private content disseminated. AI introduces an additional layer in which uncertainty itself becomes a form of harm.
Individuals may experience:
- persistent doubt about the authenticity of content involving them,
- anxiety about potential fabrication of sexual or humiliating material,
- erosion of trust in digital representations and interactions,
- anticipatory fear of exposure that may not yet exist.
This shifts the psychological impact of TF-GBV from reactive harm (what has happened?) to anticipatory harm (what could be fabricated and circulated?). The burden extends beyond lived experience to include plausible simulation. Together, these dynamics mark a turning point in how TF-GBV is experienced, interpreted, and addressed. The phenomenon is no longer only about discrete harmful acts, but about systemically enabled conditions of possibility.
The main epistemic question thus shifts from:
“What happened?”
to, increasingly:
“What could be made to appear as if it happened?”
Beyond generative AI: bias, stereotypes, and epistemic distortion in discriminative AI
While generative AI plays a central role in enabling the large-scale production of synthetic harmful content, it is not the only form of AI contributing to the transformation of TF-GBV. Discriminative and predictive systems, particularly those trained on biased or unrepresentative data, also shape harmful dynamics in less visible but equally consequential ways.

These systems are widely used in content moderation, recommendation engines, hiring tools, and risk assessment models. When trained on historical or skewed data, they can reproduce and amplify existing gender stereotypes. For example, recruitment algorithms may disproportionately associate leadership or technical roles with men, while ranking or filtering women differently in job recommendations. Similarly, content moderation systems may unevenly classify or deprioritize reports of harassment, leading to differential visibility and response to abuse. Recommendation systems can also amplify harmful or sexualized portrayals of women by optimizing for engagement, thereby reinforcing biased representations.
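The mechanism behind such skew can be shown in a few lines. The sketch below is purely illustrative: it trains a standard logistic regression on synthetic "historical hiring" data in which past decisions partly depended on gender, then scores two identically qualified candidates. Every feature, number, and label is a hypothetical assumption, not a description of any deployed system.

```python
# A minimal, hypothetical sketch of how a classifier trained on skewed
# historical data reproduces a gendered association. All data is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

skill = rng.normal(0, 1, n)        # true qualification signal
gender = rng.integers(0, 2, n)     # 0 = women, 1 = men (proxy feature)

# Historical labels: past hiring depended on skill AND on gender,
# encoding discrimination directly into the training data.
past_hire = (skill + 0.8 * gender + rng.normal(0, 0.5, n)) > 0.8

X = np.column_stack([skill, gender])
model = LogisticRegression().fit(X, past_hire)

# Two candidates with identical qualifications, differing only in the proxy.
candidates = np.array([[1.0, 0], [1.0, 1]])
p_woman, p_man = model.predict_proba(candidates)[:, 1]
print(f"P(hire | woman) = {p_woman:.2f}, P(hire | man) = {p_man:.2f}")
# The gap reflects learned bias, not merit: the model has simply
# internalized the historical pattern it was shown.
```

Nothing here is malfunctioning in a narrow technical sense: the model faithfully reproduces the pattern it was given, which is precisely why biased training data translates into biased outputs at scale.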
Beyond the reproduction of stereotypes, such systems can generate misleading or incorrect inferences that appear credible. For instance, an automated system may wrongly flag a victim as a perpetrator, misclassify consensual versus non-consensual content, or infer behavioral patterns that do not reflect reality. These outputs are not necessarily fabricated in the same way as generative content, but they still contribute to systemic misrepresentation.

This dynamic intersects with the phenomenon of AI hallucinations. Hallucinations are typically defined as plausible but factually incorrect outputs generated without explicit intent. In contrast to maliciously crafted synthetic content (such as deepfakes), hallucinations are not inherently intentional. However, their impact can be significant: they introduce credible inaccuracies that further blur the boundary between truth and fabrication. Moreover, when combined with biased data or flawed model assumptions, these outputs can reinforce harmful narratives, including gendered stereotypes or false associations. Importantly, while hallucinations themselves are unintentional, their outputs can be selectively exploited or recontextualized by malicious actors, thereby becoming part of harmful dynamics. This creates a continuum between error, bias, and intentional misuse.

As a result, the risks associated with TF-GBV do not stem solely from what AI systems can generate, but also from how they classify, rank, interpret, and prioritize information. Addressing these risks requires going beyond content generation to critically examine data quality, bias, model design, and the socio-technical contexts in which these systems operate.
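To make the notion of a credible inaccuracy concrete, the following sketch shows one structural reason such outputs appear trustworthy: a probabilistic classifier must distribute its confidence across the classes it knows, so it can report near-certainty even for inputs unlike anything it was trained on. The data and class labels below are hypothetical assumptions.

```python
# A minimal sketch of why model outputs can look credible while being wrong:
# a probabilistic classifier spreads confidence over its known classes,
# even for inputs far outside its training distribution. Purely illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X = rng.normal(0, 1, (1000, 3))
y = (X[:, 0] > 0).astype(int)   # hypothetical classes: 0 = "benign", 1 = "abusive"
model = LogisticRegression().fit(X, y)

# An out-of-distribution input, far from anything seen in training.
odd_input = np.array([[40.0, -35.0, 12.0]])
confidence = model.predict_proba(odd_input).max()
print(f"Reported confidence: {confidence:.3f}")   # typically near 1.0
# High confidence here signals nothing about correctness: the system
# classifies and ranks regardless of whether its answer is grounded.
```

Read against the continuum described above, the lesson is that reported confidence measures internal consistency, not grounding in reality; a confident misclassification can circulate with the same apparent authority as a correct one.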
Towards collective reflection and actionable recommendations
Understanding the intersection between AI and TF-GBV requires moving beyond purely technical discourse toward deeper interdisciplinary engagement, including psychology and trauma studies, digital sociology, legal and regulatory frameworks, evidentiary standards, and platform governance with safety-by-design approaches. This also highlights a persistent and widening gap between technological development and legislative response. While AI systems evolve rapidly and continuously reshape modalities of harm, legal and policy frameworks often remain slower to adapt, creating regulatory blind spots and challenges in attribution, accountability, and enforcement.
This gap is not only temporal but structural: law is designed around stable, verifiable realities, whereas generative AI operates in synthetic, probabilistic, and rapidly shifting environments. Addressing this mismatch requires more than updating legislation. It calls for rethinking evidentiary standards, strengthening regulatory agility, and developing governance mechanisms capable of responding to synthetic realities. What is at stake is not only technological evolution, but the reconfiguration of trust, authenticity, accountability, and harm in digital ecosystems. In this context, the appropriate response is neither technological optimism nor pessimism, but structured, interdisciplinary, and policy-aware reflection on how societies can interpret and regulate environments where the boundary between real and synthetic is increasingly blurred.