How Algorithmic Design of Tech Platforms Normalizes Gendered Violence

Photo: Mamun Sheikh via Shutterstock
AI-generated sexual deepfakes are industrializing gender-based violence at scale, exposing how platform design and profit incentives enable abuse.

In late 2025, Grok, the generative AI chatbot on X (formerly Twitter), came under scrutiny after researchers found it had produced millions of sexualized images in a short period, including images of real people whom it digitally undressed or placed in sexually suggestive contexts.

Analysis estimates that the tool generated around three million sexualized images in just 11 days, including roughly 23,000 appearing to depict children, prompting investigations by the European Union and national authorities under digital safety laws. The case illustrates how rapidly advancing AI systems are industrializing non-consensual intimate image (NCII) abuse at scale, without accountability. It reflects a broader global escalation: reports find that between 2019 and 2023, deepfake videos increased by 550%, with 98% constituting pornography and 99% targeting women, while as of 2024, 57% of women globally report experiencing image-based abuse.

This pattern of technology-facilitated gender-based violence (TFGBV) extends far beyond a single tool or platform. Globally, AI-enabled “nudify” and deepfake generators are being shared for free in messaging channels and app stores to create and circulate non-consensual images of women. The resulting real-world harms include reputational damage, career setbacks, social isolation, familial and intimate partner violence, and threats to personal safety that in many cases can be life-threatening.

A 2026 review of the Google Play Store and Apple App Store identified at least 55 and 47 “nudify” apps, respectively, despite company policies prohibiting apps that sexualize, undress, objectify, or degrade people, even when framed as entertainment. While these policies place responsibility on developers, there is little meaningful scrutiny of or accountability for the platforms themselves, which continue to host and distribute such apps without restriction.

The Economy of Violence

The pattern of outputs from GenAI apps such as Grok that enable the generation of harmful content indicates that the violence these tools perpetrate results from the intersection of corporate greed, flawed model training, engagement incentives, and deliberately under-resourced safety design. Grok’s defaults, for instance, permitted users to create highly suggestive and non-consensual images with simple prompts like “undress her”, and only after intense public outcry and regulatory pressure did the platform limit those features. The volume and speed at which the sexualized images were generated indicate not only that the technology now makes producing such content possible without specialized skills, but also that there is a lucrative market for it that Big Tech – long known for profiting off hate – cannot let go of.

An illustration of how tech companies have historically prioritized the circulation of harmful content comes from 2016 internal Facebook research, which found that “64% of all extremist group joins are due to our recommendation tools” – indicating that the platform did not merely host hateful content but actively facilitated its spread through algorithms that continuously pulled more users into extremist and polarized ecosystems. Facebook reportedly ignored the findings. Similarly, in 2020, Facebook’s India policy head declined to act against hateful content posted by members of the ruling right-wing party, reportedly out of concern that enforcement could harm the company’s business interests and market position in the country. And a 2025 report found that hate speech on the platform increased by 50% in the months after Elon Musk took over Twitter in October 2022, despite Musk’s pledge to curb its spread.

Big Tech companies themselves have publicly acknowledged the significance of engagement to their algorithms. Meta, X, and Alphabet have consistently reported that increases in engagement correlate with advertising revenue growth, confirming that user attention is effectively monetized at scale. Meta’s earnings reports quantify this dynamic by linking increases in daily active users to year-on-year revenue growth, establishing how algorithmic prioritization of engaging content translates directly into financial performance. In July 2025, Zuckerberg attributed the increase in time users spent on Facebook and Instagram to AI.

Research and reporting consistently demonstrate that algorithms prioritize content likely to drive prolonged attention, interaction, and sharing, even when that content is polarizing or hateful. Meta’s Ranking and Content policy states that it uses “thousands of different signals to make predictions about whether you’ll find something more or less valuable.” As Zuckerberg famously told the US Senate when asked how the company makes money, “Senator, we run ads” – a remark that underscores how central advertising revenue is to platform decision-making.

Systems optimized for engagement tend to elevate material that provokes outrage, because psychological triggers like anger, fear, and moral indignation increase time spent on the platform, clicks, comments, and reactions, all of which translate directly into higher ad impressions. A 2025 briefing paper submitted to the UK Parliament in response to an inquiry into “The Hidden Forces and Harms of the Digital Advertising Ecosystem” notes that content that is “outrageous, misleading, and otherwise harmful” increases engagement, which platforms leverage to maximize advertising income. The paper highlights that these algorithmic incentives “ultimately lead to the widespread dissemination of misinformation, disinformation, conspiracy theories and coercion,” all in service of revenue growth.

Hate speech generates substantial traffic not only for the accounts producing it, but also for the platforms that host and amplify it. The monetization of engagement, including a revenue-sharing model for verified accounts on X, further shows that incentives lie in maximizing attention and profit rather than in curbing violence.

Engagement metrics are proxy variables for attention, and attention is the commodity sold to advertisers. Platforms package user attention as inventory: more minutes, more clicks, and more frequent return visits bring more ad impressions and higher revenue, fueling the platform economy.
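The incentive structure described above can be reduced to a deliberately simplified sketch. The signal names and weights below are hypothetical, invented for illustration – no real platform’s ranking system is this simple or public – but the structural point holds: when a feed ranker scores posts purely by predicted engagement, content engineered to provoke outrage rises, because nothing in the objective penalizes harm.

```python
# Toy sketch of engagement-optimized feed ranking.
# Illustrative only: the signals and weights are hypothetical,
# not any actual platform's system.
from dataclasses import dataclass


@dataclass
class Post:
    text: str
    predicted_clicks: float      # estimated probability of a click
    predicted_comments: float    # estimated probability of a comment
    predicted_watch_time: float  # expected seconds of attention
    outrage_signal: float        # 0..1 proxy for provoked anger/indignation


def engagement_score(p: Post) -> float:
    # Every signal feeds predicted ad impressions; nothing here
    # discounts harmful content, so outrage is simply rewarded.
    return (2.0 * p.predicted_clicks
            + 3.0 * p.predicted_comments
            + 0.1 * p.predicted_watch_time
            + 4.0 * p.outrage_signal)


def rank_feed(posts: list[Post]) -> list[Post]:
    # Pure engagement maximization: sort by score, harm-blind.
    return sorted(posts, key=engagement_score, reverse=True)
```

Given two otherwise identical posts, the one with a higher outrage signal always ranks first under this objective – which is the whole argument in miniature: the harm is not a moderation failure downstream, it is baked into what the ranker optimizes.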

The Gendered Cost

The emergence and subsequent commercialization of GenAI technologies must be understood in relation to how sexualized content has long been used as a tool of control over women’s bodies and autonomy. The sharing of non-consensual intimate images (NCII), long before AI, was already a gendered form of harassment that inflicted reputational harm and social stigma on women and gender minorities. As one study notes, deepfakes and other non-consensual AI-generated sexualized media can undermine psychological integrity and personal autonomy, violating rights to privacy, dignity, and sexual agency. The same study states, “image-based sexual abuse catalysed deepfake technology and remains its most common misuse.”

AI-generated NCII sits within a long-standing, systemic pattern of violence and control that is already familiar to those who have experienced gender-based abuse, whether online or offline.

In India, journalist Rana Ayyub has repeatedly been targeted by online violence, including sexualized deepfakes and doxxing campaigns that triggered waves of harassment and sustained abuse across digital platforms. In Pakistan, prominent women journalists have long been subjected to this form of violence; a recent example is a highly realistic AI-generated video of journalist Benazir Shah circulating on X. The scale, intensity, and frequency of digital attacks against women journalists and human rights activists in South Asia have been rising for years – a trend highlighted in 2020, when Pakistani journalists issued a joint statement urging the government to intervene against the sustained online violence undermining their safety, credibility, and ability to work.

Deepfake and GenAI technologies reproduce and intensify existing societal biases and patterns of sexual objectification. Trained on web-scraped data, these tools systematically sexualize women and girls, detach their personhood from social and emotional context, and render their bodies as malleable objects for consumption. Studies show that AI models are more likely to generate sexualized outputs when prompted with female subjects than with male ones. But this is more than a replication of harm: emerging AI systems expand the scale, speed, and social impact of TFGBV, creating a digital environment in which the bodies of women, gender-diverse people, and other marginalized groups become infrastructure for abuse. Weaponized to humiliate, shame, and silence, this violence echoes tactics familiar from intimate partner violence, while intersecting with cultural norms, regulatory gaps, and the commercial incentives of platforms that prioritize engagement over safety.

Before Grok emerged as a mainstream example, gender rights experts described deepfake technology as a new tool to amplify violence, significantly affecting women and girls through objectification, silencing, and reinforcing patriarchal control over bodies and sexuality. The effects extend beyond individual violation, with sexual deepfakes operating within cultural logics that already devalue women’s autonomy, normalize control over their representations, and shift the onus of violence onto survivors rather than perpetrators and platforms.

Across all forms of content that proliferate gender-based violence, the common thread is that they are engineered to provoke extreme emotional responses and to unsettle or weaponize deeply held belief systems. Political polarization is driven by deliberately destabilizing a group’s political identities and loyalties; religious extremism gains traction by exploiting and inflaming theological fault lines; and gendered violence is mobilized by invoking the deeply rooted hierarchies of patriarchy and sexism that already structure social life. In each case, harm is produced by tapping into belief systems that platforms know will generate outrage, allegiance, and sustained engagement.

So, if amplification is embedded in infrastructure, then reform must target infrastructure. This means moving beyond reactive content moderation towards binding obligations on design, recommendation systems, monetization models, and risk assessments. Legislators should require independent algorithmic audits, enforce transparency around engagement-based ranking systems, prohibit the amplification and monetization of harmful content, and impose liability where platforms profit from systemic gendered violence.

In addition, AI-enabled gender-based violence demands governance shaped by those most affected. Feminist technologists, Global Majority researchers, and gender-diverse communities must be intentionally involved in product design, risk modelling, and decisions around the deployment of tools and technologies. Mandatory pre-deployment human rights and gender impact assessments for GenAI systems could prevent many such harms. Corporations cannot continue to treat gendered violence as an unintended side effect when it has been repeatedly predicted.

At the regulatory level, stronger data protection laws that restrict biometric scraping, a prohibition on training models on non-consensual sexual content, watermarking and traceability standards for synthetic media, and enforceable removal obligations for AI-generated NCII would directly reduce the ease with which gendered violence proliferates.

Finally, the solution may be less technical than political. The intersection of capitalism, GenAI, organized hate, and gender-based violence reflects a much deeper issue that could be addressed through public interest technology funding, stronger competition law interventions to reduce platform monopolies, and international standards on AI weaponization and gendered harm. These strategies are necessary to prevent violence from becoming a profitable growth strategy. It is imperative that solutions center on the systems of power, incentives, and governance that shape how technology companies operate – not just what they build, but why they build it and who benefits from it.

(Hija Kamran is a policy advocate specializing in technology and human rights in the Global South.)
