Artificial Intelligence and the Escalation of Political Manipulation in South Asia


In an age when videos can be fabricated and broadcast to millions within seconds, the central question is no longer whether political messages are real, but whether anyone can reliably tell the difference. The advent of artificial intelligence has brought long-standing fears about the future of online information into sharp focus. Analysts warn of deepfakes that could destabilize elections and erode trust in media, contributing to the proliferation of disinformation and other online harms. Yet the actual impact of synthetic media, i.e., content that is digitally created or manipulated using technologies such as Generative Adversarial Networks (GANs), Variational Autoencoders (VAEs), and diffusion models, on these existing challenges remains largely speculative.

In South Asia, where high internet penetration intersects with low media and digital literacy, AI does not introduce new forms of political manipulation so much as it intensifies familiar patterns of digital propaganda. The ways political parties in India, Pakistan, and Bangladesh deploy these tools reveal a shift in both scale and speed. The key question is therefore not whether AI will revolutionize political communication, but how it amplifies established strategies and exacerbates existing harms across the contemporary South Asian landscape.

AI in Political Communication

Political parties have long embraced new technologies to reach voters and mobilize supporters. In India, social media has been central to the ascendancy of Prime Minister Narendra Modi and the Bharatiya Janata Party (BJP), enabling them to dominate the political landscape as digital platforms expanded both the reach of political communication and the scope for experimentation within it. In Pakistan, Imran Khan’s rise, popularity, and mobilization of young voters have likewise been closely tied to the Pakistan Tehreek-e-Insaf’s (PTI) sophisticated use of social media.

The BJP, in particular, has been widely accused of weaponizing misinformation as a deliberate political strategy. Scholars, journalists, and fact-checking organizations have documented how the party’s extensive ‘IT Cell’ amplifies misleading narratives, doctored images, and fabricated stories to shape public opinion and discredit opponents. Generative AI has simply supercharged these existing tactics.

For instance, during Maharashtra’s 2024 state assembly elections, BJP spokesperson Sambit Patra shared AI-generated voice clips of Nationalist Congress Party–Sharadchandra Pawar (NCP-SP) leader Supriya Sule allegedly discussing the misappropriation of funds. Deepfakes were also widely used during the 2024 general elections, with an estimated 75 percent of Indians exposed to deepfake content. A BoomLive report found at least eight chatbots in the GPT Store focused on Indian elections in violation of OpenAI’s policies, demonstrating that AI-generated disinformation now extends well beyond manipulated audio and video.

More recently, the BJP’s Assam unit shared videos on X ahead of the Bodoland Territorial Council elections depicting a dystopian future in which Muslims seize public land, convert landmarks into Islamic sites, and sell beef for consumption. The same account had previously circulated similar AI-generated content, including an image of Congress leader Gaurav Gogoi wearing a skullcap, an item commonly associated with Muslim identity and used to portray it stereotypically.

In neighbouring Bangladesh, Jamaat-e-Islami has used AI-generated videos to falsely portray its supporter base as more diverse (including members of the Hindu minority) than it actually is. The party claims such videos are “supporter-led” campaigns without official sanction, a convenient deflection. The accessibility of generative AI means that campaigns which once required centralized coordination can now be launched at the regional or local level.

Despite these harms, generative AI is also capable of producing genuinely positive outcomes. In Pakistan’s 2024 general elections, the Imran Khan-led PTI used generative AI to deliver the party’s message in an AI-cloned version of Khan’s voice after his imprisonment in August 2023 rendered him inaccessible. This was seen as part of the party’s broader strategy of using technology to level an uneven playing field.

The Slopaganda Threat

While mis- and disinformation constitutes a major harm emerging from AI use, another threat to the media and information ecosystem remains less studied: AI slop. The term refers to large volumes of low-quality, mass-produced content generated by artificial intelligence: text, images, and videos that clutter people’s social media timelines and make it harder to find accurate and trustworthy information. Such content also undermines journalism and art by devaluing original work and flooding digital spaces with automated output.

Such ‘slop’ can be further weaponized by far-right and extremist actors to propagate their views through social media newsfeeds, a practice often called ‘slopaganda.’ Unlike conventional propaganda, which depends on credibility, authority, or emotional persuasion, slopaganda functions through overproduction and saturation. Exploiting the limits of human attention, it aims to exhaust critical engagement with the media and with verifiable sources of information in order to influence viewpoints and ideologies. Because such imagery is endlessly reproducible, social media users have little choice but to encounter it. As the examples below show, its propagandizing effect lies in how such widespread and pervasive content normalizes dehumanizing tropes and shapes the users who encounter it.

A report to the UK’s House of Commons Home Affairs Committee shows that AI-generated images are increasingly used to propagate anti-Muslim and anti-migrant narratives online, depicting Muslim men as violent or dirty and portraying white people as victims, thereby reinforcing racist stereotypes. Such imagery has also become central to anti-immigrant discourse in Germany. In the United States, Trump’s Make America Great Again movement has embraced AI-generated imagery, a medium that aligns seamlessly with its political aesthetics.

Studies show that generative AI tools are producing and propagating hate against Muslims in India. The discursive hate includes images promoting anti-Muslim conspiracy theories, dehumanizing content, and hypersexualized imagery of Muslim women. These images aim to add a visual dimension to existing Islamophobic discourse. Their spread tracks the broader rise of spam on social media platforms, where spammy text blocks have been used to push unrelated content for years.

The trend of ‘bikini interviews,’ AI-generated videos in which women in skimpy outfits conduct street interviews in India and the UK and are sexually harassed on camera, is demonstrative. The videos, many of which are in Hindi, show male interviewees targeting the female interviewers with misogynistic punchlines and sexualized remarks, and even physically grabbing them. These videos rack up millions of views and have even been monetized to promote adult chat apps that promise users they can “make new female friends.”

The emergence of “slopaganda” underscores how generative AI does not introduce a new logic of persuasion but amplifies the existing ecosystem of political communication. The synthetic turn thus represents continuity, not rupture: older strategies deployed under new technological conditions.

Resisting Algorithmic Populism in the Age of AI

The contemporary era of engagement-driven algorithms has ushered in the politics of “algorithmic populism,” in which digital platforms and their algorithms amplify populist communication by privileging emotionally charged, polarizing, and identity-based content that maximizes engagement. This dynamic is uniquely suited to boost extremist disinformation and hate while granting them the trappings of legitimate popular support. Social media platforms have long been accused of herding users into echo chambers (often referred to as filter bubbles), increasing polarization, and making democratic consensus difficult. AI-generated content supercharges these existing dynamics.
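To make the underlying incentive concrete, the following is a minimal, purely illustrative sketch of an engagement-maximizing ranking function. The fields and weights are hypothetical, not any platform’s actual system; the structural point is that when the only objective is predicted interaction, emotionally charged content is mechanically favored.

```python
# Toy sketch of engagement-driven feed ranking (hypothetical fields/weights).
# Note what is absent: no term for accuracy, civility, or harm. Whatever
# maximizes predicted interaction, including outrage, floats to the top.
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    predicted_likes: float     # hypothetical engagement predictions
    predicted_shares: float
    predicted_comments: float

def engagement_score(post: Post) -> float:
    # Shares and comments are weighted above likes because they generate
    # further impressions; the exact weights here are illustrative only.
    return (1.0 * post.predicted_likes
            + 3.0 * post.predicted_shares
            + 2.0 * post.predicted_comments)

def rank_feed(posts: list[Post]) -> list[Post]:
    return sorted(posts, key=engagement_score, reverse=True)
```

Real recommendation systems are vastly more complex, but the optimization target belongs to the same family: predicted engagement, not truthfulness.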

While South Asian countries are not uniquely susceptible to the threat of algorithmic populism, the discourse around digitization and technology has made criticism of it more contentious. The rapid digitization of South Asia has been driven by an urge to “catch up” with the developed world and project modernity through connectivity, often sidelining critical reflection on how technology reshapes public discourse, social hierarchies, and political participation. Big Tech has exploited this discourse, presenting itself as a neutral intermediary while determining what is visible, amplified, or erased. The discussions surrounding AI unfold within the same framework.

The lack of a critical lens towards technology is further compounded by the fact that access to digital services has sometimes preceded access to more fundamental services, such as banking and finance. For instance, under India’s flagship UPI payments system, Muslim shopkeepers, food delivery workers, and cab drivers have faced violence when their names are revealed during payment. Yet there has rarely been a reflexive discussion of how such technologies can increase the vulnerability of minoritized populations.

Social media platforms have come to occupy a crucial space in civil society discourse, with citizens often taking to them to express a wide range of opinions, from political dissent to social commentary to consumer complaints. Nonetheless, political discussions in South Asia have avoided addressing social media platforms as businesses or reckoning with their fundamental societal impact.

Algorithmic populism emerges not only out of the innate characteristics of algorithmic newsfeeds but also out of the political culture that surrounds them. It has been weaponized most spectacularly by authoritarian regimes, which exploit recommendation algorithms for social media engagement. The use of Coordinated Inauthentic Behaviour (CIB) demonstrates precisely how political operatives, at various levels, can now leverage these technologies to create sophisticated online campaigns whose reach is not the organic outcome of user choices, as Big Tech claims, but the result of deliberate platform design decisions. Generative AI expands algorithmic populism’s scope, enabling it to emulate genuine mass sentiment with a more granular replication of user tone, emotion, and context. For example, CIB is now automated with bots that use large language models (LLMs) to spread misinformation. X, which is already plagued by verified AI bots, has been used by Russian bot farms to impersonate American users with the help of LLMs.
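As a deliberately simplified illustration of how researchers flag such coordination, the sketch below marks pairs of accounts that post near-identical text within minutes of each other, a pattern that is cheap to mass-produce with LLM-driven bots but rare among genuine users. All account names, posts, and thresholds here are invented; production detection systems rely on far richer behavioural signals.

```python
# Toy CIB heuristic: flag account pairs posting near-duplicate text
# within a narrow time window (all data and thresholds are hypothetical).
from difflib import SequenceMatcher
from itertools import combinations

# Hypothetical records: (account, unix_timestamp, post_text)
posts = [
    ("acct_a", 1000, "The election was stolen, share before they delete this!"),
    ("acct_b", 1030, "The election was stolen!! share before they delete this"),
    ("acct_c", 1055, "The election was stolen, share it before they delete it!"),
    ("acct_d", 9000, "Lovely weather at the rally today."),
]

def similar(a: str, b: str, threshold: float = 0.8) -> bool:
    # Crude lexical similarity; real systems use embeddings and many signals.
    return SequenceMatcher(None, a.lower(), b.lower()).ratio() >= threshold

suspicious_pairs = [
    (p1[0], p2[0])
    for p1, p2 in combinations(posts, 2)
    if abs(p1[1] - p2[1]) < 120 and similar(p1[2], p2[2])
]
print(suspicious_pairs)  # e.g. [('acct_a', 'acct_b'), ('acct_a', 'acct_c'), ...]
```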

This is situated within an overall context in which mainstream discourse on technology in South Asia normalizes harms that often affect minoritized communities as an acceptable trade-off for ‘progress.’ Populist, fascist, or authoritarian governments have used this normalization to further their own politics by presenting emerging technologies as an extension of their own totalizing logic. The emergence of generative AI has entrenched these relationships further by justifying normalized harms and framing the technology as free from human bias.

Mitigating AI-Generated Harms

Pre-existing harms have long defined South Asia’s digital ecosystem. Organized disinformation networks, platform-driven polarization, and a broken media ecosystem form the foundation upon which newer AI-driven challenges build. Responding to these crises requires rethinking what we expect from digitization and what trade-offs we are willing to accept.

Importantly, users must be given the right to exercise greater autonomy over their data, over the content surfaced on their timelines, and over how their interactions shape what is shown to others. Copyright legislation has increasingly helped authors and artists prevent their work from being used without consent to train AI models. Existing legislation, if wielded properly, can be rather effective, especially when combined with targeted action.

First, regulators must develop context-specific frameworks for transparency in political advertising, mandating the disclosure of AI-generated content and imposing stronger penalties for synthetic disinformation (the EU AI Act, for example, imposes such disclosure requirements). They can also require AI model developers to engage with external researchers and publish regular risk assessments. Social media companies must likewise be required to assess the prevalence of synthetic media on their platforms and give users an active, accessible choice about whether such content appears in their feeds.

Second, AI developers (and Big Tech in general) need to be treated as businesses with monopolistic tendencies. This requires breaking up platform monopolies in areas like online advertising, as well as ensuring that user data is not compromised through Big Tech’s opaque data-sharing relationships with third parties. In the European Union, the combination of the Digital Services Act (DSA) and the Digital Markets Act (DMA) has attempted to do this by constraining the market power of so-called gatekeeper platforms and enforcing transparency and accountability, including in algorithmic decision-making. South Asian countries should explore, both individually and at a regional level, ways of regulating market access for Big Tech.

Third, civil society and academic institutions must advance media and digital literacy programs that move beyond fact-checking and instruction in the use of AI tools. This means equipping people to critically interpret recommendation systems, recognize manipulative engagement tactics, and understand the political economy of digital platforms.
