AI-Enabled Hate and Repression as India Hosts AI Impact Summit 2026

New Delhi, India (February 11, 2026): As India prepares to host the AI Impact Summit 2026, the first global AI summit held in the Global South, the Center for the Study of Organized Hate (CSOH) and the Internet Freedom Foundation (IFF) today released a policy report examining the stark disconnect between India’s official AI rhetoric and the ground reality of AI-enabled hate, discrimination, surveillance, repression, and violence against minority and marginalized communities, all occurring in an overall environment of democratic backsliding.

The report, “India AI Impact Summit 2026: AI Governance at the Edge of Democratic Backsliding,” documents how generative AI tools are being weaponized, including by Prime Minister Narendra Modi’s Bharatiya Janata Party (BJP), to demonize and dehumanize religious minorities, while opaque AI systems deployed by the state are enabling mass surveillance, exclusion from essential services, and deletion of voters from electoral rolls. 

The policy report arrives as global leaders, technology executives, and civil society representatives gather in India for a summit officially centered on “Democratizing AI and Bridging the AI Divide” through the pillars of “People, Planet and Progress.” However, the report reveals a troubling pattern of AI deployment that undermines democratic rights and targets vulnerable communities.

The report demonstrates how systematically the ruling government and its proxies have used AI-generated content on official social media accounts to spread divisive and dehumanizing anti-minority content, including videos depicting violence against visibly Muslim men and stoking fears of demographic change. Just one week before the Summit, the BJP’s Assam unit uploaded an AI-generated video showing the state’s Chief Minister shooting at two visibly Muslim men, titled “No Mercy.” Beyond online hate, law enforcement agencies across multiple states are deploying facial recognition technology, predictive policing algorithms, and AI-powered surveillance systems without independent oversight, judicial authorization, or transparency.

Most alarmingly, Devendra Fadnavis, the BJP Chief Minister of Maharashtra state, recently announced the development of an AI tool, in collaboration with the Indian Institute of Technology Bombay, to detect alleged Bangladeshi immigrants and Rohingya refugees through language-based verification analyzing speech patterns, tone, and linguistic usage. Experts have criticized building such a tool to distinguish nationalities, warning it will become another instrument to discriminate against the already persecuted Bengali-speaking Muslim community and low-income migrant workers from Assam and West Bengal. This comes amid reports of forcible deportations of Bengali-speaking Muslim citizens to Bangladesh on suspicion of being illegal immigrants, without due legal process.

At the heart of these concerns lies India’s governance approach itself. The AI Governance Guidelines released in November 2025 favor voluntary self-regulation over binding accountability mechanisms, explicitly stating that “all other things being equal, responsible innovation should be prioritised over cautionary restraint.” While the guidelines acknowledge the need to safeguard vulnerable groups, they fail to address specific harms faced by religious minorities, Dalit, Bahujan, and Adivasi communities, and sexual and gender minorities. The approach places the impossible burden of gathering evidence and challenging powerful AI systems on the very communities most harmed by them, without mandating the transparency obligations that would make such challenges feasible.

The organizations call on global leaders attending the Summit to demand and commit to rights-respecting regulation with clear obligations across the AI value chain, prohibit the use of AI for mass surveillance and predictive policing, require transparency and independent oversight for all public-sector AI deployments, and center the voices of affected communities in AI governance discussions. States must move beyond voluntary commitments from tech companies and urgently recognize the need for robust regulation to address harms arising from the design, development, and deployment of AI systems.

The full policy report is available for download here.