Introduction
India is all set to host the AI Impact Summit 2026 from February 16 to February 20, 2026. This will be the fourth convening in the series of global AI summits and the first to be held in the Global South. The summit follows the AI Safety Summit hosted at Bletchley Park in 2023, the AI Seoul Summit in 2024, and the AI Action Summit hosted in Paris in 2025. Within this landscape, the AI Impact Summit 2026 is viewed by India as an opportunity to lead a consensus-oriented dialogue across divergent approaches to AI governance put forward by the European Union, the United States, and China, while presenting an alternative vision that foregrounds the priorities of the Global South.
India’s official vision for the Summit is centered on “Democratizing AI and Bridging the AI Divide” through the three foundational pillars, or sutras, of “People, Planet and Progress.” The key focus of the Summit is on enhancing AI access across the Global South while ensuring that AI acts as a catalyst for Global South leadership.
While the deliberations and outcomes of the Summit are being keenly observed, it is important to critically examine the state of AI discourse, deployment, and governance in India beyond the official rhetoric surrounding the Summit. Given the democratic backsliding under the Bharatiya Janata Party’s (BJP) ethnonationalist regime, it is important to assess how the rights of minorities and marginalized communities are being impacted by the expanding and unregulated deployment of AI systems.
The objective of this policy report is to provide a bird’s-eye overview of:
- Key deliberations and outcomes of the previous AI Summits;
- The discourse on AI governance and the regulatory ecosystem in India;
- The state of AI use in India and associated concerns regarding its impact on minority and marginalized groups through: (a) the weaponization of generative AI for the demonization and dehumanization of religious minorities; (b) the deployment of AI systems for state surveillance; (c) harms emanating from discrimination and exclusion in access to public services; and (d) risks from the deployment of algorithmic systems in elections.
- Recommendations for states, industry, and civil society.
This policy report does not provide a comprehensive study of the wide range of AI governance challenges in India, but rather offers a concise background on the contemporary discourse in the run-up to the India AI Impact Summit, while highlighting key concerns raised by civil society. We hope this document can inform more in-depth policy discussions and future research.
Overview of AI Summits
In late 2022, OpenAI released ChatGPT for public use; it quickly garnered one of the fastest-growing user bases on record and triggered what has been called an AI arms race. This fuelled hype around AI’s future capabilities and mainstreamed doomsday concerns around artificial general intelligence. Public declarations from technologists warned of AI’s existential risks to humanity. Parallel to these developments, global policy concerns around AI governance increasingly invoked the language of “AI safety,” even as its meaning and scope remained deeply contested.
These rising concerns around “AI safety” brought together state leaders, Big Tech executives, and civil society in the United Kingdom for the first global AI summit, hosted at Bletchley Park in 2023. The summit produced the Bletchley Declaration, signed by 28 countries, including the USA, China, and India, along with the European Union. In the aftermath of the summit, several countries across the globe, including India, announced the setting up of AI Safety Institutes (AISIs).
The declaration focused on enhancing transparency obligations for private companies developing frontier AI systems, alongside the development of appropriate evaluation metrics and tools for safety testing. To advance this agenda, the AI Seoul Summit followed in 2024, where several legacy and newer companies, including Google, Meta, and OpenAI, adopted voluntary frontier AI safety commitments requiring signatories to publish “a safety framework focused on severe risks” by the time of the AI Action Summit in France. However, the dominance of the speculative existential-risk narrative in AI safety drew sharp criticism for shifting attention away from the need to regulate the current and real privacy, fairness, transparency, and ethical harms that AI systems pose to society.
The 2025 Paris AI Action Summit, co-chaired by France and India, marked a shift away from concerns around speculative catastrophic risks towards a wider agenda for an “open, multi-stakeholder and inclusive approach” to enable “human rights-based, human-centric, ethical, safe, secure and trustworthy” AI. The Summit also discussed the environmental impact of AI, the future of work, and launched an initiative for public interest AI.
The resultant Statement on Inclusive and Sustainable Artificial Intelligence for People and the Planet was signed by 58 countries, along with the EU and the African Union Commission. However, the Paris summit failed to build consensus among major powers on AI regulation, as the US and UK did not sign the declaration. While the Paris summit’s vision on human rights, sustainability, and open public-interest AI was a welcome step, the lack of consensus and absence of real measures for accountability and sustainability have prevented meaningful action. The Summit also failed to critically challenge the entrenchment of Big Tech power and the monopoly of a few companies in the AI lifecycle and value chain.
While the AI summits have emerged as important sites for multilateral deliberation on AI governance, they are not anchored within any international institutional framework. Their outcomes are typically joint declarations endorsed by participating states. This absence of institutional grounding has resulted in agendas shaped by prevailing geopolitical and economic considerations. It has also led to concerns that summits risk becoming arenas of industry lobbying dominated by Big Tech interests, shifting the focus from regulation to voluntary standards. Indeed, governments globally have been moving away from regulation towards models of self-governance, on the reasoning that this is friendlier to innovation. The UK renamed its AISI the AI Security Institute in 2025, and the US reorganized its AISI as the Center for AI Standards and Innovation (CAISI).
Discourse on AI Governance in India
India does not have a comprehensive, specialized AI regulatory framework comparable to the EU AI Act. The overall policy position largely favors self-regulation focused on promoting the responsible adoption of AI systems for socio-economic development. The Ministry of Electronics and Information Technology released the India AI Governance Guidelines in November 2025 in the run-up to the AI Summit. Though this is not formally stated, the guidelines effectively supersede prior AI policy efforts such as NITI Aayog’s “National Strategy for Artificial Intelligence,” sectoral frameworks by SEBI, TRAI, and CCI, and MeitY’s own earlier AI initiatives.
These guidelines can be viewed as India’s primary framework for AI governance, favoring a hands-off approach to regulation in order to promote technical innovation. They outline seven ‘sutras,’ or guiding principles, applicable across all sectors: Trust; People First; Innovation over Restraint; Fairness and Equity; Accountability; Understandable by Design; and Safety, Resilience and Sustainability.
These principles will shape India-specific risk frameworks, voluntary commitments, and standards for safe, responsible, and accountable AI. The guidelines do not envisage adopting specialized AI regulation, at least in the medium term, claiming that “a separate law to regulate AI is not needed given the current assessment of risks.” Instead, they recommend relying on existing regulatory frameworks to address harms from AI systems, supplemented by targeted amendments wherever necessary, while preserving innovation.
Universal access to AI infrastructural resources to facilitate innovation and adoption of AI systems is another key concern of India’s AI governance framework. The AI Governance Guidelines lay out a “techno-legal” approach to governance and recommend integrating Digital Public Infrastructure (DPI) with AI to achieve these ends. The Office of the Principal Scientific Advisor to the Government of India has also released whitepapers on “Democratising access to AI infrastructure” and “Strengthening AI Governance through Techno-legal Framework” to this end, in the months preceding the Summit.
Below is a summary of the prevalent discourse on AI governance in India in the run-up to the Summit:
AI for Socio-economic Development through Private Entrepreneurship
The India AI Governance Guidelines released in November 2025 view AI as a potential driver of economic growth and a simultaneous enabler of inclusive development. To accomplish this, they aim to promote innovation through private entrepreneurship and to increase the adoption and diffusion of AI across sectors such as health, education, and agriculture. It must be noted that these sectors traditionally fall within the purview of the welfare state and are increasingly subjected to neoliberal policy changes and technocratic interventions. The “AI for social good” narrative must thus be viewed with caution, as it may lead to the datafication and commodification of the poorest citizens while opening up welfare services to experimentation by the private sector.
Experts warn that projecting India as the “use case capital” of the world might neither result in real gains for people experiencing poverty, nor solve complex socio-economic developmental problems, but instead legitimize exploitative and extractive marketization of the poor as data sources, testing grounds, and subjects for private startups and global Big Tech.
For instance, a study analyzing the use of AI-enabled automated diagnostic models in Indian healthcare highlighted that on-the-ground deployment of such systems combines data collection for training with patient treatment, effectively denying underserved communities the right to access healthcare without being subject to algorithmic experimentation. Practitioners were found not to prioritize information-sharing with patients from rural and disadvantaged economic backgrounds, and any consent obtained was neither informed nor freely given. The study also highlights the dangers of allocating limited resources to developing “spectacular technologies” rather than prioritizing structural reforms to achieve universal healthcare outcomes.
Innovation without Adequate Accountability Mechanisms
The AI Governance Guidelines aim to promote innovation and adoption of AI systems while mitigating risks to society. However, this risk mitigation is envisioned primarily through voluntary measures, including industrial codes of practice, technical standards, self-certifications, and sector-specific guidelines. The Guidelines deliberately reject “compliance-heavy regulation” to promote responsible innovation at the nascent stage of the ecosystem.
The Guidelines state that, “all other things being equal, responsible innovation should be prioritised over cautionary restraint.” This view, which posits regulation as a barrier to technological innovation, has been deeply contested over the years, and many scholars recommend transparency obligations and safety guardrails, at the very least for high-risk AI use cases.
The Guidelines also recommend relying on existing laws, such as the Information Technology Act 2000 and the Bharatiya Nyaya Sanhita 2023, to address harms from AI. They further propose conducting a regulatory-gap analysis and amending existing legislation to address emerging AI harms in the medium term, while continuing to highlight the encouragement of innovation as an important consideration in any such amendments. While the Guidelines acknowledge the need for accountability mechanisms and clarification of liability regimes, they fall short of specifying any concrete recommendations, emphasizing instead that all accountability mechanisms must be balanced against innovation.
The Guidelines further refer to “India’s unique social, economic, and cultural context” and the need to safeguard “vulnerable groups” from risks posed by AI systems. However, they do not discuss the specific harms faced by religious minorities, Dalit, Bahujan, Adivasi communities, and sexual and gender minorities. Annexure 4 outlines existing statutory laws that can address specific AI harms. For instance, it enumerates laws like the Rights of Persons with Disabilities Act 2016, Transgender Persons (Protection of Rights) Act 2019, Code on Wages 2019, and the Scheduled Castes and the Scheduled Tribes (Prevention of Atrocities) Act 1989 as applicable statutory regulations to address “discrimination in hiring decisions using AI recruitment tools.” However, in the absence of legally mandated transparency obligations for designers, developers, and deployers of AI systems, this approach places the onus on members of marginalized communities to gather evidence of discrimination and to challenge powerful AI systems in courts. In many cases, citizens may remain unaware that they are being subjected to profiling and algorithmic decision-making in recruitment processes.
Moreover, in the absence of a clear liability regime, citizens and courts will find it hard to affix responsibility. For example, in cases relating to discriminatory hiring decisions across different stakeholders in the AI value chain, courts will have to determine whether the deployers (or the hiring company) should be held responsible for failing to undertake adequate human oversight and due diligence, or whether the developers should be held responsible for failures in bias mitigation and possibly incomplete user manuals. Thus, reliance on existing regulation without imposing enforceable accountability obligations on AI systems becomes effectively meaningless in practice.
Furthermore, the Guidelines provide no recommendations for independent oversight of government or public-sector AI deployments for welfare disbursement or law enforcement. In fact, the Digital Personal Data Protection Act 2023 has weakened the Right to Information Act by imposing a blanket prohibition on the disclosure of “personal information,” which can enable state officials to deny critical information under the guise of privacy.
Techno-legal Approach to AI Governance
The AI Governance Guidelines propose a “techno-legal” approach in response to systemic harms from AI systems. The whitepaper on techno-legal framework defines the framework as “the integration of legal instruments, rule-based conditioning, regulatory oversight and technical enforcement mechanisms embedded with the technical architecture by design. This approach ensures that governance is not merely a set of external constraints (or post-facto rules) but an intrinsic feature of any AI system, adaptable to evolving risks and contexts.” In effect, the “techno-legal” approach advocates for a set of procedural and technical safeguards embedded throughout the AI lifecycle to prevent and mitigate potential harms from AI systems.
However, the whitepaper falls short of recommending any statutory obligations and leaves implementation to incentives, voluntary standards, and sectoral guidelines. As per the whitepaper, the goal of regulatory mechanisms should be to “provide guidance, hear grievances and pronounce a decision on complaints.” This, in some ways, contradicts what the techno-legal approach earlier claims to achieve, namely ex-ante, system-level accountability. It also restricts regulatory enforcement to grievance redressal and places the burden of challenging AI’s systemic harms on impacted communities.
Deploying technical solutions without regulatory oversight can not only be ineffective but can also lead to adverse outcomes for vulnerable communities. This is demonstrated in LibTech India’s study, which highlights the exclusion of workers from employment under the Mahatma Gandhi National Rural Employment Guarantee Act (MGNREGA) due to mandatory authentication through Aadhaar, India’s national biometric digital identity system, which assigns a unique 12-digit identification number to residents based on their biometric and demographic data.
Democratizing AI through Access to Infrastructure
AI development requires computing infrastructure, advanced semiconductors, quality datasets, and models. All of these resources are monopolized by a few companies located in the Global North. The whitepaper on “Democratising access” to AI infrastructure asserts India’s vision of treating compute power, data repositories, and model ecosystems as “shared resources so that innovators everywhere can participate in shaping the AI age.” This is envisioned through state-led investments in developing national capacity in AI infrastructure and governance frameworks that treat data, compute, and models as “digital public goods.”
In March 2024, India AI Mission was launched with a budget of over Rs 10,300 crores (1.25 billion USD) spread over five years. The mission focuses on improving access to computing, quality data, skilling, startup financing, and collaboration between the public and private sectors for AI innovation. For instance, the India AI Mission provides access to subsidized compute through a national GPU pool and some of the largest GPU subsidies have been allocated for sovereign foundational model development led by local startups.
However, the recent budget saw cuts to the India AI Mission, possibly pointing towards greater interest in attracting private investment to build infrastructure. This was also reflected in tax incentives for data centers, including a tax break until 2047 for foreign cloud providers using Indian data centers. India has emerged as one of the largest consumer markets outside the US for major AI companies and has consequently attracted significant investments for AI infrastructure.
This demonstrates what experts have noted as the inherent challenges in India’s goal of AI sovereignty and developing a national AI stack, while also being dependent on foreign investments by tech companies, especially when AI companies are packaging their investments in the form of “Sovereignty as a Service.” This raises questions about whether India’s vision to democratize access will challenge global monopolies and power concentration, or whether it will instead create new monopolies domestically while still relying on foreign investment.
It is also important to consider the environmental impact of expanding AI infrastructure, particularly the impact of data center construction on local communities. Data centers have recently faced pushback from local communities in the US because operating them comes with huge energy requirements, a majority of which is likely to be met by fossil fuels. This not only strains local power grids but also contributes to increased greenhouse gas emissions and the air pollution crisis faced by major cities. Data centers additionally require vast amounts of water for cooling, which can threaten local water supplies in a country facing water stress, where access to safe drinking water remains unequal among social groups.
This is compounded by a lack of transparency around water usage by data centers. While the AI Governance Guidelines emphasize Safety, Resilience, and Sustainability, they do not provide concrete, actionable policy recommendations to assess and mitigate the environmental impact of AI. Similarly, the whitepaper on democratizing access to AI acknowledges resource-efficient development of AI as a challenge and suggests incentivizing data centers to adopt energy-efficient cooling systems and hybrid power sources.
However, a truly democratic vision would fundamentally rethink AI infrastructure expansion around questions of sustainability and actively engage local communities and environmental experts in environmental and social impact assessments before the construction of data centers. It would also demand more transparency from the private sector on the energy and water sources for data centers, consumption data, and sustainability plans. In contrast, the past decade has seen a steady weakening of environmental regulation, including environmental impact assessment, which is often reduced to a bureaucratic exercise that fails to take into account the full scale of the economic and environmental impact of proposed industrial projects.
Integrating AI into Digital Public Infrastructure
A core emphasis of India’s approach to AI governance is its focus on Digital Public Infrastructure (DPI), which includes the national digital identity Aadhaar, the Unified Payments Interface (UPI), and the data exchange framework called the Data Empowerment and Protection Architecture (DEPA). The AI Governance Guidelines recommend integrating AI into DPI for socio-economic development. The whitepapers on democratizing AI and the techno-legal approach to AI also mention DPI as a cornerstone to enable these respective goals. Although practical implementations of this integration are still nascent, proposed systems include the Open Cloud Compute initiative that will provide compute power through a network of micro data centers operating on common standards.
Although the conception of a public digital infrastructure that challenges Big Tech hegemony and provides democratic access to AI is promising, experts warn that the multiplicity of meanings ascribed to the broad term DPI in international conversations can obscure the differences between state-dominated and more decentralized, community-driven models.
India’s DPI has been state-led and has in the past raised concerns around privacy, state surveillance, and exclusion in welfare distribution. Without adequate safeguards, integrating AI into this infrastructure raises the risk of newer and more pervasive forms of surveillance. Researchers have also pointed out that the competitive effects of DPI need further examination, as it can also lead to monopolization in the market, and that DPI does not always come with effective accountability mechanisms.
Regulation of Synthetic Content
Although the larger impetus appears to be towards a laissez-faire approach favoring self-regulation, there have been instances where the government appeared to favor a more “direct and interventionist” regulatory approach, mostly with respect to synthetically generated content. In October 2025, the Ministry of Electronics and Information Technology (MeitY) released a draft amendment to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, and opened it for public consultation.
Then, on February 10, 2026, MeitY notified the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2026, scheduled to take effect on February 20, the day the Summit ends. The amendments define a new category of synthetically generated information (SGI) and impose due diligence obligations on intermediaries that enable the creation or dissemination of SGI. They also mandate additional obligations for significant social media intermediaries (SSMIs) that enable the uploading and dissemination of such SGI.
The latest amendments have been introduced with the stated aim to address harms from deepfakes, misinformation, and other unlawful synthetically generated content that can infringe the privacy of citizens or undermine the national security and integrity of the nation. However, these amendments have raised concerns around both the efficacy of the proposed measures to address harms and the possibility of being misused to harass, intimidate, and retaliate against innocuous users, thereby creating significant risks to privacy and freedom of expression.
The draft 2025 amendments’ definition of SGI was broad and ambiguous, could have covered a large number of filtering and editing tools, and failed to distinguish harmful content from benign uses. Consequently, civil society warned that such a broad scope could have a chilling effect on legitimate speech, including artistic expression, political satire, and journalism.
The subsequent 2026 amendments narrow the scope of SGI to “audio, visual or audio-visual information which is artificially or algorithmically created, generated, modified or altered using a computer resource, in a manner that such information appears to be real, authentic or true and depicts or portrays any individual or event in a manner that is, or is likely to be perceived as indistinguishable from a natural person or real-world event.”
It also excludes: (a) routine or good-faith editing and technical correction that does not misrepresent or change the meaning or context of the content; (b) routine or good-faith creation of educational or training materials and research outputs where such output “does not result in the creation or generation of any false document or false electronic record”; and (c) use of algorithms for “improving accessibility, clarity, quality, translation, description, searchability, or discoverability” that does not generate, alter or manipulate “any material part” of the underlying content. However, some concerns around the vagueness and overbreadth of the definition still persist. Although the amendments exempt educational and research outputs, there is no explicit exemption for journalistic, artistic, or satirical content.
The amendments impose due diligence obligations on intermediaries that allow the creation, modification, publication, or dissemination of SGI. They mandate such intermediaries to deploy “reasonable and appropriate technical measures” to prevent users from creating unlawful content, including non-consensual intimate imagery, child sexual abuse material, and false and deceptive portrayals of natural persons or real-world events. Platforms must also prominently label or provide audio disclosure for all lawful SGI and embed permanent metadata or provenance information, to the extent technically feasible, including unique identifiers that identify the intermediary used to create such synthetic content. Platforms must not allow the modification or removal of these labels or metadata. The efficacy and technical feasibility of the labelling and provenance requirements remain disputed, and the provenance requirements under the law do not provide safeguards to protect user privacy and anonymity in benign uses of SGI, which could lead to self-censorship among marginalized communities.
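To make the labelling and provenance concern concrete, the minimal sketch below (in Python, using the Pillow imaging library; the field names “sgi_label” and “generator_id” are hypothetical and not drawn from the Rules) shows how a metadata-based provenance label can be attached to an image and how simply re-encoding the file strips it, which is one reason the robustness of such requirements is contested.

```python
# Minimal sketch: a metadata-based provenance label and how easily it is lost.
# Assumes the Pillow library; "sgi_label" and "generator_id" are hypothetical
# field names, not taken from the amended Rules.
from PIL import Image, PngImagePlugin

def save_with_provenance(img: Image.Image, path: str, generator_id: str) -> None:
    """Embed a provenance label in PNG text metadata before saving."""
    meta = PngImagePlugin.PngInfo()
    meta.add_text("sgi_label", "synthetically-generated")
    meta.add_text("generator_id", generator_id)
    img.save(path, "PNG", pnginfo=meta)

def read_provenance(path: str) -> dict:
    """Return whatever provenance metadata survives in the file."""
    with Image.open(path) as img:
        return dict(getattr(img, "text", {}) or {})

if __name__ == "__main__":
    img = Image.new("RGB", (64, 64), "white")   # stand-in for generated content
    save_with_provenance(img, "labelled.png", "example-tool-v1")
    print(read_provenance("labelled.png"))      # label is present

    # Re-encoding without explicitly copying the metadata silently drops the label.
    with Image.open("labelled.png") as src:
        src.copy().save("stripped.png", "PNG")
    print(read_provenance("stripped.png"))      # {} -- label gone
```

Cryptographically signed provenance schemes are harder to strip than plain metadata, but they too depend on wide adoption and fail once content is screenshotted or regenerated, so labelling cannot carry the weight of enforcement on its own.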
The amendments also impose a three-hour timeline on intermediaries to disable access to unlawful SGI upon “actual knowledge,” i.e., through a court or executive order. This short timeline risks incentivizing over-removal of content to avoid liability and can have serious implications for freedom of expression. Further, the amendments oblige intermediaries to take expeditious and appropriate action even when they become aware of the creation or dissemination of unlawful SGI of their own accord or through grievance complaints. This may include immediate removal or blocking of content, suspension or termination of user accounts, and identification and disclosure of the identity of the violating user to the victim-complainant and/or the appropriate authority, wherever applicable. This raises concerns about potential misuse, especially since the sharing of user information with state authorities does not require a prior judicial order, and can result in significant risks to user privacy and safety.
Additional obligations on SSMIs include obtaining user declarations, verification of these declarations by means of “reasonable and appropriate” technical measures, and displaying prominent labels or notices for content that is verified to be synthetically generated.
Overall, the extensive obligations may encourage proactive monitoring of content which may lead to collateral censorship as intermediaries will err on the side of caution to avoid liability. Furthermore, experts have questioned both the legal validity of expanding the definition of intermediaries in the IT Act to include Generative AI tools and the legitimacy of expanding the due diligence obligations for safe harbor to regulate SGI.
In the past, MeitY has issued advisories to intermediaries reiterating their obligation to remove synthetically generated content. One such advisory was issued in the aftermath of a political uproar over Gemini AI’s response to a question about whether Prime Minister Narendra Modi is a fascist; a union minister characterized Gemini’s response as a violation of India’s intermediary guidelines.
This was followed by a hasty initial advisory (on March 1, 2024) that mandated platforms to obtain the government’s explicit permission before deploying under-tested or unreliable AI models and to label them with a disclaimer on the “possible and inherent fallibility or unreliability of the output generated.” However, after pushback from industry, the advisory was withdrawn and a new advisory was subsequently issued, reversing the “explicit permission” mandate.
The new advisory continues to require that under-tested and unreliable AI models be made available in India only after they are labelled to inform users of the “possible inherent fallibility or unreliability of the output generated.” It also asks intermediaries to ensure that the use of such models “does not permit any bias or discrimination or threaten the integrity of the electoral process.” The advisory has been criticized for its unclear scope and the ambiguity of terms like “under-tested” and “unreliable.” Moreover, the legal validity of these advisories and their enforceability remain disputed.
AI-Enabled Targeted Hate and Discrimination
Generative AI and the Production of Harmful Content
Within the past decade, India has been witnessing an unprecedented divisive political discourse where hatred against Muslim and Christian minorities is not only normalized in the public sphere, but such hateful expressions are lauded and sanctioned by the ruling leadership in overt and covert ways. Social media and private messaging platforms have been replete with content portraying minorities as a threat to the Hindu nation-state, community, family, and morality. This content is frequently framed in terms of conspiracy theories such as “love jihad,” “land jihad,” “vote jihad,” and “population jihad,” as well as mobilization around issues like cow protection and temple-mosque disputes, which often spill into real-world violence.
In recent years, the growing accessibility of generative AI models producing text-to-image and text-to-video outputs has enabled a new wave of online hate facilitated by photorealistic images, videos, and caricatures that reinforce and reproduce harmful stereotypes. CSOH’s report on AI-generated Islamophobic content on social media highlighted the prevalence of images depicting Muslim men as violent, deviant, and criminal, engaging in violent acts of rioting in public life and incestuous sexual relationships in private life. The study also revealed the dangerous trend of dehumanizing and fetishizing Muslim women through sexualized imagery, often depicting them in intimate positions with visibly Hindu men. A Decode investigation similarly highlighted the existence of Facebook pages dedicated to AI-generated images sexualizing Muslim women, with a majority of these images created using Meta AI.
Furthermore, observers have noted that incidents of public tragedy, including terrorist attacks or railway accidents, are exploited to circulate viral AI-generated content that demonizes and vilifies the Muslim community, portraying them as antagonists to a suffering Hindu community. Soon after the November 2025 Delhi blast, in which at least fifteen people were killed, videos depicting Muslim doctors working in laboratories with explosives began to circulate on social media. In another instance, AltNews reported users sharing hyper-realistic AI-generated images of corpses in the aftermath of the Pahalgam attack in April 2025, with many images accompanied by anti-Muslim commentary. Similarly, railway accidents have sparked the “rail jihad” conspiracy theory with synthetically generated images and caricatures of Muslim men placing rocks on railway tracks.
Generative AI has also emerged as a convenient tool for the BJP to demonize, dehumanize, and incite violence against minorities. The ruling party’s weaponization of social media to spread Hindu nationalist propaganda and silence dissenters has been well documented. Just a week before the India AI Impact Summit, the BJP’s Assam unit uploaded an AI-generated video on its official X account depicting the Chief Minister of Assam, Himanta Biswa Sarma, shooting at two visibly Muslim men, under the title “No Mercy.” One of the individuals in the framed picture appeared to be a morphed photo of the opposition leader, Gaurav Gogoi, wearing a skullcap. The video was deleted after widespread criticism. However, this post was not an anomaly; it is part of a broader pattern of using AI-generated content in divisive, polarizing electoral campaigns. In September 2025, the same Assam state unit of the BJP shared an AI-generated video depicting visibly Muslim men and women at major landmarks across Assam in a brazen attempt to stoke fears of demographic change. The video claimed that Assam would become 90% Muslim if voters did not choose wisely and the ruling BJP lost the upcoming election. A petition to the Supreme Court noted that this video had been viewed over 4.6 million times.
The Assam and Delhi state units of the BJP have used official social media accounts to circulate Generative AI videos targeting opposition leaders like Mamata Banerjee, Chief Minister of West Bengal, and Gaurav Gogoi, opposition leader from Assam. It is worth noting that both West Bengal and Assam are slated for assembly elections in 2026, which may have contributed to the production of these videos. A common theme across several images and videos is the implication of a conspiratorial collusion between opposition leaders and visibly Muslim people, who are often depicted as “infiltrators” posing a threat to national security. In one video, shared by the Delhi BJP unit, visibly Muslim men, women, and even children are dehumanized as mosquitoes being chased away by the Election Commission of India’s controversial Special Intensive Revision (SIR) of electoral rolls.
These are not isolated instances. In a study of X posts on Assam BJP’s official account, AltNews found that nearly 40% of the posts targeted Muslim minorities. A significant proportion of these posts included synthetic AI-generated images and videos accompanied by communal slurs. Importantly, this type of generative AI imagery does not exist in a vacuum: it reflects, reinforces, and normalizes the very real and tragic consequences of disenfranchisement, dehumanization, and deportation faced by some of the poorest and most vulnerable Muslim communities. In the aftermath of the Pahalgam terrorist attack, the Chhattisgarh state unit of the BJP shared a Ghibli-style animated picture of a mourning woman next to her deceased husband, accompanied by the caption “Dharam poocha, jaati nahi” (They asked about religion, not caste). The ruling party’s use of the viral Ghibli trend to invoke a message of religious division in a time of tragedy drew intense criticism. Similarly, after state security forces killed Maoists in Chhattisgarh, the BJP Karnataka official handle shared a synthetically generated image of Union Home Minister Amit Shah holding a cauliflower at the tombstone of Naxalism. The use of cauliflower imagery has been linked to genocidal calls against Muslim minorities, referencing the Logain massacre during the 1989 Bhagalpur riots, where hundreds of Muslims were brutally murdered and their bodies buried under cauliflower saplings.
The brutality depicted in these images and videos is often contrasted with the mockingly humorous tone of the accompanying messaging or the emotive background scores that memeify and normalize such extreme calls to violence.
The unchecked dissemination of harmful content must also be seen as a failure of social media and generative AI platforms in enforcing their terms of service and community guidelines. Generative AI tools lack adequate safety guardrails, especially in local languages and social contexts. An investigation revealed the lack of safety guardrails in popular text-to-image tools, with Meta AI, Microsoft Copilot, ChatGPT, and Adobe Firefly responding to harmful prompts and generating imagery reinforcing stereotypes and demonizing the Muslim community. Meanwhile, X’s AI assistant Grok has been used to create non-consensual nude and sexually explicit images of women.
An MIT Technology Review investigation found rampant caste bias in OpenAI’s GPT-5. Researchers found that Sora generated stereotypical and exoticizing images of caste-oppressed Dalit communities: when prompted to depict “Dalit jobs,” it produced images of dark-skinned men cleaning manholes or holding brooms and collecting garbage. Another study on covert harms in LLM-generated content found systemic bias in open-source LLMs, with most models studied generating more harmful speech in caste-based conversations than in race-based conversations. Similarly, a study on Stable Diffusion found depictions of Dalits as impoverished individuals performing manual labour, or as a group of protesters.
Deployment of AI Systems for State Surveillance
Recently, Devendra Fadnavis, the Chief Minister (CM) of Maharashtra, the second most populous state in the country, announced the development of an AI tool in collaboration with the Indian Institute of Technology Bombay (IIT Bombay) to detect alleged Bangladeshi immigrants and Rohingya refugees across the state. The said tool is reported to use language-based verification to analyze “speech patterns, tone and linguistic usage” to assist law enforcement in the initial screening of suspected illegal immigrants. As per the CM’s statement, the tool had reached 60% accuracy and would be rolled out in a few months with 100% accuracy.
But linguistic experts doubt the possibility of building an AI tool to distinguish nationalities, given the shared culture and history of Bengal and the resultant overlap of Bengali dialects spoken in India and Bangladesh. It is thus extremely likely that this tool could become another instrument to discriminate against the highly persecuted Bengali-speaking Muslim community and low-income migrant workers from Assam and West Bengal. This comes in the backdrop of the forcible deportations of thousands of Bengali-speaking Muslim citizens of India to Bangladesh on suspicion of being illegal immigrants, without due legal process. India has also drawn condemnation for the inhumane deportation of Rohingya refugees who fled a genocide in Myanmar. This is accompanied by the ubiquitous demonization of Bengali-speaking Muslim working class laborers who have migrated to several metropolitan areas in search of work and now regularly face demolitions, detentions, police brutality, and harassment from Hindu nationalist vigilantes in BJP-ruled states.
Another growing aspect of AI usage by law enforcement agencies is predictive policing using “AI models [to] analyze crime patterns, high-risk areas, and criminal behaviour, enabling law enforcement to take proactive measures.” Law enforcement agencies across the country appear to be in a race to adopt what is being called a proactive/predictive policing model instead of a traditional reactive policing approach.
Recently, the state of Andhra Pradesh launched the “AI4AP Police” pilot across three districts. Rourkela Police in Odisha announced the launch of Project SHIELD (Smart Habitual-offender Intelligence & Early Law-enforcement Detection), which includes a habitual offender database and a suspect predictor algorithm. Maharashtra has likewise created a special-purpose vehicle for AI policing called MARVEL (Maharashtra Advanced Research and Vigilance for Enhanced Law Enforcement) and recently launched an AI-enabled cybercrime tool called MahaCrime OS in collaboration with Microsoft.
These developments arise in the backdrop of multiple international studies that have shown the ineffectiveness and inherent opacity of such algorithms, which can use race, ethnicity, and religion as determining variables for criminality due to biases in historical data. This is especially relevant given the Indian criminal justice system’s disturbing history of entrenched casteism and identity-based notions of criminality, reflected across police records, which are now part of the Crime and Criminal Tracking Network & Systems National Database (CCTNS). The Vimukta communities, who were once notified as criminal tribes by the colonial administration, continue to face police harassment and surveillance under the administrative label of habitual offenders in several states. Notably, on multiple occasions, Indian police have been accused of collusion with rioters against Muslim, Sikh, and Christian minorities during sectarian strife.
For almost a decade, Delhi has been using the Crime Mapping Analytics and Predictive System (CMAPS), which relies on satellite imagery, CCTNS data, and real-time information from police hotlines to identify and predict crime hotspots. An ethnographic study conducted between 2017 and 2019 demonstrated that data inputs to the CMAPS system reflect historical biases based on caste, religion, gender, and class, resulting in the overpolicing of areas inhabited by vulnerable groups. The resultant feedback loop reinforces the biases of police officers and institutionalizes and legitimizes discrimination as data-driven, scientific policing. However, there exist no independent oversight and accountability mechanisms to monitor the effectiveness and fairness of these systems.
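The feedback-loop dynamic described in the study can be illustrated with a toy simulation, sketched below under assumed numbers: two areas with identical true incident rates, an initially biased record count, patrols concentrated on whatever the records make look like the “hotspot,” and new records generated mainly where patrols are present. The area names and figures are illustrative and do not describe CMAPS or any real deployment.

```python
# Toy simulation of the feedback loop described above, using assumed numbers.
# Two areas have the SAME true incident rate, but "area_B" starts with more
# recorded incidents because of historically biased policing. Patrols are then
# concentrated in whichever area the records make look like the "hotspot", and
# incidents are mostly recorded where patrols are present.

TRUE_INCIDENTS = 100          # actual incidents per period, identical in both areas
RECORD_RATE_PATROLLED = 0.50  # assumed share of incidents recorded in the patrolled area
RECORD_RATE_OTHER = 0.05      # assumed share recorded elsewhere

recorded = {"area_A": 50, "area_B": 60}   # biased starting point in the historical data

for period in range(1, 6):
    hotspot = max(recorded, key=recorded.get)      # allocate patrols by past records
    for area in recorded:
        rate = RECORD_RATE_PATROLLED if area == hotspot else RECORD_RATE_OTHER
        recorded[area] += TRUE_INCIDENTS * rate    # more patrols -> more records
    print(f"period {period}: hotspot={hotspot}, recorded={recorded}")

# area_B remains the "hotspot" every period and the gap in recorded incidents
# keeps widening, even though the true rates never differed.
```

Because the area with more historical records keeps receiving more patrols and therefore accumulates still more records, the initial bias is never corrected; it appears to be validated by the data.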
Facial recognition technology (FRT) is also being increasingly deployed by law enforcement throughout the country for a wide range of functions from crowd control to criminal investigations, raising concerns around mass surveillance in the absence of regulatory oversight. Law enforcement acquiring FRT to “tackle terror and criminal activities” in Jammu and Kashmir’s Kishtwar has raised concerns around the accuracy of such systems and their potential in amplifying bias in policing. Reportedly, the Jammu and Kashmir police have deployed facial recognition systems to flag suspected overground workers of militants.
India is home to some of the most surveilled cities in the world. Hyderabad stands out as one of the most heavily surveilled, with a dense network of CCTV cameras, and a command and control center equipped with live CCTV feeds and FRT systems. Reports of Hyderabad police photographing citizens in public spaces without consent or due process to match these images against centralized criminal databases have drawn criticism. Bengaluru has also created a vast network of advanced AI-powered CCTV cameras, equipped with real-time monitoring and FRT under its Safe City project.
Recently, Lucknow deployed over a thousand AI-enabled cameras that will generate real-time alerts to law enforcement upon detecting “subtle signs of distress – a wave for help or unusual gestures.” This system is deployed with the stated objective of preventing harassment of women and other vulnerable groups. This dystopian surveillance system could not only generate false alerts but also lead to a disproportionate invasion of citizens’ privacy. Experts warn that this surveillance network could be used to discriminate against Muslim minorities and target interfaith couples in a region that is witnessing increased state and vigilante violence. The Delhi police similarly announced a plan to install 10,000 AI-enabled cameras powered with FRT and distress detection under the Safe City Project.
Delhi Police’s use of Automated Facial Recognition System (AFRS), which was originally procured to aid the search for missing children, was brought to light during the 2019 anti-Citizenship Amendment Act protests, where an Indian Express investigation revealed the existence of multiple photo datasets, including “habitual protesters” and “rowdy elements” for criminal investigations and monitoring of sensitive public events. FRT was also used to identify suspects in the deadly 2020 North-East Delhi riots, where the police’s ineffective investigation has drawn criticism.
Being subjected to indiscriminate mass surveillance in public spaces violates the constitutional right to privacy and hampers citizens’ right to assemble and protest. The use of FRT in law enforcement risks automating and amplifying existing biases and can lead to the wrongful targeting of minorities and marginalized communities. In India, FRT is deployed in a complete legal and regulatory vacuum, without judicial pre-authorization or independent oversight. The lack of transparency in the procurement and use of FRT systems further means that there is little public information about their accuracy; the limited data available shows high error rates that can have a significant impact on the lives of those wrongfully identified, in a country where criminal cases take years, if not decades, and undertrials languish in prisons.
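A short back-of-the-envelope calculation, with assumed figures, shows why even apparently small error rates matter at the scale of mass screening: when very few people in a scanned crowd are actually on a watchlist, most alerts are false matches.

```python
# Back-of-the-envelope illustration of the base-rate problem in mass FRT
# screening. All numbers are assumed for illustration and do not describe any
# specific deployed system.

crowd_size = 100_000         # faces scanned at a public gathering
watchlisted_present = 10     # people in the crowd actually on the watchlist
true_positive_rate = 0.90    # assumed sensitivity of the system
false_positive_rate = 0.01   # assumed 1% false-match rate per non-matching face

true_alerts = watchlisted_present * true_positive_rate
false_alerts = (crowd_size - watchlisted_present) * false_positive_rate
precision = true_alerts / (true_alerts + false_alerts)

print(f"correct alerts:  {true_alerts:.0f}")
print(f"wrongful alerts: {false_alerts:.0f}")
print(f"share of alerts pointing at the right person: {precision:.1%}")
# Under these assumptions, roughly 99% of alerts flag the wrong person --
# each one a potential wrongful stop, interrogation, or detention.
```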
Across the world, civil society and policymakers have recognized the need to regulate and limit the use of FRT. The EU AI Act bans AI practices that it categorizes as posing “unacceptable risk,” including “real-time remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement,” unless their use is necessary for specific, limited objectives, such as the targeted search for victims of abduction and trafficking, and subject to safeguards including prior authorization from judicial or independent administrative authorities. Even retrospective FRT for law enforcement is classified as “high-risk” and is subject to risk assessments, transparency obligations, and independent authorization. Similarly, several states in the US have passed legislation that strictly limits the use of FRT by law enforcement.
AI in Welfare Delivery and Exclusionary Impacts
Recent years have witnessed the increasing integration of algorithmic systems into the public sector and the distribution of welfare services to citizens. This includes biometric identification through Aadhaar authentication to access welfare benefits and subsidies. While the government has often publicized the efficiency and cost savings from reducing subsidy leakages, on-the-ground reports over the years continue to reveal the exclusion of some of the most vulnerable populations. The Right to Food campaign has documented starvation deaths in Jharkhand, Uttar Pradesh, and Odisha linked to the denial of food rations due to failures of Aadhaar-based authentication. Notwithstanding the Supreme Court’s judgment that Aadhaar cannot be made compulsory for school admissions, many schools continue to insist on Aadhaar cards, resulting in children from poor, migrant, and Adivasi communities being denied their right to education.
Despite concerns about exclusion in the last decade, the deployment of AI systems to authenticate citizens’ identities in welfare delivery has continued to rise. Recently, the Ministry of Women and Child Development made facial recognition through the POSHAN app mandatory for accessing take-home rations under the Integrated Child Development Service Scheme (ICDS) from July 2025. The take-home rations under the ICDS provide nutritional support to some of the most vulnerable pregnant and lactating mothers, infants, and adolescent girls. This has raised concerns around exclusion, and the All India Federation of Anganwadi Workers and Helpers (AIFAWH), a union of workers tasked with last-mile distribution of these rations, has demanded an immediate rollback of the mandate, citing it as a violation of the National Food Security Act.
Several worker unions have also approached the Bombay High Court, challenging the order and outlining the practical difficulties and the excessive nature of the mandate. Overworked anganwadi (rural child care center) workers have expressed frustration and anger at the rigidity of the system and the disproportionate, excessive verification they have to conduct before distributing a single packet of ration.
Onboarding to the facial recognition system requires authentication through a one-time password (OTP) sent to the mobile number linked to the beneficiary’s Aadhaar. As per anganwadi workers using the system, both the OTP verification and the facial scan present challenges due to technical glitches in the app, the low accuracy of facial recognition (especially in poor lighting), and network connectivity issues. Further, many women, especially in rural India, do not have access to a personal phone, and the mobile numbers linked to their Aadhaar may belong to male relatives or be outdated.
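The verification chain described above can be summarized in the simple sketch below. It is not the actual app logic, and the threshold and field names are assumptions, but it makes explicit that a beneficiary is denied rations if any single step fails.

```python
# Illustrative sketch of the verification chain described above -- NOT the
# actual POSHAN app logic. The threshold and field names are assumptions.
# The point it makes explicit: rations are released only if *every* step
# succeeds, so each step is a potential point of exclusion.
from dataclasses import dataclass

FACE_MATCH_THRESHOLD = 0.80    # assumed similarity threshold, purely illustrative

@dataclass
class Beneficiary:
    network_available: bool           # connectivity at the anganwadi centre
    has_access_to_linked_phone: bool  # linked number may be outdated or a relative's
    otp_entered_correctly: bool
    face_match_score: float           # similarity score from the face-matching model

def can_receive_ration(b: Beneficiary) -> tuple[bool, str]:
    if not b.network_available:
        return False, "excluded: no connectivity for OTP or face verification"
    if not b.has_access_to_linked_phone:
        return False, "excluded: OTP goes to a phone the beneficiary cannot access"
    if not b.otp_entered_correctly:
        return False, "excluded: OTP verification failed"
    if b.face_match_score < FACE_MATCH_THRESHOLD:
        return False, "excluded: face not recognised (poor lighting, camera, model error)"
    return True, "ration released"

# A single weak link -- here a face-match score degraded by poor lighting --
# blocks an otherwise genuine beneficiary.
print(can_receive_ration(Beneficiary(True, True, True, face_match_score=0.62)))
```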
This system is likely to cause the widespread exclusion of marginalized pregnant and lactating women and infant children who are most in need of these rations. In the absence of any transparency and accountability mechanisms, however, the government can reframe these exclusions as a success story in weeding out corruption.
Apart from authentication, algorithmic systems are being deployed to determine and verify citizens’ eligibility for welfare or public services, and to de-duplicate or remove false beneficiaries. These systems operate in complete opacity, and often the affected citizens are unaware of their existence. Several state governments have been building massive family databases, collating information on citizens across government departments, to create a “single source of truth.” These databases contain personal demographic and socio-economic information, including community details, family relationships, land records, income, education, health, etc. These raise concerns around privacy and surveillance, especially given the broad exemptions for state collection and processing of personal data under India’s Data Protection Law.
These databases, built with the express purpose of delivering good governance, create significant risks of exclusion due to errors or biases that are harder to trace, challenge, and rectify. An investigative report disclosed how errors in Telangana’s Samagra Vedika system led to the denial of subsidized food rations to people living below the poverty line. Similarly, errors in Haryana’s Parivar Pehchan Patra database led to the denial of old-age and widow pensions to beneficiaries who were either mistakenly declared dead or erroneously marked ineligible. The state’s deployment of opaque algorithmic systems without public consultation, and in the absence of effective grievance redressal mechanisms, unfairly places the burden of proving their right to access public goods on citizens.
Deployment of Algorithmic Systems in Elections
Opaque algorithmic systems are increasingly being deployed in elections, with implications for the right to vote, especially for citizens belonging to marginalized communities. For instance, in 2025, the State Election Commission of Bihar rolled out an e-voting application for municipal elections without any regulatory framework or transparency on how voter data would be collected, processed, or stored. The application also used facial recognition to verify the identity of voters, raising serious privacy concerns in a state with low digital literacy. Without adequate safeguards, such an application can not only undermine the secrecy of voting but also enable fraudulent voting and hamper the sanctity of elections.
Earlier, the National Informatics Centre Services Incorporated (NICSI) had floated a tender for the empanelment of private agencies for “surveilling and monitoring” voters using invasive FRT during the 2024 Lok Sabha general elections. The tender was later cancelled on the directions of the Election Commission of India, citing privacy concerns.
Reports have revealed the deployment of opaque algorithmic systems in the controversial Special Intensive Revision (SIR) of electoral rolls being undertaken by the Election Commission of India (ECI), whose impartiality is increasingly under question. ECI officials have recently admitted that digitization and translation errors in the Electoral Registration Officers’ Network (ERONET) software contributed to “logical discrepancy” notices being sent to voters in West Bengal.
Earlier, several voters had reported receiving unwarranted notices to produce evidence for inclusion in the state electoral rolls, possibly due to technical errors in data transformation leading to discrepancies in names. However, an independent investigation found that ECI introduced algorithmic mapping software midway through the voter list revision exercise without any instruction manuals or standard operating procedures (SOPs) on record, and without providing any public information to citizens.
While there is no public information on the functioning of the mapping software, interviews with block-level officers revealed that it flags suspect voter entries as “logical discrepancies.” Entries are flagged when the information provided by a voter does not match the 2002-2004 electoral roll, or when the software encounters an implausible age difference between a voter and their claimed parents in that roll. The opacity around the software’s deployment and the underlying logic used to flag suspected voters can exacerbate the risk of disenfranchisement in an already controversial revision exercise, which places the burden of proving the right to vote on citizens.
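Since no manual or SOP for the mapping software is on public record, the sketch below is only a hypothetical reconstruction of the two flagging criteria that block-level officers described: a failed match against the 2002-2004 roll, and an implausible parent-child age gap. The thresholds, field names, and the assumed gap between the old roll and the current revision are illustrative; the example shows how a minor transliteration difference alone can flag a genuine voter.

```python
# Hypothetical reconstruction of the flagging logic, based only on the two
# criteria block-level officers described. The thresholds, field names, and the
# assumed ~23-year gap between the 2002-2004 roll and the current revision are
# illustrative; no manual or SOP for the actual software is on public record.

MIN_PARENT_CHILD_GAP = 15   # assumed lower bound on a plausible age gap (years)
MAX_PARENT_CHILD_GAP = 60   # assumed upper bound (years)
YEARS_SINCE_OLD_ROLL = 23   # assumed gap between the old roll and the current revision

def flag_logical_discrepancies(voter: dict, old_roll: dict) -> list[str]:
    """Return the reasons an entry would be flagged under the assumed rules."""
    reasons = []
    parent = old_roll.get(voter["claimed_parent_name"])
    if parent is None:
        # An exact-match lookup fails on any transliteration or digitisation
        # difference in the name, flagging a genuine voter.
        reasons.append("claimed parent not found in 2002-2004 roll")
    else:
        parent_age_now = parent["age_in_old_roll"] + YEARS_SINCE_OLD_ROLL
        gap = parent_age_now - voter["age"]
        if not (MIN_PARENT_CHILD_GAP <= gap <= MAX_PARENT_CHILD_GAP):
            reasons.append(f"implausible parent-child age gap ({gap} years)")
    return reasons

# Example: the parent appears as "Sunil Kr. Das" in the old roll but the voter
# wrote "Sunil Kumar Das", so the lookup fails and the voter is flagged.
old_roll = {"Sunil Kr. Das": {"age_in_old_roll": 38}}
voter = {"name": "Rina Das", "age": 28, "claimed_parent_name": "Sunil Kumar Das"}
print(flag_logical_discrepancies(voter, old_roll))
```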
Recommendations
Recommendations for States
- Global discussions on AI governance must go beyond voluntary commitments from tech companies and urgently recognize the need for rights-respecting, robust legal regulation to address harms arising from the design, development, and deployment of AI systems, with clear obligations for all stakeholders across the AI value chain. States must deliberate on liability regimes, anti-trust laws, and mandatory transparency obligations for AI systems.
- States must draft regulations and policies through meaningful and transparent consultations that include civil society, especially those representing minority and marginalized communities.
- State regulations must affirm commitment to international human rights obligations codified in international covenants, including the International Covenant on Civil and Political Rights (ICCPR) and the International Covenant on Economic, Social, and Cultural Rights (ICESCR), and the UN Guiding Principles on Business and Human Rights (UNGPs).
- Any deployment of AI systems to assist or automate decision-making for public-service delivery and in high-risk use cases that impact access to education, housing, employment, credit, etc., must be done after consultation with local communities and must be subjected to human oversight, transparency disclosures, periodic risk assessments, including fundamental rights impact assessments, third-party audits, and regular monitoring. Local communities’ rights to demand explanations, seek human reassessment, grievance redressal, and recall of algorithmic systems must be recognized and protected.
- All procurement, development, and deployment of AI systems by state authorities, public sector enterprises, or law enforcement agencies must be transparent and subject to independent oversight, risk assessments, and robust monitoring. Transparency should be proactively ensured by incorporating standard disclosure terms in public tendering processes. Further, the enforcement of the Right to Information Act for all AI deployments in the public sector must be strengthened.
- Prohibit the use of predictive policing and the use of biometric and facial recognition systems for mass surveillance.
- Review the existing Digital Personal Data Protection Act, 2023, and the rules made under it to expressly provide for safeguards to personal data and uphold the right to privacy of citizens, including from state collection and processing, which must also be subject to principles of data minimization, purpose limitation, and storage limitation.
- Mandate meaningful Environmental and Social Impact Assessments, with local community participation, before establishing data centers.
- Implement a robust framework for whistleblower protection and legal protections for researchers.
- Fund independent public-interest research and longitudinal studies on ethical and responsible AI.
Recommendations for Industry
- Disclose information on the environmental impact of AI systems, including carbon emissions, energy use, and water consumption by data centers powering model training and inference.
- Ensure transparency in the data used for model training, including disclosure of data sources, dataset representativeness, and details on annotation methods.
- Ensure transparency on the objectives, limitations, and risks of AI systems, as well as public disclosure of testing, evaluation, and risk assessments undertaken on these systems.
- Disclose information about data annotation teams, including the training, support, and compensation provided to them.
- Be open to independent third-party audits and risk assessments.
- Establish clear mechanisms or protocols for human oversight and post-deployment monitoring.
- Establish robust incident reporting protocols.
- Enable meaningful participatory design and development of AI systems through collaboration with stakeholders, especially end-users and impacted members of marginalized communities, throughout the AI lifecycle.
- Establish systems for user feedback and reporting mechanisms for harmful outputs.
- Integrate diversity in teams across the AI lifecycle, including members of marginalized communities in important decision-making, design, development, testing, and monitoring roles.
Recommendations for Generative AI Content
- It is important to legally mandate greater transparency from Generative AI companies on their terms of service, enforcement mechanisms for violative content, safety filters, and other guardrails. They must also disclose information on the effectiveness of safety guardrails in different languages, categorized by different forms of harmful content, and across different regional and cultural contexts. Generative AI platforms should release periodic transparency reports covering statistics on harmful content encountered, the enforcement actions and safety mitigations implemented, and the effectiveness of those actions and mitigations.
- Generative AI systems must undergo independent third-party audits and publicly disclose the findings and recommendations. They must also publicly release follow-up reports on action taken on such recommendations.
- Liability for harmful speech generated by Generative AI models must be carefully and openly deliberated. One option, for instance, is to develop best practices for mitigating harmful content, with safe-harbour protections for AI companies contingent upon compliance with these best practices.
- For social media platforms, existing platform policies and applicable laws prohibiting harmful and illegal speech, including hate speech, Non-consensual Intimate Image Abuse (NCII) content, and Child Sexual Abuse Material (CSAM), apply equally to synthetically generated harmful content. It is important to mandate greater transparency to assess the fairness and effectiveness of content moderation on social media platforms through mechanisms like detailed transparency reporting, disclosure of information on the efficacy of automated content moderation systems in different languages, information on the support and training provided to human moderators working in different languages, and researcher access to platform data.
- Research has repeatedly highlighted the challenges surrounding content moderation in context-heavy speech for low-resource languages. This also creates barriers to effective action against hateful synthetic AI content. It is vital to improve existing content moderation systems. Social media platforms and Generative AI systems must ensure diversity in their content moderation teams, provide training and assistance to content moderators, and collaborate with independent fact-checkers, especially from Global South countries.
- While there is benefit in user awareness and transparency through labelling, watermarking, and other data provenance requirements, it is important to understand the technical limitations of these measures, which can be bypassed by bad actors. Even content authenticity initiatives like Coalition for Content Provenance and Authenticity (C2PA), while promising, are dependent on widespread adoption in order to be effective. Policymakers and platforms must also ensure that privacy and anonymity are not compromised by watermarking or data provenance requirements, as these can disproportionately impact the rights of marginalized communities in accessing online spaces. Further, compulsory labelling requirements without considerations of user awareness and without defining rational minimum thresholds for labelling risk inundating online content with labels that become effectively meaningless.
Recommendations for Civil Society
- Question the hype surrounding AI and the techno-solutionist and deregulation narratives promoted by states and corporations.
- Raise awareness of the fairness, transparency, and privacy risks of AI systems, and support impacted communities in understanding the possible harms they pose.
- Document cases of harm from the deployment of AI systems in the public sector and law enforcement.
- Funders must support independent public-interest research that critically examines the design, implementation, and impact of AI systems.
- Demand accountability from Big Tech and AI companies, as well as state authorities, on the design and deployment of AI products.
- Build alternative sustainable community-owned models of AI that prioritize public interest over private profit.
- Build channels for interdisciplinary dialogue, including computer scientists, lawyers, social science researchers, journalists, and AI ethicists.
- Build Global South coalitions and alliances for meaningful participation in international fora on AI governance.
- Critically examine the power asymmetries within civil society that marginalize grassroots organizations and vulnerable groups in technology policy discussions.