Artificial intelligence-driven facial recognition technology (FRT) has always carried the risk of being weaponized by state institutions, including local and national security forces. With greater data-consolidation capabilities, AI amplifies extended dataveillance, giving those institutions better tools to upgrade their surveillance tactics and, potentially, to target minorities and political opponents. Dataveillance is a form of surveillance that relies on sorting through collections of data to target and identify, monitor, track, regulate, and predict the activities of specific individuals and groups.
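To make the mechanism concrete, the following is a minimal, hypothetical Python sketch of dataveillance at its core: joining otherwise separate records on a shared attribute to flag individuals. The datasets, field names, and flagging rule are invented for illustration and do not describe any real system.

```python
# Hypothetical illustration of dataveillance: cross-referencing two
# unrelated datasets to flag individuals. All records are invented.

telecom_records = [
    {"id": "A101", "cell_tower": "T-44", "timestamp": "2019-12-15T14:02"},
    {"id": "B202", "cell_tower": "T-44", "timestamp": "2019-12-15T14:05"},
    {"id": "C303", "cell_tower": "T-09", "timestamp": "2019-12-15T14:07"},
]

# A second dataset: cell towers covering a protest site.
protest_site_towers = {"T-44"}

def flag_individuals(records, towers_of_interest):
    """Return IDs whose location metadata intersects a site of interest."""
    return {r["id"] for r in records if r["cell_tower"] in towers_of_interest}

print(flag_individuals(telecom_records, protest_site_towers))
# {'A101', 'B202'} -- two people flagged purely by joining the datasets
```

Neither dataset is especially sensitive on its own; the surveillance power comes from the join.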
India, with an emerging base of around 750-800 million active internet users, provides a compelling case study for analyzing AI misuse. As a non-Western democracy, India offers an opportunity to critique existing and proposed frameworks for regulating state use of surveillance and AI technology, for public safety and beyond.
The deterioration of human rights protections in India, and the government’s targeting of its critics, minorities, and other vulnerable groups, creates significant risk of technology misuse. Existing state and military deployment of FRT in policing points to a larger threat: the impending expansion of harmful uses of AI systems.
Academic frameworks proposed to assess the threats and regulate AI systems often fall short when applied to complex situations where data collection and transparency are murky at best.
Existing AI Misuse and Public Safety
In recent years, the Indian government has invested significantly in developing AI applications to enhance military defense and public safety measures. The Indian military aims to spend nearly $50 million annually to develop and deploy AI tools. As of 2024, the army has already deployed 140 AI-based surveillance systems across various border sectors. At an AI in Defense symposium, Defense Minister Rajnath Singh launched 75 AI technologies, including newly developed products in robotics and intelligence surveillance.
In line with this enthusiasm, the government has entered into partnerships with the US and Israel to develop AI-driven technologies, though the specifics of their defense applications remain unclear.
The government’s pattern of deploying AI surveillance domestically under the guise of public safety is increasingly evident. During the Citizenship Amendment Act (CAA) protests in 2019, protestors were encouraged to wear masks or paint their faces out of fear of being recorded and added to a facial recognition database.
The Automated Facial Recognition System (AFRS), initiated in 2018 to identify missing children, was instead used on footage of the protestors to identify “rabble rousers.” Later, in December 2019, the Indian Express reported that the system was used at a political event in Delhi featuring Narendra Modi: attendees were recorded as they entered the venue, and the live feed was run through the AFRS to flag potential protestors who might raise banners.
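The AFRS’s internal design is not public, but systems of this kind generally work by comparing face embeddings extracted from a live feed against a stored gallery and flagging matches above a similarity threshold. The sketch below illustrates that general technique only; the toy vectors, names, and threshold are assumptions, not details of the AFRS.

```python
import numpy as np

# Generic sketch of embedding-based face matching. Real systems use
# learned embeddings (e.g., from a deep CNN); toy random vectors
# stand in for them here.

rng = np.random.default_rng(0)

# "Gallery": embeddings of people already enrolled in the database.
gallery = {name: rng.normal(size=128) for name in ["person_a", "person_b"]}

def cosine_similarity(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def match_face(probe_embedding, gallery, threshold=0.6):
    """Return the most similar gallery identity, if the similarity
    clears the (arbitrary, assumed) threshold."""
    best_name, best_score = None, -1.0
    for name, emb in gallery.items():
        score = cosine_similarity(probe_embedding, emb)
        if score > best_score:
            best_name, best_score = name, score
    return (best_name, best_score) if best_score >= threshold else (None, best_score)

# A "probe" face from a live feed: here, a noisy copy of person_a.
probe = gallery["person_a"] + rng.normal(scale=0.3, size=128)
print(match_face(probe, gallery))  # flags person_a with high similarity
```

Even in this toy setting, the arbitrary threshold determines who gets flagged; in deployed systems, that same design choice governs how often innocent people are falsely matched.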
Furthermore, Indian urban spaces are notoriously well surveilled, with Delhi and Chennai ranking among the most surveilled cities in the world, ahead of several Chinese cities. In Andhra Pradesh, a state government-sponsored surveillance system integrated with Aadhaar data to “improve efficiency of government schemes” has been in effect since as far back as 2018. This program, implemented without public notice or debate, includes a voluntary option for people to have video cameras installed inside their homes to catch thieves in action.
The implications for privacy rights aside, the voluntary nature of this option can easily be manipulated to surveil vulnerable people who may not have the power or privilege to refuse the police. As of late 2024, the system remains in operation, still nominally voluntary, as part of the Real-Time Governance Centre.
On the topic of vulnerable people and regions, FRT has been used regularly in Kashmir to scan crowds for persons of interest. With 300 AI-enabled cameras in Srinagar and a newly installed FRT system along the Jammu-Srinagar National Highway to track miscreants and “preempt attacks,” the surveillance state is well established in the public space.
In the digital space, journalists and locals are regularly harassed for social media posts through algorithmic tracking. In the 2024 Indian Lok Sabha elections, independent candidate Hafiz Sikander became the first to file his nomination papers with a GPS tracker attached to his ankle. Tracking the misuse of existing digital technology is paramount: these technologies are only a few steps away from being AI-powered and, thus, more efficiently deployed.
AI and Social Media
AI misuse that stems directly from the state machinery is only one of two channels through which state-sponsored misuse affects people. There are also instances of party-supported agencies using automated scripts to generate and spread misinformation and military propaganda on social media. Unlike policing, this approach shapes public opinion by valorizing the Indian military, spreading Islamophobia, and nurturing public acceptance of more stringent policing policies.
In 2024, a network of 1,500 fake, AI-driven social media accounts spread identical pro-government, pro-Army content—often promoting propaganda-like narratives.
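Networks like this are typically surfaced by looking for many distinct accounts pushing identical or near-identical text. A minimal sketch of that detection idea, with invented accounts and posts, might look like this:

```python
import hashlib
from collections import defaultdict

# Toy sketch of surfacing coordinated networks: group posts by a hash
# of their normalized text and flag any message pushed verbatim by
# many distinct accounts. All accounts and posts are invented.

posts = [
    ("acct_001", "Our army keeps us safe. Share widely!"),
    ("acct_002", "Our army keeps us safe.  Share widely!"),
    ("acct_003", "our army keeps us safe. share widely!"),
    ("acct_004", "Weather looks great today."),
]

def normalize(text):
    """Lowercase and collapse whitespace so trivial edits still match."""
    return " ".join(text.lower().split())

def find_coordinated(posts, min_accounts=3):
    groups = defaultdict(set)
    for account, text in posts:
        digest = hashlib.sha256(normalize(text).encode()).hexdigest()
        groups[digest].add(account)
    return [accounts for accounts in groups.values() if len(accounts) >= min_accounts]

print(find_coordinated(posts))
# [{'acct_001', 'acct_002', 'acct_003'}] -- identical content across accounts
```

Real investigations add posting-time correlation and account-creation patterns, but the core signal is the same: many accounts, one message.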
During the same 2024 Lok Sabha elections, India’s Election Commission issued an advisory warning parties against using AI-generated deepfakes, yet the government maintained a generally hands-off approach to regulating the AI landscape, including its possible misuse in elections. Political parties from across the spectrum used readily available, low-cost AI tools to create manipulated videos, significantly complicating efforts to control misinformation.
To test whether social media platforms like Meta would host such ads, India Civil Watch International (ICWI) and Ekō, a corporate accountability organization, submitted 14 AI-generated political ads containing Hindu nationalist language and anti-Muslim hate speech. Meta approved the adverts. Further reports later documented the widespread prevalence of automated bots and deepfakes during election season in India.
The experiment exposed the ease with which AI-driven content generation and lax regulation on social media platforms work together to promote hate speech online. Working in tandem, these applications have set the scene for larger and more complex misuse of AI in India. The AI and digital landscape remains largely unregulated, except when targeting dissent, which enables its misuse in elections and in public and online spaces alike.
Lack of Regulation
Most Western democracies have neutral regulatory bodies to protect the privacy and related rights of their citizens. Proposed AI regulatory frameworks assume independent oversight bodies, a willingness for state self-regulation, or enforceable user privacy rights and user autonomy.
India either lacks independent regulatory bodies altogether or has agencies without genuine authority. Even where privacy and data protection laws exist, the state and military have been granted exceptional exemptions from data privacy acts, Right to Information (RTI) requests, and policing enforcement. With extensive dataveillance and technological capacities, the risk of abuse of power is greater than ever.
Moreover, the present guidelines do little to prevent a government from using intelligent systems against its own citizens while remaining “legal” under national laws.
Diplomatic pressure from the UN Office of the High Commissioner for Human Rights (OHCHR), which has repeatedly highlighted human rights abuses in Kashmir, has had little effect. Similarly, despite India being a signatory to the International Covenant on Civil and Political Rights (ICCPR) and operating under the UN resolution on the Right to Privacy in the Digital Age, its conduct at home remains unchanged.
Such frameworks, while providing a solid starting point, cannot effectively constrain governments with demonstrable intent to misuse AI for political gain. As with the recent and still-developing understanding of data privacy, a concerted effort to create public awareness is needed, not just of AI safety but of digital ethics and digital governance practices: education that can both counter state-spread misinformation and create urgency for the state to adopt best practices.
(Param Raval is an AI professional based in Montréal and a graduate of Mila – Quebec Artificial Intelligence Institute. He writes about safety and ethics in AI and human rights in India.)