Social Media Trends Fuel AI Surveillance, Militarisation, and the Future of Targeted Hate

Photo: Boy Anthony via Shutterstock

Before Instagram turned into the curated feed of AI-generated content we see today, people were playing with altered versions of their own faces on TikTok. Its age-progression filter, which showed people ageing in real time on screen, quickly spread across platforms.

Zainab*, a marketing professional in Pakistan, remembers trying it with her mother a few years ago. On her screen, she watched her face morph into an older woman who resembled her grandmother. Beside her, her mother’s face softened into wrinkles she had never imagined seeing so suddenly. “No one prepares you to witness your parents grow old,” she says. “It was emotional, and also honestly unsettling as I looked at myself. I kept wondering how TikTok already knew what I’d look like in the future, and how it predicted a face that I have grown up around. It felt eerily familiar.”

Social media has always been a space where people participate in trends, conversations, and moments that promise visibility, and where the pursuit of likes, shares, comments, and reach has shaped how people express themselves. In this online attention economy, creativity is often driven not by authenticity but by a fear of invisibility, a compulsion to engage and to “connect with communities” that comes at a cost far greater than most users realise. As the American artist Richard Serra famously remarked, “If something is free, you are the product.” The phrase continues to haunt the digital age, forcing us to ask why so many of the platforms that appear to connect the world across borders and distances are free to use in the first place.

What does it mean when being a “product” is not just a metaphor? It means that every click, scroll, and upload becomes part of a vast marketplace of human behaviour. Big Tech companies profit not by charging users but by monetising their attention and data, essentially transforming personal information into advertising currency. Every interaction helps refine algorithms that predict what we buy, who we like, what we fear, and even how we vote. The more granular the data, the more valuable it becomes to the corporation.

Big Tech monetises attention by extracting behavioural patterns, preferences, and biometric data drawn from photos, feeding them into algorithms that refine the predictive machinery of surveillance capitalism. Over time, this infrastructure of “free” interaction becomes a training ground for more invasive technologies: the images users upload to feel seen or validated teach machines to see us better than we see ourselves. Zainab’s experience with the ageing filter on TikTok exemplifies this – she saw herself becoming someone she didn’t anticipate. When applied to generative AI, this dynamic no longer stops at fun family activities, advertising, content curation, or even data annotation; instead, it expands into training systems that can accurately predict and track human identities over time, transforming connection into control.

The logic of profit through data extraction extends seamlessly into viral social media trends that disguise surveillance as play. Take the infamous #10YearChallenge or #MeAt20 trends that flooded Twitter timelines around 2019, urging people to flaunt their decade-long “glow-up” with photographic evidence. What seems like a harmless exercise in nostalgia also functions as a mass, voluntary submission of neatly labelled data: pairs of photos of the same face, taken a known number of years apart. Unlike the late-night family ritual of flipping through old photo albums to revisit memories, these digital challenges are collective performances driven by a desire to belong to a virtual community, to participate in social events, and to be seen in an attempt to fit in.

More recently, with the commercialisation of generative AI, social media platforms are saturated with AI-generated imagery that blurs the boundaries between real memory and manipulation. Google’s Gemini appears to be leading this commercial wave, driving trends that turn people’s wedding photos into Ghibli-style art, while its latest update, Nano Banana, lets users generate Polaroid-like images of themselves with their younger selves, a celebrity crush, or even a deceased relative. These trends masquerade as acts of creativity or remembrance, but they are really engineered nostalgia traps that draw users in under the guise of emotional connection while harvesting intimate biometric data. Each interaction further distorts our perception of reality, allowing AI systems to perfect that illusion with every new image they generate.

Fun-For-Data to Militarise Identity

The price of participation, though invisible, is far from free. Each image uploaded, for example, to generate a Polaroid of oneself with one’s younger self, feeds the vast data pipelines that train AI to recognise patterns in human appearance over time, i.e., from childhood to adulthood – information that now sits in the hands of anyone willing to pay for access. Research into face-age estimation models shows that large datasets of age-annotated or age-progressed face images are already used to train models to recognise how individuals age across ethnicities, and that these models are considered useful for “intelligent surveillance” and other industries. However, these studies also acknowledge that ageing is far from a uniform process, as a wide range of factors, including race, health, diet, culture, and environment, shape how individuals age. At the same time, they note a critical gap in datasets that capture ageing as a diverse, intersectional process across different populations.

Strategically and economically, social media trends such as the Polaroid-style images or the earlier #10YearChallenge appear to be an ideal mechanism: they a) encourage people to contribute publicly available data that can help fill the research gaps around how people age under different conditions, and b) enable corporations to scrape that data without consequence in order to train AI models that are used far beyond simple text or image queries. The applications of such datasets and the resulting trained AI models are equally vast – many venture into policing, including deployment at borders and in law enforcement for profiling in the name of ‘predictive threat assessment’, as well as in militarised technologies to surveil and target individuals across different stages of life.

What begins as a cultural moment of connection through images ultimately reinforces the infrastructure of surveillance capitalism and ties directly into the militarised applications of predictive identity technologies. For instance, research has explored the wide variety of potential applications of AI in defence settings – from autonomous drones to target identification – stressing the need for policymaking informed by public attitudes to ensure responsible governance. A practical example has been documented in Palestine, where the Israel Defense Forces (IDF) used AI-powered systems like Lavender, Where’s Daddy, and others to prepare a “kill list” and target Palestinians. Investigators found that operators sometimes approved a strike in as little as 20 seconds. These technologies have, in part, been enabled by Big Tech companies like Google, which granted the IDF access to Google Photos’ facial recognition data to instrumentalise the so-called kill list against Palestinians.

In this context, social media trends that generate vast quantities of biometric data (including age progression) become more than cultural quirks; they turn into training material for models that can re-identify, predict, and track individuals over time. As biometric surveillance migrates from policing at borders and in cities to warfare, the implications for communities already subject to profiling, especially in the global South, become all the more sinister.

Authoritarian Future of AI-enabled Weapons

In Pakistan, the “playful” veneer of social media trends, as Zainab experienced on TikTok and as many now do by generating Polaroid-style images through Google’s Nano Banana, takes on a different tone against a backdrop of authoritarian control, shrinking civic space, and institutionalised gendered violence. As millions of Pakistanis upload images of themselves through camera filters or as Polaroid-style manufactured memories, they do so in a country where human rights defenders, journalists, activists, marginalised minorities, and women are already subject to systematic surveillance, harassment, and enforced disappearances. The widespread participation in viral image-driven trends cannot be separated from the state’s broader data ambitions, geared towards a governance structure equipped with deep-packet-inspection firewalls, military-grade phone and internet tapping systems, and little in the way of independent legal safeguards.

Viewed through the lens of gender and marginalisation, the stakes in Pakistan are especially high: surveillance technologies that trace digital footprints, facial features, and online behaviours perpetuate existing regimes of social control – from so-called honour-based violence to the targeted tracking of female journalists and human rights defenders, to the gendered targeting of Baloch, Afghan, or Pashtun men. In this context, the seemingly innocuous act of participating in an AI trend or uploading a decade-old photograph is not just self-expression; rather, it becomes data production for systems that may map how a human ages, how their face changes, and how they might be identified across time. This is occurring in a country where the right to privacy is weak and digital oversight is minimal. Voluntarily sharing age-progressed images can easily feed into emerging predictive tools, with real-world consequences for surveillance, profiling, and social control.

It is worth noting that while the exact ways in which age-progression and facial-prediction technologies might be used against civilians in Pakistan remain speculative, the broader field of artificial intelligence in defence settings is already moving at an accelerated pace and points to almost boundless applications that can very well be perfected on civilians before they are deployed on the battlefield, and vice versa. It is within the realm of possibility that AI-enabled, military-grade surveillance technology that accurately identifies individuals over time as they age could be weaponised in a country where policymakers and law enforcement authorities focus on controlling citizens rather than protecting them, and where the state has already spent millions of dollars on such technology to use against civilians in the name of ‘national security’.

To establish a more concrete image of how this might play out in Pakistan, consider the ongoing state crackdown against Afghan refugees and their families. Many Afghans who sought refuge in Pakistan during the wars of the early 2000s have lived there for decades, contributed to the economy, and built lives in the country. In recent years, however, the Pakistani government has accelerated deportations and detentions of Afghans, including those holding long-standing documentation, under programmes of what authorities term “illegal foreigner” repatriation. If an age-progression-informed AI tool were in operation, the children of these families, whose photos may already exist in a government database or online through innocuous social media filters, could find themselves permanently linked to a profile categorised as a “future target” by an algorithm trained to recognise how they will look years from now, depriving them of safe return, of a homeland, or even of anonymity.

To localise the example further, in the province of Balochistan, enforced disappearances and state violence against the Baloch community are well documented. The families of men who have been forcibly disappeared live under the shadow of surveillance and are constantly treated as suspects or outsiders in their own country. In this environment, once the male children of detained Baloch men grow up, they risk targeted repression based on their familial and ethnic affiliations. Add to this a future where AI-driven facial prediction knows how they might age or evolve visually, and the result is a mechanism for long-term tracking and state repression whose impact spans generations.

In a similar vein, the use of this technology becomes especially worrying when applied to journalists, human rights activists, and dissidents in Pakistan, whose communications and movements are already under heavy scrutiny and who live under the constant threat of being labelled ‘anti-state’. The precision of AI-enabled predictive technology in the near future means there will be no safe space for individuals challenging power in the country. We are not simply talking about being seen online, but about being predicted, mapped, and ultimately deprived of the safe spaces once offered by anonymity or invisibility, online and offline.

At this point, it no longer matters whether someone participated in a specific social media trend that contributed to training datasets for AI models. The reality is that once sufficient data has been gathered, machine learning tools can predict human-ageing patterns with little additional input and with high accuracy. In other words, the saturated pool of voluntary image contributions may prompt a shift from data collection to data exploitation, where algorithms become self-sustaining, requiring only minimal new input to generate far-reaching predictions.

These visual trends acquire a sharper political meaning when we consider how Pakistan’s surveillance architecture is wired for repression. The deployment of monitoring systems such as the Lawful Intercept Management System (LIMS) and the Web Monitoring System (WMS 2.0) has enabled authorities to scrutinise mobile phones, internet traffic, and social media content at scale, with minimal legal recourse or transparency. When AI systems trained on vast public imagery have the potential to reconstruct or predict an individual’s face years later, we must ask: who will be targeted, how, and on what basis? In Pakistan’s fraught landscape of ethnicity-based profiling, religious extremist narratives, and state-military surveillance, the linkage between voluntary image sharing and involuntary identification points towards a potential tool for the automation of hate, marginalisation, and control.

Big Tech companies are swiftly becoming the next generation of defence contractors, and AI is central to their business model. What appears innocuous today – ageing filters, nostalgia trends, or casual image sharing – will strengthen existing systems of oppression tomorrow. In a global environment where civil liberties are under threat from the convergence of state and corporate power, every voluntary online activity becomes a potential entry in a defence dataset, with very real consequences for people already living on the margins.

This is not to take the joy out of online activities, but to recognise what our participation enables. It is a crucial reminder that we need to collectively demand and take control of our data and its privacy, and push for robust regulation that centres the rights, safety, and interests of users rather than the political or financial interests of the powerful.

(Hija Kamran is a policy advocate specializing in technology and human rights in the Global South.)
