
X’s Community Notes and the South Asian Misinformation Crisis

This report examines the performance of X’s Community Notes feature in South Asia, with a focus on adoption, linguistic representation, and impact in the region.

Introduction

Community Notes (originally launched as Birdwatch) is a crowdsourced fact-checking feature on X (formerly Twitter) that allows users to add contextual notes to posts that may be misleading. The initiative aims to promote transparency and collective moderation. While Community Notes has shown promise in combating misinformation in some English-language contexts, its effectiveness varies greatly across regions and languages, particularly in areas marked by sociopolitical fragmentation and limited content moderation infrastructure.

South Asia represents one such high-risk region. It faces a convergence of factors that make it especially vulnerable to misinformation and disinformation. These include high levels of political polarisation, deep-rooted religious and ethnic divisions, inconsistent digital literacy, and a massive user base generating content in numerous local languages. 

The World Economic Forum’s (WEF) “Global Risks Report 2024” identifies mis- and disinformation as the most severe global risk over a two-year horizon, with particularly acute consequences in regions where trust in institutions is low and media ecosystems are fragmented. Despite this warning, Community Notes remains poorly localised for South Asia, with limited language coverage, low participation from regional contributors, and opaque rules for note visibility that tend to privilege English content and U.S.-centric engagement patterns. 

Findings from the Center for the Study of Organized Hate (CSOH) further underscore the consequences of these design failures. In our report, “Inside the Misinformation and Disinformation War Between India and Pakistan,” we traced how political influencers and state-affiliated accounts in both countries used coordinated disinformation campaigns to stoke nationalism and delegitimize opposing voices. 

This report examines the performance of Community Notes in South Asia by analyzing its adoption, linguistic representation, and impact in the region. We leveraged a dataset of Community Notes written in South Asian languages (with English translations and annotations indicating political or potentially harmful tones), alongside a global breakdown of note language distribution, to identify key shortcomings and opportunities.

The analysis covers the underrepresentation of South Asian languages in Community Notes, India-specific disparities between note volume and user population, temporal spikes around major events, qualitative biases in note content, and the implications for platform integrity.

Finally, concrete recommendations are proposed for X and Meta (which is piloting a similar approach) to improve Community Notes’ coverage and neutrality in South Asia. As the WEF “Global Risks Report 2024” makes clear, failure to address these challenges not only undermines the effectiveness of features like Community Notes but also threatens broader societal cohesion, democratic norms, and public trust in an increasingly volatile information landscape.

How Community Notes Work

Community Notes is X’s crowdsourced method of adding context to posts that might mislead or confuse readers. Any long-standing, phone-verified account can volunteer to be a contributor, but newcomers begin as “raters.” They first earn credibility by judging whether existing notes are useful. Only when they have built enough “rating impact” are they allowed to draft notes of their own. This apprenticeship process is meant to keep drive-by trolls from flooding the system with low-quality text.

When a contributor submits a note, it does not appear on the post straight away. Instead, the draft enters an internal queue labelled Needs More Ratings. Other contributors, all of whom see the note privately, rate it “Helpful,” “Somewhat Helpful,” or “Not Helpful.” Follow-on screens prompt the rater to provide reasoning for their rating. From those votes the software calculates two scores. The first is a simple helpfulness ratio: if at least 40 percent of raters say the note helps, it clears the “currently rated helpful” bar. The second test is where Community Notes departs from ordinary up-voting: the algorithm checks whether the people who liked the note usually disagree with one another on other notes. This is the so-called “bridging” rule.

Only when both gates open does the status flip to Currently Published and the note becomes visible to every X user under the original post. If either score later falls—because new raters find errors, or because the ideological mix narrows—the note automatically disappears again until the balance is restored. The same happens if a separate policy check detects personal data, hate speech, or other rule violations: the note is held for human review even if its scores look perfect.
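The production scorer infers viewpoint groupings from the full rating history via matrix factorization; the toy Python sketch below only illustrates the two gates described above, with hand-labelled clusters standing in for what the real algorithm learns. All names and thresholds here are illustrative, not X’s actual code.

```python
# Toy illustration of the two publication gates. Cluster labels are
# illustrative stand-ins: the production scorer infers viewpoint groupings
# from rating history rather than from explicit labels.
HELPFUL_THRESHOLD = 0.40  # the "currently rated helpful" bar described above

def note_status(ratings):
    """Compute a note's status from (cluster, verdict) pairs,
    where verdict is "helpful", "somewhat", or "not"."""
    if not ratings:
        return "Needs More Ratings"

    # Gate 1: the simple helpfulness ratio.
    endorsing_clusters = [c for c, verdict in ratings if verdict == "helpful"]
    if len(endorsing_clusters) / len(ratings) < HELPFUL_THRESHOLD:
        return "Needs More Ratings"

    # Gate 2: the "bridging" rule -- endorsement must span at least two
    # viewpoint clusters that usually disagree elsewhere.
    if len(set(endorsing_clusters)) < 2:
        return "Needs More Ratings"  # one-sided support stays in limbo

    return "Currently Published"

# A note praised only by one cluster never publishes, however high its ratio;
# the status is recomputed as ratings arrive, so it can also flip back later.
print(note_status([("A", "helpful")] * 10))                      # Needs More Ratings
print(note_status([("A", "helpful")] * 6 + [("B", "helpful")]))  # Currently Published
```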

Readers therefore see a note only after two kinds of agreement have been reached: most raters find it helpful, and at least two different “viewpoint clusters” reach that judgment together. X publishes the underlying code and a daily data dump of every note and rating, so outside researchers can verify that the thresholds are being applied as advertised.
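That verification is straightforward in practice. A minimal sketch using pandas, assuming the tab-separated layout of the public dump (the file and column names below follow the published format at the time of writing and should be checked against X’s current documentation):

```python
import pandas as pd

# The daily dump ships as tab-separated files. File and column names here
# are assumptions based on the published format and may change.
notes = pd.read_csv("notes-00000.tsv", sep="\t")
status = pd.read_csv("noteStatusHistory-00000.tsv", sep="\t")

# Share of all notes that currently clear both gates.
published = status[status["currentStatus"] == "CURRENTLY_RATED_HELPFUL"]
print(f"{len(published):,} of {len(status):,} notes are currently rated helpful "
      f"({len(published) / len(status):.1%})")
```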

The two checks prevent a partisan, one-sided push from being mistaken for a genuine consensus. They also explain why some clearly written notes with very high helpfulness scores never surface: if all of the raters come from the same side of the platform’s ideological map, the bridging score stays low and the note remains stuck in limbo. In large language communities, such as English, Spanish, and Japanese, diverse agreement is easy to come by. In smaller cohorts, especially where contributors share similar political leanings, good notes can wait indefinitely for the stray dissenting vote that will push them over the line. That tension between accuracy and diversity is at the heart of Community Notes’ power and its most persistent growing pain.

Key Findings

  • Severe linguistic imbalance: Out of 1.85 million public Community Notes, only 1,737 are written in South Asian languages—barely 0.094% of the corpus, despite the region accounting for ~25.33% of the world’s population and ~4.9% of X’s monthly active users. The largest cohort of South Asian-language notes, in Hindi, still represents less than five-hundredths of one percent of all notes. (A sketch reproducing this breakdown from the public data follows this list.)
  • Election weeks drive nearly all activity spikes: Historically, weekly note counts for Hindi, Urdu, Bengali, Tamil, and other languages remain in the single digits until a national or state poll approaches, then jump 10- to 20-fold for roughly two to three weeks before falling back to baseline. The few non-election spikes fit the same event-driven pattern: most South Asian language authors engage only when politics is on the ballot or a sensational local rumor spreads.
  • Structural barriers to visibility for South Asian-language notes: Fewer than 40% of South Asian language drafts ever meet the “helpful + bridging” requirement, compared with ~65% for English. Notes often fail for non-content reasons—no divergent-cluster endorsement, not enough raters—showing that low reviewer density, not poor note quality alone, throttles coverage. In pools this small, the requirement that endorsing raters must have disagreed on other posts becomes effectively arbitrary, and it sits at odds with what “neutral” community notes should mean.
  • Misuse attempts reveal onboarding gaps: Case studies below show contributors submitting notes that contain personal insults (“Italian-mafia supporters”) or blanket abuse of public institutions (“every last one is a haram-khor [parasite/cheat]”). Although both examples were ultimately blocked—one by the helpfulness screen, one by the bridging test—the effort and latency expose weak guidance on tone, sourcing and scope, especially in polarised environments.
  • English-only interface and opaque crisis rules sap trust: The Community Notes interface, guidelines, and sign-up flow are still heavily English-centric, creating barriers for contributors who write in Hindi, Urdu, Tamil, Nepali, Bengali, and other South Asian languages. X has also not released a crisis-response playbook explaining how Community Notes would function during floods, communal violence, or national elections.
  • Low coverage enables unchecked misinformation: The scarcity of Community Notes in South Asian languages leaves significant informational gaps, especially in fast-moving or high-stakes contexts. With few localized corrections and limited reviewer engagement, misinformation in these languages faces less resistance, allowing false narratives to circulate more freely than in English or better-covered regions.
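
Both the language-share and spike findings above can be reproduced from the same public dump. A minimal sketch, assuming the summary and createdAtMillis columns and using the open-source langdetect package as a stand-in for whichever language identifier one prefers:

```python
import pandas as pd
from langdetect import detect
from langdetect.lang_detect_exception import LangDetectException

# ISO 639-1 codes for major South Asian languages that langdetect supports.
SOUTH_ASIAN = {"hi", "ur", "bn", "ta", "te", "ml", "mr", "pa", "ne", "gu", "kn"}

notes = pd.read_csv("notes-00000.tsv", sep="\t")  # assumed public-dump layout

def lang_of(text):
    try:
        return detect(str(text))
    except LangDetectException:  # empty or non-linguistic note text
        return "und"

notes["lang"] = notes["summary"].map(lang_of)
sa = notes[notes["lang"].isin(SOUTH_ASIAN)].copy()
print(f"South Asian share of all notes: {len(sa) / len(notes):.3%}")

# Weekly counts per language surface the election-week spikes.
sa["week"] = pd.to_datetime(sa["createdAtMillis"], unit="ms").dt.to_period("W")
weekly = sa.groupby(["lang", "week"]).size()
print(weekly[weekly > weekly.groupby("lang").transform("median") * 10])
```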

Recommendations for Improving Coverage in South Asia

Several actionable steps targeting coverage gaps, contributor onboarding, and neutrality safeguards can strengthen the Community Notes program in South Asia (with lessons applicable to similar efforts on other platforms):

  • Aggressive expansion of the contributor base in local languages: South Asian languages account for far less than one-tenth of one percent of all Community Notes, so growth must start with sheer head-count. X should launch sustained recruitment drives focused on Hindi, Bengali, Tamil, Urdu and other major languages. Tactics include in-app banners aimed at people who frequently report or fact-check content, direct invitations to journalism schools and fact-checking NGOs across the region, and prompts to heavy contributors on adjacent platforms. Because note volume spikes during elections, recruitment should ramp up six to eight weeks before each national or state poll; strengthening contributor numbers in that window, or even offering a “fast-track” onboarding flow, will ensure high-quality notes reach voters when they matter most. Crucially, outreach must seek ideological, regional and rural-versus-urban diversity so that no single bloc dominates, say, Hindi notes.
  • Improved onboarding and training: Simply adding contributors isn’t enough; they should be well-prepared to write high-quality, unbiased notes. X can improve the onboarding process with localized guidance: for example, short tutorials in Hindi, Urdu, Tamil, Nepali, Bengali, etc. that emphasize neutrality, sourcing evidence, avoiding opinionated language, and abiding by platform rules. Providing example “model” notes in each language (perhaps curated from the best notes so far) could set a standard. Additionally, new contributors might go through a probation phase where their notes are reviewed more stringently (by algorithms or experienced community members) for tone and accuracy. This can filter out bad habits early. Given the instances of slur usage and partisan phrasing observed in some notes, it’s clear that many users need better orientation on how to write a Community Note that helps readers rather than attacking someone.
  • Interface and visibility improvements for local notes: The Community Notes interface could be enhanced to support multilingual context and encourage cross-language collaboration. One challenge in low-volume languages is getting enough reviewers; one idea is to use machine translation to show notes to speakers of other languages for rating purposes. For example, a Hindi, Bengali, or Nepali note could be machine-translated into English and shown to contributors outside that language community to rate its helpfulness (a sketch of this routing appears after this list). This could help overcome the scarcity of native raters in those languages and ensure that notes meet a broader consensus (as well as catch any blatantly false or policy-violating content). Additionally, X can make it easier for users to discover notes in their language—perhaps a filter to view “Notes in your language,” or notifications when a note is added to a trending local post. Increasing the visibility of Community Notes among South Asian language speakers will motivate more people to trust and participate in the process.
  • Localized moderation and oversight: Given the nuanced cultural and political contexts in South Asia, it may be wise for X to involve regional experts or moderators (at least in an advisory or emergency capacity) to monitor Community Notes content quality. This doesn’t mean reverting to top-down moderation—the community should still lead—but rather having a safety net to catch egregious misuse. For instance, if a note that is clearly a personal attack gains traction, a moderator or an empowered community review board could step in to remove or correct it. Alternatively, X could augment the Community Notes algorithm with additional signals: for example, a natural-language civility filter that detects when a note’s text is highly emotional or contains name-calling and either prevents it from being posted or requires extra reviewer consensus (a sketch follows this list). The goal is to enforce baseline civility and factuality. Striking a balance will be tricky: over-moderation could bias the system, but under-moderation has already allowed problematic notes through. Light-touch human oversight, especially for extreme cases in languages that X’s core team might not fully understand, could improve integrity.
  • Create and publish a transparent crisis-response protocol: Other large platforms, such as Meta and YouTube, maintain publicly described “rapid response” or “election command” teams and issue blog posts before major events detailing the safeguards they will deploy. By contrast, X has disclosed almost nothing about how its own crisis-response protocol works; even the European Commission had to request access to that information formally under the Digital Services Act. A concise public playbook, updated yearly, should state what qualifies as a crisis, which internal team is paged, and what temporary rule changes will apply in affected languages. Even a modest level of transparency would reassure South Asian users and partners that Community Notes will scale predictably, not arbitrarily, when trustworthy information is most critical.
  • Algorithmic adjustments for fairness: The note ranking and publication algorithm may need tweaks for multilingual contexts. By design, Community Notes relies on a scoring system that seeks consensus from people “who don’t usually agree” to avoid echo-chamber approvals. In South Asia, the notion of “different perspectives” might map to political or communal divides (e.g., ensuring both pro-government and opposition-leaning contributors find a note helpful). X could consider explicitly modeling this by clustering contributors based on their rating patterns on known divisive topics, then requiring cross-cluster approval before a note is marked helpful. Another approach is to temporarily lower the helpfulness threshold for underrepresented-language notes to get the system bootstrapped; a sketch of such a calibration appears after this list. Currently, a Hindi note might fail to be published simply because there are too few Hindi speakers rating it, not necessarily because the note is poor. Adjusting thresholds or giving extra visibility to new notes in those languages (so they can garner the needed votes) would prevent good notes from languishing unseen. Essentially, the algorithm must be calibrated to the reality of smaller, nascent communities so it doesn’t unintentionally stifle them.
  • Cross-platform and community partnerships: The problem of misinformation in regional languages is not unique to X. Collaboration could help. X might partner with fact-checking groups and initiatives across South Asia, encouraging them to use Community Notes as an additional channel for their debunks. If respected fact-checkers occasionally act as contributors or advise contributors, it would raise the bar for note quality. Likewise, Meta’s interest in Community Notes-like systems could pave the way for knowledge sharing—for example, developing common standards for community fact-check notes in non-English languages, or even a shared database of multilingual fact-check notes that both platforms draw on. While Meta and X are competitors, when it comes to safeguarding election integrity and public discourse, some alignment on best practices can benefit all (indeed, there have been cross-platform misinformation task forces for past elections).
  • User education and trust-building: Finally, to make Community Notes truly effective in South Asia, the general user base needs to trust and pay attention to them. X should invest in user education campaigns—for example, during elections or other misinformation-prone events, prominently highlight Community Notes on the platform (through “Learn more about Community Notes” banners or prompts). X should clearly explain that notes are collaboratively written by the community, and invite users to rate notes for quality. When users feel they have a stake in the process (even just by rating notes rather than writing them), it improves perceived legitimacy. Also, success stories should be publicized: if a viral piece of fake news was debunked by Community Notes, showcase that fact. This can shift the narrative from “random users write these notes” to “the community together can stop misinformation,” thereby encouraging a broader culture of participation. Over time, as neutrality measures take effect and obvious biases are curbed, hopefully the reputation of Community Notes will improve among South Asian online communities.
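
On the cross-language rating idea above: a minimal sketch of how a low-volume-language note might be fanned out to raters in other languages. Nothing here reflects X’s actual pipeline; translate() is a hypothetical stand-in for any machine-translation service.

```python
from dataclasses import dataclass

@dataclass
class Note:
    note_id: str
    lang: str   # ISO 639-1 code of the note's language
    text: str

def translate(text: str, target_lang: str) -> str:
    """Hypothetical stand-in for a machine-translation call."""
    raise NotImplementedError

def assign_raters(note: Note, raters_by_lang: dict[str, list[str]],
                  min_raters: int = 5) -> dict[str, str]:
    """Map rater IDs to the text each should see. If the note's own
    language pool is too small, fan out a translated copy to other pools."""
    assignments = {r: note.text for r in raters_by_lang.get(note.lang, [])}
    if len(assignments) >= min_raters:
        return assignments  # enough native raters; no translation needed
    for lang, raters in raters_by_lang.items():
        if lang == note.lang:
            continue
        translated = translate(note.text, lang)
        assignments.update({r: translated for r in raters})
    return assignments
```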
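On the civility-filter idea: a deliberately crude sketch, assuming a keyword list curated by regional moderators. A production system would use a trained multilingual classifier rather than regexes, but the routing logic would look similar.

```python
import re

# Illustrative entries only; a real list would be curated per language by
# regional moderators, and a trained classifier would replace the regex.
SLUR_OR_INSULT = re.compile(r"haram[- ]?khor|mafia supporters?", re.IGNORECASE)

def civility_action(note_text: str) -> str:
    """Route a draft note based on crude incivility signals."""
    if SLUR_OR_INSULT.search(note_text):
        return "hold_for_extra_review"   # require extra reviewer consensus
    # Case-based heuristics only apply to Latin-script text; most South
    # Asian scripts are unicameral, so a real filter needs other signals.
    caps_ratio = sum(c.isupper() for c in note_text) / max(len(note_text), 1)
    if note_text.count("!") >= 3 or caps_ratio > 0.3:
        return "flag_emotional_tone"     # nudge the author to rephrase
    return "proceed_to_rating"
```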
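And on threshold calibration: a minimal sketch of a language-aware publication bar. The 0.40 threshold echoes the bar described earlier; every other number is an illustrative assumption, and the bridging requirement is deliberately left untouched.

```python
BASE_THRESHOLD = 0.40  # the "currently rated helpful" bar described earlier
BASE_QUORUM = 5        # assumed minimum number of ratings

# Illustrative set of languages still being bootstrapped.
BOOTSTRAP_LANGS = {"hi", "ur", "bn", "ta", "ne"}

def publication_bar(lang: str, active_raters: int) -> tuple[float, int]:
    """Relax the bar slightly while a language's rater pool is small."""
    if lang in BOOTSTRAP_LANGS and active_raters < 50:
        return BASE_THRESHOLD - 0.05, max(3, BASE_QUORUM - 2)
    return BASE_THRESHOLD, BASE_QUORUM

def should_publish(lang: str, active_raters: int, helpful_ratio: float,
                   n_ratings: int, cross_cluster: bool) -> bool:
    threshold, quorum = publication_bar(lang, active_raters)
    # The bridging (cross-cluster) requirement stays mandatory; only the
    # ratio and quorum are relaxed during bootstrapping.
    return helpful_ratio >= threshold and n_ratings >= quorum and cross_cluster
```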
