In today’s globalized world, young people turn primarily to social media for news. In Malaysia, TikTok has become a popular digital platform for information-sharing: figures indicate that TikTok had 19.3 million users aged 18 and above in the country in early 2025. Notably, three out of four Malaysian internet users rely primarily on social media for news, underscoring TikTok’s pervasive influence on public discourse.
This boom reflects more than shifting media habits, however; platforms like TikTok actively fuel extremism and polarization. Malaysia’s TikTok-driven political rightward shift exemplifies this dangerous phenomenon and demands an urgent re-evaluation, within Southeast Asia and globally, of social media’s role in cultivating anti-democratic societies.
Malaysian politics experienced a sharp rightward shift starting in 2020, marked by a surge in support for conservative parties such as the Perikatan Nasional (PN) coalition, whose rise was fueled by online content promoting ethnonationalism and Islamic extremism. The correlation between social media trends and shifts in political opinion is no mere coincidence. Social media platforms have become low-cost megaphones for ethnoreligious narratives and conspiracy theories that inflame social anxieties.
The Roots of Malaysia’s “Green Wave”
Malaysia’s “green wave,” a conservative, Islamically oriented extremist shift, has deepened over the course of decades, stemming from divisions embedded in the Constitution. From its 1957 founding, the Constitution established Islam as the religion of the Federation and granted special privileges to Malays. These provisions, which reserved certain privileges for Muslim Malays, contributed to longstanding ethnic and religious tensions between Muslims and religious minorities, tensions readily co-opted into contemporary political concerns and grievances. Islam was integrated into the national identity as a means of securing domestic legitimacy in an increasingly Westernized global landscape, where bureaucratic modernization was seen as a marker of progress.
Conservative coalitions have strategically weaponized these structural imbalances in the digital age. The swift rise of PN, marked by overt appeals to Malay-Islamic supremacy, was fueled by the amplification of these long-standing ethnoreligious and socio-economic grievances through pro-Islamic social media narratives. This right-leaning propaganda effectively targets and marginalizes Malaysia’s significant ethnic Chinese, Indian, and indigenous minority communities through narratives that depict them as threats to national identity or economic stability.
The PN coalition rode the green wave by forging a united front of Malay-centric and Islamist parties. From holding only a small fraction of parliamentary seats in 2018, conservative parties within the coalition held 146 of the 245 total state assembly seats by 2023. They secured this victory by using platforms such as TikTok to create propaganda that lumped anti-liberal, anti-minority, and anti-establishment discourse into a single cause with broad appeal to the middle-class majority.
Role of Social Media Platforms
Strikingly, 84% of Malaysians desire more democratic values, and 62% believe that diversity makes the country a better place. Social media interactions tell a different story, however: platforms like TikTok reveal a concerning susceptibility to anti-democratic narratives. This contradiction is made possible by algorithmic amplification, which can subtly normalize extreme views or create an illusion of widespread support for them, influencing political shifts even as overt public support for democratic principles remains high. This dissociation between privately held values and pervasive online exposure carries dangerous implications for democratic stability.
TikTok has now become the conservative coalition’s prime avenue for political messaging, surpassing other media channels thanks to the ease with which it enables information-sharing and consensus manipulation. The danger accompanying the platform’s popularity lies in its recommendation algorithm, which creates echo chambers of information consumption. This self-reinforcing algorithm suggests similar content based on previous interactions, gradually shifting users toward increasingly hate-driven propaganda. It not only creates echo chambers but also contributes to the homogenization of problematic political views, often without users realizing they are subscribing to them.
Leveraging this very algorithm, PN’s campaign manipulated public consensus and drove higher rates of conservatism, orchestrating support through the platform’s own design. It did so primarily by manufacturing artificial support to generate an illusion of social visibility and perceived majority backing, using viral trend videos and hashtags to boost problematic discourse. This social influence was further amplified by largely uncensored right-leaning content creators. The use of social media to spread misleading information has thus become a key tactic for shaping public opinion and interfering with elections, threatening the fairness of the democratic process and giving those behind various social media accounts an unfair political edge.
Platform Accountability
The connection between misinformation and far-right extremism in Malaysia raises critical ethical questions about TikTok’s accountability for halting the spread of hate and preventing related political harm. Despite holding vast user data that powers its opaque algorithmic recommendations, TikTok’s existing community guidelines and warning labels often fail to catch subtle or coded harmful content. For instance, sarcastic or meme-based videos that mock minority groups can spread bigoted ideas under the guise of humor, while exclusionary trends, such as viral challenges that implicitly target non-Muslim or non-Malay communities, may not be flagged by automated tools.
A 2024 study on online harassment and hate by the Anti-Defamation League found that, globally, online propaganda and hate funnel users into echo chambers because of platforms’ overreliance on AI-driven moderation systems that struggle to intercept high-engagement, divisive material. Consequently, content moderation, despite community guidelines, often fails to keep pace with the sheer volume and evolving complexity of harmful content. This allows extremist narratives to bypass detection, find new ways to reach vulnerable audiences, and further contribute to the spread of intolerance and hate, demonstrating a critical failure in the platform’s ethical and social responsibilities.
In response, Malaysia has mandated that social media platforms with more than 8 million users obtain a license to operate, aiming to combat cybercrimes such as scams, cyberbullying, and the spread of harmful content. Platforms like TikTok and WeChat have complied with this requirement, while Meta is in the process of obtaining its license. While the licensing regime aims to moderate harmful content, it also raises concerns about overreaching limitations on freedom of expression. Ultimately, Malaysia’s licensing regime marks a pivotal shift toward enforcing platform regulation, but most of the burden still lies with tech companies to self-regulate.
To this end, platforms must take proactive steps toward accountability by investing in human moderation to detect the nuanced harmful content spurred by the “green wave” and by implementing transparent algorithmic changes that prioritize safety over engagement. Governments can also help by establishing more robust legislative frameworks that mandate algorithmic transparency and platform accountability, with substantial penalties for allowing harmful content to spread.
Social media companies can no longer hide behind illusions of neutrality. Malaysia has already taken steps in this direction; however, the measures adopted thus far toward online moderation may create new problems by giving the Malaysian government latitude to censor legitimate speech. Moreover, Malaysia’s failure to enact a single, comprehensive law addressing hate speech leaves gaps that tech companies must fill through proactive moderation and algorithmic reform. These companies must take responsibility for the engineered algorithms that produce real-world consequences, or we risk surrendering democracy to the curated chaos of the feed.
(Graysen Kirk is an intern at the Center for the Study of Organized Hate (CSOH), currently pursuing a Bachelor’s degree in Human Rights and Political Science at Columbia University in New York City.)