Recent rulings in the United States are challenging the longstanding assumption that Big Tech is effectively beyond the reach of meaningful accountability. In March 2026, juries in New Mexico and California found Meta and Alphabet liable in landmark cases over harms arising from their platforms, particularly to young people, including design features that drive addictive and compulsive use and exposure to harmful content. These are harms that researchers and activists have been sounding the alarm about for years. The recent decisions highlight how courts are beginning to treat platform power as a set of business models and design choices that can, and should, be scrutinized.
The violence that proliferates across social media platforms is as much a feature of deliberate architectural strategy as it is a content moderation failure. That strategy rests on the algorithmic amplification of high-engagement content, because engagement translates directly into ad revenue. Content moderation, without any intervention at the level of business model or algorithmic design, is a reactive measure that does not prevent harm. The question of user safety therefore cannot be separated from the economic logic that underpins these businesses.
The March 2026 rulings matter precisely because, for a long time, this level of scrutiny seemed out of reach. Tech companies have spent years accumulating a well-documented record of harm, including interference in democratic processes, amplification of content inciting violence against marginalized communities, the systemic spread of disinformation, and consistent prioritization of profit over user safety and fundamental rights. None of this is unknown to the public, as researchers around the world have spent their careers documenting the exact nature, applications, and impacts of these technologies. Yet every time these companies were called into courtrooms and Senate halls, they walked away with only a warning or a fine worth a fraction of a day’s revenue.
Tech companies have consistently argued that regulation stifles innovation, while positioning themselves as moving too fast for governments to catch up. That argument has been reinforced by a regulatory environment in which lawmakers often appear outpaced by the speed of technological change and exhibit a limited understanding of how these platforms and their systems actually function. As a result, the governance gap continues to widen: platforms operate at ever greater scale while the structures that produce known harms go largely uninvestigated.
This gap has been particularly visible in high-profile hearings before the US Senate with tech executives such as Mark Zuckerberg and TikTok’s Shou Zi Chew. Rather than probing business models, data extraction and exploitation, or algorithmic harm, lawmakers often ask questions that are surface-level or misdirected, reflecting a broader lack of preparedness to hold tech companies meaningfully accountable.
Against this backdrop, the recent rulings begin to feel significant: they suggest that courts may be more willing than legislatures to engage with the underlying economics of platforms, including how engagement-driven models shape user experience and harm. Rather than treating harmful content or addictive design as independent issues arising from users’ behavior online, these cases frame them as predictable outcomes of systems built to maximize attention and profit, outcomes that only reforms to platform infrastructure can meaningfully address. This shift, even if partial, is the beginning of more grounded forms of accountability.
Limits of Legal Victory
However, these decisions are not, in themselves, a solution to the deep-rooted problems that tech companies perpetuate on and off the internet. Legal victories against platform harm often take on a second life beyond the courtroom and do more than assign liability in specific cases. They set the direction for how future laws might be designed and enforced, and on what issues, while creating a legal and political vocabulary that governments can draw on. They also help citizens become more aware of their rights and of platform practices, and take a greater interest in how governments respond to these developments. In a sense, these rulings act as early templates for governance. What remains less clear is how these emerging frameworks will be taken up by governments in different political contexts, and what kind of power they will ultimately consolidate or challenge as a result.
In more rights-respecting contexts, such precedents can open the door to stronger accountability measures. But in others, the same language of “protecting users” and “preventing harm” can be used to justify far more expansive controls over digital spaces and civil liberties. These may include increased surveillance and censorship, stringent data controls, and broader powers to regulate speech.
So it is worth monitoring how the US responds to these decisions as it pushes for child-safety regulations awaiting Senate approval. The Kids Online Safety Act (KOSA), for instance, emphasizes a “duty of care,” which civil rights organizations such as the Electronic Frontier Foundation have flagged as a threat to rights by potentially enabling censorship and suppressing lawful speech. In addition, child safety regulations that have already been passed have direct implications for people’s privacy in digital spaces. Age verification, for instance, is gaining traction among both platforms and lawmakers.
Half of US states now require people to scan their ID and submit biometric data to access parts of the internet, creating a harmful censorship and surveillance regime that threatens people’s privacy. In light of this, the new rulings should prompt us to watch for policy impulses that shift responsibility onto users to prove their innocence and legitimate identity (curtailing the right to be anonymous), rather than compelling platforms to restructure their harmful business models or holding governments accountable for enforcing meaningful platform regulation.
This is where the distinction between accountability and control begins to blur. Governments should be pushing for structural changes: privacy-by-design, transparency into how algorithmic amplification works, limits on data extraction, and reforms to engagement-driven design. Instead, they often focus on mechanisms that expand their own authority over platforms and users alike. As a result, users’ speech, data, rights, and liberties become subject to increased scrutiny in the name of safety.
Exported Control
These developments must be read beyond the jurisdictions in which they emerge. Legal shifts in the Global North often travel quickly, shaping conversations on regulation and setting informal benchmarks for what accountability should look like in Global Majority countries. But as these frameworks move, they do not always carry the same safeguards or intent. Instead, they are often selectively interpreted, especially in contexts where governments are already inclined to expand control over digital spaces. In such cases, the language of platform accountability can be repurposed to legitimize greater state intervention in online speech and infrastructure, rather than to challenge corporate power.
For instance, in my years of advocating for reforms to online regulatory frameworks in South Asia, the United Kingdom’s Online Safety Act has been cited as a model worth considering. Similarly, during consultations on Pakistan’s proposed data protection legislation, the European Union’s General Data Protection Regulation (GDPR) has been cited as a gold standard. What these discussions fail to consider is that these frameworks are not only imperfect but have also caused documented harms within their own jurisdictions. The UK law, for example, has faced sustained criticism for undermining end-to-end encryption and for content moderation provisions that disproportionately impact marginalized communities.
In countries like Pakistan and India, existing legislation has been systematically used against women, gender and religious minorities, journalists, dissidents, activists, and lawyers. Importing regulatory models with known negative implications from significantly different political and social contexts therefore multiplies the harms of these supposed gold standards in jurisdictions they were never suited to or designed for. This kind of policymaking, common across many Global Majority countries, grants the state additional instruments of control over citizens. The question then is whose interests these laws serve when mirrored in a political environment where the primary threat to digital rights is the authorities themselves.
However, rejecting these frameworks as models does not mean rejecting regulation altogether. In fact, regulation is one of the few tools capable of imposing binding obligations on companies that have otherwise operated without meaningful accountability. Instead, it means demanding regulation that is built differently: regulation that begins with the communities most exposed to platform harm rather than with the institutional priorities of governments seeking greater oversight of digital spaces.
Meaningful accountability for tech companies would look like legally mandated transparency into how algorithmic systems are designed and what they are optimized for, enforceable limits on data extraction, structural prohibitions on engagement-driven design that is known to amplify violence and disinformation, and stronger safeguards against the promotion of harm, especially gendered violence and political polarization, across the world. These interventions address the business model directly rather than expanding the platforms’ or the state’s reach over users and their rights.
What happens next in the US will matter far beyond its borders. The recent rulings against Meta and Alphabet, significant as they are, might trigger a wave of reactive legislation in which governments move quickly to appear responsive to public concern without doing the harder work of targeting corporate structure and power. In many Global Majority countries, where US regulatory developments carry weight, this is particularly concerning: lawmakers may reach for the most visible and accessible elements of whatever emerges from these rulings, strip them of the safeguards that existed in the original context, and repurpose them to expand state control over digital spaces and the people who inhabit them. Accountability for platforms should not come at the cost of people’s civil liberties. This is why the conversation cannot be left to governments and corporations alone: civil society, advocates, and affected communities must be meaningfully engaged in defining what accountability means, who it is meant to protect, and what it must never be allowed to become.