Social media apps on a smartphone

For much of the 21st century, social media has been hailed as a revolutionary force, giving billions of people a platform to share ideas, organize movements, and challenge authority. From the Arab Spring to the #MeToo movement, platforms like Facebook, X (formerly Twitter), and YouTube have been instrumental in amplifying voices that might otherwise have been silenced. Yet as their influence has grown, so too has the scrutiny over their role in moderating content, deplatforming users, and shaping public discourse.

The question at the heart of the debate is simple: Should a handful of technology companies wield the power to decide what can and cannot be said online? For critics, the answer is an emphatic no. The ability to control the flow of information, they argue, is a power once reserved for governments, yet today it rests in the hands of private corporations driven by profit, algorithms, and shifting social pressures.

The New Gatekeepers

Social media platforms are no longer just tech companies; they are, in many ways, the gatekeepers of modern speech. Their algorithms determine what content is promoted and what fades into obscurity. Their policies decide which voices are allowed to speak and which are silenced. And their influence stretches far beyond national borders, shaping political conversations from Washington to New Delhi.

The 2024 U.S. presidential election has reignited concerns about the role these companies play in shaping democratic discourse. Earlier this year, X paid former President Donald Trump $10 million to settle the lawsuit he filed over his 2021 deplatforming. Although Elon Musk had reinstated Trump’s account after acquiring the company in 2022, the legal battle highlighted a deeper issue: Should private companies have the right to bar political figures from speaking on their platforms?

Musk himself has been an outspoken critic of what he calls the “censorship-industrial complex,” accusing major tech companies and governments of working together to suppress certain viewpoints. His tenure at X has been marked by a dramatic shift toward what he calls “free speech absolutism,” but critics argue that his decisions—such as reinstating accounts previously banned for hate speech—have created a platform rife with misinformation and extremism.

The Algorithmic Control of Public Opinion

Beyond outright bans, social media platforms exercise enormous power through their algorithms, which decide which posts are seen, shared, and promoted. The average user is not seeing an objective reflection of what people are saying online; they are seeing a curated feed, shaped by proprietary machine-learning models designed to maximize engagement.
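To make the mechanics concrete, consider a deliberately minimal sketch of engagement-based ranking, written here in Python. Everything in it is hypothetical: the fields, the weights, and the function names are illustrative stand-ins, since real platform ranking systems are proprietary, machine-learned, and vastly more complex. The sketch captures only the structural point that a feed is a sorted list, and the sort key is predicted engagement rather than chronology or accuracy.

from dataclasses import dataclass

@dataclass
class Post:
    author: str
    text: str
    p_click: float  # hypothetical model prediction, 0.0 to 1.0
    p_share: float  # hypothetical model prediction, 0.0 to 1.0

def engagement_score(post: Post) -> float:
    # Hypothetical weights; in production these would be learned and
    # tuned toward whatever metric the platform chooses to optimize.
    return 1.0 * post.p_click + 2.5 * post.p_share

def rank_feed(posts: list[Post]) -> list[Post]:
    # Highest predicted engagement first: the "curation" users actually see.
    return sorted(posts, key=engagement_score, reverse=True)

candidates = [
    Post("alice", "Calm policy explainer", p_click=0.10, p_share=0.02),
    Post("bob", "Outrage-bait hot take", p_click=0.40, p_share=0.30),
]
for post in rank_feed(candidates):
    print(f"{engagement_score(post):.2f}  {post.author}: {post.text}")

Even in this toy version, the provocative post outranks the sober one because shares are weighted heavily; that dynamic, scaled to billions of users and far subtler models, is what critics of algorithmic curation object to.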

This “algorithmic curation” has profound effects on public discourse. A recent essay in The Atlantic describes how social media users are increasingly trapped in “algorithmic cages,” their views reinforced by content tailored to confirm their biases. Conservatives see conservative content. Progressives see progressive content. The result is a fragmented public square in which shared reality becomes increasingly elusive.

This phenomenon extends beyond political discourse. During the COVID-19 pandemic, platforms scrambled to moderate misinformation about vaccines and treatments. Facebook partnered with public health organizations to promote authoritative information, while YouTube and Twitter removed content that contradicted official guidelines. But as scientific understanding evolved, some topics that had been labeled misinformation, such as discussion of the virus’s origins, later came to be recognized as legitimate areas of inquiry.

The challenge, then, is not simply about moderating harmful content; it’s about determining who gets to define what is “harmful” in the first place. And when those decisions are made by opaque algorithms and corporate policy teams, accountability becomes a serious concern.

Government Pressure and Free Speech

Even as platforms claim to be independent arbiters, they are not immune to government influence. In recent years, there has been increasing scrutiny over the extent to which social media companies collaborate with state actors to shape content policies.

Documents released as part of the “Twitter Files” series, internal company records that X made available to independent journalists beginning in late 2022, suggest that federal agencies, including the FBI and the Department of Homeland Security, played a role in content moderation decisions. While some argue that this collaboration is necessary to combat threats like foreign disinformation campaigns, others see it as a dangerous encroachment on free speech.

Internationally, the relationship between Big Tech and governments is even more fraught. The European Union’s Digital Services Act, which became fully applicable in February 2024, imposes strict regulations on how platforms moderate content. The law requires platforms to remove illegal content swiftly, but critics worry that it will lead to overzealous censorship, particularly of political dissent.

The U.S. government, meanwhile, has been vocal in its opposition to these regulations. The Trump administration has argued that the EU’s policies disproportionately target American tech firms, and some U.S. lawmakers have called for retaliatory measures. But at home, there is no consensus on what the role of government should be in regulating social media. Some, like Representative Jim Jordan, have aligned themselves with tech executives like Musk, advocating for minimal intervention. Others, including Senator Elizabeth Warren, argue that social media companies have become too powerful and should face stronger antitrust regulations.

The Future of Online Speech

As social media platforms continue to shape global discourse, the debate over their role in free speech is unlikely to subside. The landscape is shifting rapidly, with new platforms emerging and older ones adapting to political and social pressures.

The fundamental issue remains unresolved: If social media is the modern public square, who should be in charge of regulating speech? Should it be governments, with all the risks of state censorship? Should it be tech CEOs, with their ever-changing policies? Or is there a way to create a system that protects open discourse without enabling harm?

The answers will have lasting consequences not just for digital platforms, but for the very nature of free speech in the 21st century.