Will the Supreme Court End Social Media as We Know It This Week?

The court will determine whether social networks’ algorithmic recommendations get the same legal protections as individuals’ posts.

Photo: Chip Somodevilla (Getty Images)

The Supreme Court’s ruling on a pair of ISIS terrorism cases this week will rest on the nine justices’ interpretation of 26 words written in 1996 that collectively have come to define the nature and scope of modern online expression. The ruling could fundamentally alter the types of content social companies are held legally liable for and could force them to re-examine the ways they use recommendation algorithms to serve users content. Though that sounds esoteric, tech companies being liable for your posts would drastically change your everyday experience on social media.

Those 26 words, officially known as Section 230 of the Communications Decency Act, have been called the “backbone of the internet” by supporters and an overly broad digital alibi that hamstrings accountability by opponents. In a nutshell, Section 230 both prevents online platforms from facing lawsuits when one of their users posts something illegal and shields them from legal liability for moderating their own content. In 2023, Google, Meta, Amazon, and Twitter’s ability to boost certain content, curate stories, downrank harmful posts, or ban belligerent assholes without constantly looking over their shoulders for a barrage of multi-million-dollar lawsuits—that’s all thanks to 230.
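
To make that concrete, here’s a minimal, purely illustrative sketch of the kind of feed-ranking logic 230 currently shields. Everything in it (the Post fields, the weights, the downranking rule) is a hypothetical stand-in, not any platform’s actual system.

# Illustrative sketch of the kind of feed ranking Section 230 shields today.
# All fields and weights here are hypothetical, not any platform's real system.
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    text: str
    engagement: float      # predicted likes/shares from some upstream model
    flagged_harmful: bool  # set by moderators or an automated classifier

def rank_feed(posts: list[Post]) -> list[Post]:
    def score(post: Post) -> float:
        s = post.engagement
        if post.flagged_harmful:
            s *= 0.1  # downrank instead of delete: a moderation choice 230 protects
        return s
    return sorted(posts, key=score, reverse=True)

# Boosting, curating, and downranking are all editorial decisions. If
# recommendations lose 230 immunity, the mere act of ordering this list
# could create liability for whatever the posts contain.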

“This decision could have a devastating impact on online expression,” Alexandra Reeve Givens, president and CEO of the Center for Democracy and Technology, said in a statement sent to Gizmodo.

Gonzalez v. Google and Twitter v. Taamneh

Two recent SCOTUS cases, Gonzalez v. Google and Twitter v. Taamneh, both focus on how social media companies handle terrorist content and on whether they are liable for it, both under Section 230 and under the Anti-Terrorism Act. The first case stems from a lawsuit filed by the parents of Nohemi Gonzalez, a 23-year-old college student killed in a brutal 2015 Paris ISIS attack that left 129 people dead. Gonzalez’s parents sued Google, alleging it aids and abets terrorists like the ones responsible for their daughter’s death when YouTube’s recommendation algorithm promotes videos created by terrorists. YouTube removes most terrorist content quickly, but like any platform, it can’t catch every example.

The Taamneh case, brought in the wake of a 2017 ISIS attack, alleges Twitter aided and abetted terrorism by failing to sufficiently take down ISIS content on its platform. The petitioners in both cases are trying to convince the court that Section 230 doesn’t apply to the algorithmic recommendation of posts on social networks.

“A Court decision excluding ‘recommendations’ from Section 230’s liability shield would sweep widely across the internet, and cause providers to limit online speech to reduce their risk of liability, with harmful effects for users’ ability to speak and access information,” said Reeve Givens.

Google maintains it is immune from liability under Section 230. So far, two courts, a federal district court in California and the 9th Circuit Court of Appeals, have sided with Google. Rival platforms like Meta have called a temporary truce with Google and publicly supported its argument, telling courts that the sheer volume of content flooding the internet makes recommendation algorithms a basic necessity for online communication. Others, like the Electronic Frontier Foundation, have compared recommendation algorithms to the digital equivalent of newspapers directing readers toward certain content. The EFF still favors leaving Section 230’s protections in place, though.

Supporters of Section 230, a group that includes nearly all tech platforms, say the protections as currently understood have helped upstarts flourish and are a key reason the U.S. is home to the largest, most successful online platforms on Earth. A growing number of critics and lawmakers from both sides of the political aisle, however, feel it has given platforms either too much leeway to leave up horrific content or too much power to unilaterally silence certain voices and control online speech. In general, Democrats want platforms to take down more content, while the right, as showcased incredibly ineptly by newly minted Twitter CEO Elon Musk, wants to keep more content up. Former President Donald Trump and Texas Senator Ted Cruz have both called for shaking up 230. So have Joe Biden and Minnesota Senator Amy Klobuchar. Trump, however, moderates his own social network, Truth Social, just like the other players in the game.

“The Court needs to understand how the technology underlying online speech actually works, in order to reach a thoughtful ruling that protects users’ rights,” Reeve Givens added.

How could the internet change if SCOTUS guts Section 230?

Put plainly, the Supreme Court’s decision here could radically alter the way content is moderated online and how everyday users experience the internet. Supporters of the status quo, like the Center for Democracy and Technology, say a ruling in favor of the petitioners could have trickle-down effects for a wide range of companies throughout the web, not just large social media platforms. Under that new framework, search engines, news aggregators, e-commerce sites, and basically any website that serves content to users could face increased liability, which could cause them to severely limit the amount of content they serve.

“The court could easily take this, and then rule in ways that affect big questions not actually raised by the case,” Stanford Cyber Policy Center Platform Regulation Director Daphne Keller told Axios last year. “It could mean news feeds get purged of anything that creates fear of legal risk, so they become super sanitized.”

Others, like the EFF, worry this could force companies to engage in severe self-censorship. Without strong 230 protections, they say, social media companies may opt to simply avoid hosting important but potentially controversial political or social content. In an extreme example, a platform could scrub any content using or related to the term “terrorist” simply to avoid being implicated under anti-terrorism laws.
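
A crude keyword filter shows how blunt that kind of defensive scrubbing would be. This hypothetical example can’t tell propaganda from journalism; both mention the banned terms, so both get removed.

# Hypothetical defensive filter a liability-wary platform might deploy.
BANNED_TERMS = {"terrorist", "terrorism", "isis"}

def scrub(posts: list[str]) -> list[str]:
    # Drop any post mentioning a banned term, regardless of context.
    return [p for p in posts if not any(t in p.lower() for t in BANNED_TERMS)]

posts = [
    "ISIS claims responsibility for the attack",      # propaganda
    "Breaking: police arrest terrorism suspect",      # news coverage
    "I survived the Paris attack. Here is my story.", # survivor account
]
print(scrub(posts))  # only the survivor account remains; the news is gone too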

On the flip side, much to conservative lawmakers’ chagrin, the ruling could also push platforms to remove certain types of speech so aggressively that they over-enforce. Platforms could abandon using ranking algorithms to serve up content altogether, which could make it much more difficult for users to find relevant information and make already grating online experiences even worse.
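
If ranking itself becomes a legal risk, the safest fallback is a purely reverse-chronological feed. A hypothetical sketch of what that looks like:

# Hypothetical fallback: no scoring, no curation, no "recommendation" for a
# court to scrutinize. Just newest-first, with relevance and safety ignored.
from datetime import datetime

def chronological_feed(posts: list[tuple[datetime, str]]) -> list[str]:
    return [text for _, text in sorted(posts, reverse=True)]

feed = [
    (datetime(2023, 2, 20, 9, 0), "older post you actually care about"),
    (datetime(2023, 2, 21, 8, 30), "newest post, relevant or not"),
]
print(chronological_feed(feed))  # newest first, relevance be damned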

Supporters of the plaintiffs focus on more immediate fears. Big Tech’s arguments are nitpicky and theoretical, they say, and fail to adequately acknowledge present, real-world harm. Recommendation algorithms inflame that harm, they argue, which is exactly why 230’s scope needs to be reined in. The nine justices themselves have been tight-lipped on tech.

“As the internet has grown, its problems have grown, too. But there are ways to address those problems without weakening a law that protects everyone’s digital speech,” an EFF policy analyst wrote. “Removing protections for online speech, and online moderation, would be a foolish and damaging approach. The Supreme Court should use the Gonzalez case as an opportunity to ensure that Section 230 continues to offer broad protection of internet users’ rights.”