
How the Supreme Court Could Silence the Internet

This week, the Supreme Court heard Gonzalez v. Google, a case that could radically undermine one of the Internet’s most sacrosanct laws: Section 230 of the Communications Decency Act (47 U.S.C. § 230). Reynaldo Gonzalez sued Google after his daughter died in a 2015 ISIS attack in Paris. He claims that Google promoted ISIS recruitment through its YouTube algorithm, which targeted and suggested ISIS videos to users interested in that content. Gonzalez alleges that Google thereby helped create the infrastructure for terrorists to recruit, communicate, and inspire further attacks.

While Section 230 has flaws and requires reform, there is a real fear that the Supreme Court could go too far in stripping away its protections, severely harming the way most of the Internet functions. In this piece, I argue that Section 230 should be amended, rather than repealed outright, to impose a “duty of care” standard on online service providers.

Section 230

Section 230 protects websites from being held liable for content their users post. Most saliently, the law states that “[n]o provider or user of an interactive computer service [i.e., a website] shall be treated as the publisher or speaker of any information provided by another information content provider.” If, for instance, a user posts a defamatory story on a platform and the website hosting that story is sued, Section 230 shields the website from liability. The law recognizes a difference between the user posting content and the website hosting it; the website is merely a messenger, not the speaker.

This law has been heralded as the cornerstone of growth for many Big Tech companies since its enactment in 1996. Websites such as Facebook, Twitter, Reddit, and YouTube have been able to focus on innovation while relying on Section 230 to shield them from a flurry of lawsuits over content posted on their platforms. Many have criticized these sites, saying their content moderation has been either too harsh or too lackadaisical, and those criticisms have now come to the forefront in Gonzalez.

The Implications of Gonzalez

Calls for the reform of Section 230 have been echoing on both sides of the political spectrum. Some say Section 230 lets companies remove too much content, while others say that companies monitor too little; either way, many are unhappy with how the law is currently applied. Politicians are not the only ones voicing discontent: Justice Thomas, for instance, has written on numerous occasions about reconsidering the scope of Section 230. In particular, Justice Thomas argues that these websites should be treated as “common carriers” and that Section 230 may raise First Amendment concerns by pre-empting state laws. His perspective may finally take root in Gonzalez.

Many fear that Section 230 will be severely restricted in its protection, or even repealed outright, by the conservative-leaning Supreme Court. Take Reddit, for example: users can interact with posts by upvoting or downvoting them, and upvoting makes content more easily viewable by others. If the Court were to further restrict or repeal Section 230, websites like Reddit would need to alter their entire infrastructure, becoming more risk-averse about potentially “dangerous” content that their algorithms may recommend to users. In other words, websites would be far more aggressive in taking down user posts, which could lead to harsher treatment of controversial speech or speech by historically marginalized users.

Gonzalez will consider what exactly it means to “recommend” content. While Reddit’s upvote-and-downvote system is easy to grasp, the question also implicates websites such as YouTube and TikTok that rely on opaque algorithms to recommend content. If a user watches “illegal” content on YouTube, such as the ISIS videos at issue in Gonzalez, does the website actively “recommend” that content to other users when its algorithm surfaces it? And if YouTube is to be held accountable, where should the line be drawn between YouTube acting as a publisher versus a recommender?

The eventual majority opinion in Gonzalez, or any Congressional reform, may establish new guidelines for interpreting the law that are inconsistent with its underlying principles of fostering the proliferation and flexibility of Internet innovation and engagement. That fear would be realized if the Court chooses to weaken Section 230 or repeal it altogether, taking a view that diverges from the goals Congress initially set.

Recommendations

Section 230’s core principle of encouraging innovation on the Internet should not be forgotten. In my opinion, the law has done well in stimulating open discussion and honesty in online engagement. Websites as large as Facebook and Twitter, despite their immense resources, cannot practically monitor each individual post. And even the most sophisticated algorithmic arbiters will let illegal content slip through the cracks while, worse, removing harmless content.

At the same time, however, I believe sites have been given too much leniency under the current law. Even Mark Zuckerberg, founder of Facebook, has stated that some revisions to Section 230 should be considered. In my view, the most sensible alteration would be to revise Section 230 to impose a “duty of care” on these sites in moderating content. This would retain flexibility in user content while holding sites to a heightened standard of monitoring to ensure illegal activities, ranging from underage pornography to terrorism, are purged.

A “duty of care” standard has merit for two reasons. First, it accounts for the size of the websites involved. A “reasonableness” standard would permit courts to calibrate their decisions on monitoring to a site’s size; for instance, a startup would not be held to as rigid a standard as Instagram. Second, it would encourage sites to be more proactive in monitoring their content, since under such a standard Section 230’s shield would not protect blatant negligence or the willful facilitation of harmful content. Because this reasonable duty of care has its roots in the common law and has been recognized by other courts, there is a chance the Supreme Court itself will find such a standard embedded in Section 230.

Any reform of a law this paramount is precarious. The changes must be precise: the end goal is to make websites more accountable while meticulously ensuring the liberties that Section 230 offers remain intact. Whether that will come through Congressional or judicial action is unclear, but if the Supreme Court does decide to limit the law, its limitation should be narrow; a limitation too broad could create more harm than good by restricting what speech Internet users can openly express without fear of prosecution.

John Eagle Miles

Georgetown Law Technology Review Senior Notes Editor
Georgetown Law, J.D. expected 2023; University of South Carolina, B.A. 2019.