Why Social Media Platforms Should be Allowed to Moderate Content: An Analysis of the Circuit Split Arising from Recent First Amendment Litigation

Social media platforms’ content moderation—including censorship—should qualify for First Amendment protection. A recent circuit split between the Eleventh and Fifth Circuits brought these key issues to the forefront by demonstrating the tension between the free speech rights of platforms and those of their users, raising concerns about platforms’ power to shape public discourse and the government’s ability to bridle that power. In the 2023 term, the Supreme Court granted certiorari to hear NetChoice v. Paxton, in which the Fifth Circuit upheld a Texas statute that prohibited social media platforms from censoring users or their posts based on viewpoint or geographic location. Conversely, in NetChoice v. Attorney General of Florida, the Eleventh Circuit struck down a similar Florida law that prohibited platforms from censoring or promoting speech by or about political candidates, finding that it impermissibly compelled private companies’ speech.

The Supreme Court should resolve this circuit split by affirming the Eleventh Circuit and reversing the Fifth Circuit for two reasons. First, the Texas law unconstitutionally interferes with platforms’ First Amendment-protected editorial discretion in content moderation. Second, Section 230 arguably also preempts the Texas law.

I. The Fifth Circuit mistakenly held that platforms’ content moderation decisions are unprotected under the First Amendment.

In Paxton, the Fifth Circuit held that platforms’ content moderation decisions constitute neither speech nor editorial discretion. Further, the court found that the First Amendment does not protect editorial discretion unless other speech rights are implicated. It reasoned that the platforms’ decisions to “censor”—including blocking, banning, deplatforming, demonetizing, de-boosting, and denying visibility—are at best expressive conduct, rather than speech, because the platforms are eliminating speech rather than promoting or protecting it. The Fifth Circuit came to the narrow conclusion that the statute, Texas House Bill 20, merely regulates the platforms’ conduct. It acknowledged that while the state cannot compel speech, it “can regulate conduct … to host, transmit, or otherwise facilitate speech.” It found that this regulation of conduct does not compel the platforms to speak, prohibit them from speaking, or otherwise force them to accommodate messages in a way that affects their speech rights. According to the court, platforms have a “virtually unlimited” ability to host others’ speech; therefore, the statute does not compromise their speech rights.

The court’s decision to classify content moderation as conduct rather than speech is unpersuasive. The court mistakenly concluded that because platforms use algorithms to screen for spam or obscene content and only moderate content post-publication, platforms are not exercising editorial discretion. Additionally, it asserted that platforms’ moderation decisions are not speech because, under Section 230 of the Communications Decency Act—a federal statute that immunizes platforms from liability for third-party posts on their sites—Congress determined that platforms should not be treated as publishers or speakers of third-party content. The court concluded that even if platforms’ “censorship” decisions are exercises of editorial judgment, the First Amendment does not protect their discretionary choices. The Fifth Circuit interpreted Supreme Court precedent to treat editorial discretion as a “relevant consideration” rather than its own category of First Amendment-protected expression.

II. The Supreme Court should find that the Fifth Circuit erred in upholding the Texas statute.

The Fifth Circuit’s holding is incorrect on several fronts. First, content moderation constitutes both editorial discretion and speech. Whether a platform’s content moderation team makes decisions directly or relies on algorithms to assist its moderation is irrelevant; the platform still decides which third-party content to present to users. The Fifth Circuit mistakenly characterized these decisions as mere censorship, ignoring the other facets of content moderation. The panel failed to grasp a key distinction between censorship and moderation: platforms’ content moderation activities extend beyond mere censorship to the specialized curation and presentation of content to users. Platforms express their own point of view when they make hosting determinations and present curated compilations of others’ speech.

Second, the First Amendment protects platforms’ editorial discretion. As Fifth Circuit Judge Southwick noted in his partial concurrence and dissent in Paxton, the Supreme Court cases considered by the majority recognize editorial discretion as First Amendment-protected activity, not merely a “relevant consideration” in assessing regulations. For example, in Miami Herald Publishing Co. v. Tornillo, the Court stated that it did not see “how governmental regulation of this crucial process” of editorial control and judgment was consistent with the First Amendment. And in Turner Broadcasting System v. FCC (1994), the Court stated that cable operators “engage in and transmit speech” when they exercise editorial discretion to select programs and stations. Like Judge Southwick, the Eleventh Circuit interpreted these cases to recognize editorial discretion as a First Amendment-protected category of speech. Because this precedent clearly acknowledges editorial discretion as a protected category, the Court should find that the platforms’ moderation is protected as well.

Lastly, Section 230 preempts Texas HB 20 by providing platforms with immunity for their content moderation decisions. Section 230(c)(2) states that platforms shall not be held liable for “any action voluntarily taken in good faith to restrict access to or availability of material” that the provider or user considers objectionable or inappropriate, even if the content is constitutionally protected. Subsection (e)(3) provides that no state or local law may impose liability inconsistent with Section 230. The Fifth Circuit pointed to Section 230 to reinforce its argument that platforms are not engaged in First Amendment-protected expression. However, the ruling failed to reconcile how Texas HB 20, which establishes liability for specific content moderation decisions, is consistent with Section 230.

Admittedly, the Fifth Circuit offers a convincing policy argument for limiting platforms’ censorship abilities: platforms serve as the main stage for connection and speech in today’s society. Because they hold so much power to shape public discourse, their content moderation decisions may be viewed as a threat to individuals’ First Amendment rights. Protecting individuals’ freedom to engage in public discourse is important. However, governmental interference with platforms’ content moderation decisions violates the very rights that Texas HB 20 sought to protect. While this interference is spurred by concerns that certain platforms hold too much power over large swaths of citizens’ speech, the Fifth Circuit’s majority fails to recognize that users who are unhappy with a platform’s algorithms and content moderation policies are free to use other platforms.

Conclusion

Platforms engage in editorial decisions, protected by the First Amendment, when they moderate third-party content. As such, platforms should be free to choose what content appears on their sites, and users should be free to choose which platforms they use based on those policies. While promoting free speech principles is undoubtedly admirable, state governments should remember that they are tasked with not abridging speech, not with abridging some entities’ speech to promote others’.

Lana Wynn

GLTR Staff Member; Georgetown Law, J.D. expected 2024; Texas Christian University, B.S. in Journalism: News and Media Studies 2019.