Should Social Media Go the Way of Cigarettes? Addressing Evidence of Consumer Harm

Social media use is this generation’s equivalent of smoking: an addictive and unhealthy vice that ravaged society before regulators thought to act. Though its use can be pleasurable and even helpful, social media’s ill effects suggest that a stronger standard of product liability may be appropriate.

That stronger standard may be on its way. This past summer, the U.S. Surgeon General, Vivek Murthy, issued an advisory warning that social media poses a “profound risk of harm to the mental health and well-being of children and adolescents.” While the advisory noted some positive effects of social media and took care to emphasize the need for more research, it should still provide an impetus for a change in public policy posture towards a ubiquitous product that has been subjected to negligible liability or oversight.

The U.S. government’s formal acknowledgement of a popular product’s negative health effects and habit-forming tendencies—both documented by scholarship—has precedent. However, far from presenting a solution to this public health crisis, Congress inadvertently fomented it by enacting Section 230 of the Communications Decency Act (Section 230), the law that defined the Internet. Section 230, in its current form, effectively preempts litigation by obfuscating the nature of social media product liability. Yet the history of Big Tobacco indicates that such litigation may be critical to improving public health. Examining that history reveals that cumbersome and prescriptive regulation is not necessary to address a public health crisis: Congress’s role is merely to revise Section 230 to better reflect its original intent.

The Status Quo Is a Lot More Than “Passive Nonfeasance”

As it stands, the regulation of social media platforms is notoriously lax. Hoping to influence the direction of their own regulation, technology firms have largely taken it upon themselves to address distinct categories of concern, such as “privacy,” “screen time,” and “content moderation.” The emphasis on these isolated issue areas has effectively obscured the need to address social media harms more holistically.

Rather than impose affirmative obligations on social media platforms to protect mental health and well-being, Congress has effectively granted such platforms immunity from the liability that they could otherwise face for inflicting harm. Section 230 was enacted in 1996 to protect information technology infrastructure providers from defamation claims stemming from messages transmitted using their infrastructure. Subsection (c)(1) provides that “[n]o… interactive computer service shall be treated as the publisher… of any information provided by another.” This language has been construed to “confer broad immunity” in cases where a platform is understood to be publishing third-party content: the type of content that effectively constitutes social media in its entirety. The immunity from civil liability is virtually impenetrable; Section 230 has been successfully invoked as an affirmative defense against claims of facilitating sexual assault and of providing material support for terrorism. The upshot is that any attempt to hold platforms liable for harm resulting from their publishing of third-party content appears doomed to fail.

The controversial law easily survived a high-profile challenge last spring. In Gonzalez v. Google, the parents of a terror attack victim alleged that YouTube had knowingly provided substantial assistance to ISIS. The case was framed as a potential referendum on Section 230, but the Supreme Court declined to interpret the law at all, holding that the question need not be reached because the complaint failed to state a viable claim: under the plaintiffs’ construction of the “aiding and abetting” statute, the defendants would be liable for any act of terrorism ISIS committed, given the large extent to which ISIS activity is facilitated by Twitter and YouTube. The crux of the holding was a characterization of the platforms’ conduct as “passive nonfeasance,” insofar as no duty existed for the platforms to remove all terrorism-related content.

However, to construe social media’s role in such events as passive nonfeasance is to completely misunderstand the nature of social media.

To Fix Something Digital, First Understand Analog

The core premise of this argument bears repeating: enough research has accumulated that the U.S. Surgeon General issued an advisory identifying social media use as an urgent public health issue. Social media may be causing depression and anxiety, addiction, decreased life satisfaction, eating disorders, and more. There is a temptation to treat these negative effects as drawbacks of an otherwise “good thing” that needs to be used responsibly. The truth, however, is that these are not an array of side effects but a singular epidemic, attributable to a class of products that have been recklessly brought to market.

In the not-so-distant past, a medical consensus formed that a popular product was having a profound harmful effect on the public’s health: the cigarette. While regulation, such as the Surgeon General’s warnings, may have played a role in driving down smoking rates, products-liability litigation was an essential catalyst of that change. Beginning in earnest in the 1980s, plaintiffs went after Big Tobacco, ultimately holding the industry liable for hundreds of billions of dollars in damages. This movement culminated in the landmark 1998 Tobacco Master Settlement Agreement (TMSA). As evidence of the TMSA’s effectiveness in redressing a public policy problem, consider that youth smoking rates dropped from 36% in 1997 to 22% in 2003, and then to 6% by 2019. This is not only because Big Tobacco found itself on the hook for damages; the industry also agreed to a set of regulatory obligations as part of the settlement, including advertising restrictions, prohibitions on certain business practices, and the funding of anti-smoking organizations.

These actions were premised on a simple notion: you cause a health crisis, you pay for it. That premise is fair and intuitive, but the legal system took some time to warm up to it. Out of the hundreds of personal injury and wrongful death claims that began to be filed in the 1950s, not one plaintiff recovered. The reckoning was stalled in part by tactics parallel to those used by social media platforms today: pledges to self-regulate, announcements of “safer” products, and an emphasis on personal responsibility for use. These political and public relations campaigns softened opinion while Big Tobacco fought tooth and nail in the courtroom, wearing plaintiffs down and getting claims dismissed. Change arrived when plaintiffs shifted the emphasis of their claims from the injury inflicted to the industry’s intentional efforts to addict people to a product it knew, but denied, was harmful. By emphasizing deceitful business practices, rather than the harm done by a single cigarette or even to a single smoker, class action suits allowed the remediation of a systemic crisis, ultimately delivering a major win for public health.

Supporters of social media resist the analogy to Big Tobacco for good reason. As noted previously, Surgeon General Murthy’s advisory acknowledges that social media does have mental health benefits for certain individuals: a claim of health enrichment that Big Tobacco could never match. On the other hand, even a lesser harm gives cause for great concern when it is spread ubiquitously. Youth usage of social media is vastly more widespread than smoking ever was: 95% of 13–17-year-olds use it, whereas in 1964, only about 42% of American adults smoked.

Fast-forwarding from analog back to the digital problem at hand, observe the parallel: like Big Tobacco, Big Tech has continuously put out products that it knows are probably unfit for use by children and adolescents, yet it aggressively targets those demographics anyway. A Wall Street Journal investigation, “The Facebook Files,” confirmed that at least one social media firm (in fact, the biggest) knows far more than it admits about the various harms its products inflict. This is the key point: the products are causing harm, and the companies responsible know it, but they foster users’ addictions anyway.

Bad social media content did not cause the global mental health crisis any more than tobacco, the crop, caused millions of preventable deaths. Big Tech committed active malfeasance, not passive nonfeasance, by wantonly building and marketing content-targeting systems that addict their users to an inundation of stimulating content (some of it good, some of it bad, much of it neither) whose cumulative effect is to make millions feel worse than they would without the product, and worst of all about the prospect of missing out.

Revising Section 230 Would (Lucky) Strike a Balance

The principal damage that Section 230 has done is not that it grants immunity to publishers for hosting third-party content, but that it has come to stand for an unsophisticated understanding of social media as a mere publisher of third-party content. The reasoning of the Gonzalez case echoes the red-herring concept of “content moderation,” wherein the platforms are understood as moderators of message boards, encumbered only by a limited duty to attempt to remove overtly harmful posts. But these platforms are not a massive bulletin board or a digital town square. They are a product: a product composed of billions of pieces of content. Assessing liability in a piecemeal fashion, by scrutinizing individual pieces of content, would be tantamount to building tobacco liability cases around individual carcinogens. The sounder legal theory is to bring a case alleging that (1) the product contains elements that are damaging to one’s health; (2) the companies knowingly package and deliver those poisons together with an addictive substance; and (3) despite these risks, the product is knowingly marketed to vulnerable populations with inadequate warnings.

Section 230 complicates that legal theory in two ways. First, the statute effectively declares that social media platforms are not responsible for the hazardous materials contained within their products, which undercuts the first proposition of the argument. Second, on a more abstract level, Section 230 case law has badly mischaracterized the nature of social media. As argued above, this class of products must be recognized as exercising agency over information delivery; applying the statutory term “publisher” to social media platforms conflates user-targeting algorithms with the mere passing along of somebody else’s message. This naïve conception of social media platforms as publishers was visible in Gonzalez, even though the Supreme Court was construing an altogether different statute. YouTube’s role in radicalizing terrorists was not “doing nothing.” While YouTube did not make the videos, it not only hosted the content (for which it fairly enjoys Section 230 immunity) but also promoted that content via its recommendation algorithm. And it did more than suggest the content: it created a highly susceptible audience by collecting lonely individuals and addicting them to a product that steers them toward radical content while sapping their motivation.

Recently, versions of the legal theory advanced here have gained traction. On October 13th, a Los Angeles judge reached similar conclusions about the nature of social media, in spite of Section 230. On October 24th, 41 states and the District of Columbia brought suit against Meta for “cultivating addiction to boost corporate profits.” The parallel is evident: an emerging medical consensus appears to have spurred state attorneys general into action amid congressional idleness. Unlike Big Tobacco, however, social media platforms have been handed a powerful liability shield that could stifle the legal reckoning.

Congress can strike a balance between overbroad regulation, which risks quelling future innovation, and a status quo that obstructs lawsuits premised on social media harm. It can do so by clarifying Section 230: immunity for hosting third-party content does not confer on the platform immunity for the design or effects of the product that contains that content. By making that clarification, Congress would allow litigation to solve the problem, just as it did in the case of Big Tobacco.

Bryce Bennett

GLTR Staff Member; Georgetown Law, J.D. expected 2025; London School of Economics, MSc Political Science and Political Economy 2022; Indiana University, B.S. in Business 2017.