
Joe Rogan vs. Neil Young: A Tale of Misinformation, Content Moderation, and the First Amendment
“They can have Rogan or Young. Not both,” artist Neil Young stated in a letter he posted on his website and later deleted. Young asked Spotify to remove his music in protest of the platform’s streaming of Joe Rogan’s podcast. Spotify removed Young’s music on January 26 and released a statement saying, “We want all the world’s music and audio content to be available to Spotify users. With that comes great responsibility in balancing both safety for listeners and freedom for creators.”
In a subsequent letter, Young explained that his concern about the spread of Covid-19 misinformation on “The Joe Rogan Experience” (JRE) was sparked by an open letter from hundreds of public health professionals. That letter stated that Episode #1757 of JRE, featuring Dr. Robert Malone, promoted falsehoods about the Covid-19 vaccine, and it called on Spotify to develop a policy for handling Covid-19 misinformation on its platform. Young’s January 26 letter also asked other creators to remove their content from Spotify. Joni Mitchell, Nils Lofgren, Graham Nash, India.Arie, Failure, Roxane Gay, and Young’s former bandmates David Crosby and Stephen Stills have since announced that they would join Young in removing their music from Spotify.
Spotify’s CEO Daniel Ek responded with a letter of his own on January 30, announcing Spotify’s decision to publish its internal Platform Rules to help users understand how the company moderates content. Under the Platform Rules, Spotify added a content advisory to podcast episodes that discuss Covid-19, directing users to Spotify’s dedicated Covid-19 Hub. Ek said that seventy episodes of JRE were removed from Spotify at the request of Rogan himself, but Episode #1757 remains on the platform with a Covid-19 content advisory.
The situation raises the question of how online content should be moderated. Several celebrities and hundreds of public health professionals used their influence to shine a light on the spread of misinformation on Spotify. Despite Spotify’s response, most of Rogan’s podcast remains on the platform, and Rogan still has a $200 million, multi-year licensing deal. Should the onus be on users or celebrities to bring attention to content moderation problems online, should regulators step in to address the spread of online misinformation, or do online platforms bear sole responsibility for moderating content effectively?
Online platforms can legally remove certain content because they are not subject to the First Amendment requirement that “Congress shall make no law . . . abridging the freedom of speech.” However, government efforts to require platforms to moderate health misinformation are likely to violate the First Amendment. For example, Senator Amy Klobuchar introduced the Health Misinformation Act in the Senate, which would amend Section 230 of the Communications Decency Act to make online platforms liable for health misinformation posted by users during public health emergencies. Senator Klobuchar’s bill, along with any other law that effectively dictates content moderation policies, would likely be struck down as violating the First Amendment.
Though government efforts to dictate online content moderation policies are unlikely to succeed, governments can and should use social media to communicate with the public about the pandemic. Public health authorities should publish and promote understandable, credible information about the pandemic, provide transparent explanations for Covid-19 policy changes, consider the community’s diverse needs in online communications, and counter widespread misinformation with factual information. As of March 1, 2022, over 75 million people in the United States have been infected with the virus and over 940,000 deaths are attributable to the disease, so it is essential that people be aware of and follow public health guidelines to limit the spread of the virus.
Online platforms play an important role in shaping the dialogue surrounding the pandemic. Given that more than 80% of Americans access the news on digital platforms, online platforms must hold themselves to a higher standard and recognize their responsibility to promote true and accurate information about the pandemic. In this case, it was unacceptable that Spotify failed to clarify its Covid-19 misinformation policy until Neil Young called it out. However, celebrity pressure on Spotify led to tangible changes in the platform’s policies, even though not all of the boycotters’ demands were met.
Online platforms are ultimately profit-seeking corporations that may lack incentives to establish effective content moderation policies on their own initiative. In the saga between Rogan and Young, it is no wonder that Spotify decided not to de-platform one of its most popular creators in the name of preventing misinformation. However, platforms will only elevate controversial content and misinformation if there is widespread demand for it. In the future, members of the public should be more conscious of the online content they consume and should pressure platforms to adopt thorough content moderation policies. Ultimately, online misinformation is a problem that will require a cooperative solution among platforms, the government, and users.
Caroline Kraczon
Caroline Kraczon; GLTR Assistant Notes Editor; Georgetown Law, J.D. expected 2023; University of Georgia, A.B., M.P.A. 2020.