Renewed Focus on Social Media’s Role as Regulator of Speech
When the New York Post published an article highlighting controversial corruption claims against then-candidate Joe Biden three weeks before the presidential election, social media giants were swift, albeit inconsistent, in limiting the article’s dissemination. Twitter took the most aggressive action, blocking all links to the Post article within hours of its publication on the ground that it violated Twitter’s hacked materials policy. For its part, Facebook opted for a more discreet approach, using its algorithm to reduce the story’s visibility in news feeds, citing its desire to limit the article’s propagation until it could be validated by third-party fact-checkers. YouTube took no action against the article.
Only two days after its ban, Twitter reversed course amid mounting public criticism, allowing users to share the article on its platform. Although Facebook did not walk back its decision on the Post article, the world’s largest social media site has been far from consistent on related matters. On September 3, Mark Zuckerberg announced that Facebook would not make changes to its election-related policy until after the official results were final. Yet in October, before the election, Facebook banned Holocaust denial content, QAnon conspiracy pages, and anti-vaccination ads.
In many ways, the variability of responses displayed in this episode is emblematic of the broader controversy around social media behemoths and their outsized influence on the flow of information. During the 2016 presidential election, Russia used Twitter and Facebook to spread false and inflammatory content intended to sway the outcome. Four years later, social media companies, eager to avoid further abuse and negative publicity, are still trying to get it right.
The censorship decisions surrounding the Post article faced immediate backlash from both sides of the political aisle. Republican members of the Senate Judiciary Committee called Twitter and Facebook’s decisions an “unprecedented election interference attempt.” Left-leaning journalists expressed concerns that the tech giants overstepped by blocking millions of users from accessing election-related information published by one of the country’s oldest and largest newspapers.
Social media platforms now face renewed scrutiny of their role in content moderation and calls for additional regulation. In October, the Department of Justice filed a lawsuit against Google for alleged violations of antitrust laws. Justice Clarence Thomas has suggested that a revision of Section 230 of the Communications Decency Act may be warranted, a change that would fundamentally alter the landscape of Silicon Valley. And most recently, the Senate Judiciary Committee subjected Jack Dorsey and Mark Zuckerberg to intense questioning about speech moderation in the period leading up to the 2020 election.
Section 230 Amendments on the Way?
Among the topics addressed by the Senate Judiciary Committee were possible amendments to Section 230 of the Communications Decency Act. Enacted in 1996, Section 230 insulates providers of “interactive computer services” from liability for user-generated content, treating them as distributors rather than publishers. Distributors merely pass information along to a wider audience and do not exercise the editorial functions of publishers. The statute thus effectively empowers social media sites to self-regulate content moderation and removal on their platforms. At the outset, most social media platforms employed a reverse-chronological feed, in which the user sees the latest post first. These companies eventually moved to algorithm-based feeds that surface certain “news” based on predictions of the user’s preferences. Although platforms have recently made efforts to return to a more “organic” feed, the process that directs the dissemination of information ultimately remains opaque.
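The distinction between the two feed designs can be made concrete with a toy sketch. The example below is purely illustrative: the field names and the engagement scores are hypothetical, and real platform ranking systems are proprietary and vastly more complex than a single sort key.

```python
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    timestamp: int               # posting time (e.g., Unix seconds)
    predicted_engagement: float  # hypothetical model estimate of user interest

posts = [
    Post("alice", 100, 0.10),
    Post("bob",   200, 0.90),
    Post("carol", 300, 0.30),
]

# Reverse-chronological feed: the newest post appears first.
chronological = sorted(posts, key=lambda p: p.timestamp, reverse=True)

# Algorithm-based feed: posts the model predicts the user prefers appear first,
# regardless of when they were posted.
algorithmic = sorted(posts, key=lambda p: p.predicted_engagement, reverse=True)

print([p.author for p in chronological])  # ['carol', 'bob', 'alice']
print([p.author for p in algorithmic])    # ['bob', 'carol', 'alice']
```

The point of the contrast is that the first ordering is mechanical and transparent, while the second depends entirely on an opaque scoring function chosen by the platform.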
Although policies to actively combat misinformation purport to protect election integrity, selective censorship by private companies has triggered alarm bells and led some to question whether a revision to Section 230 is overdue. The platforms exercised discretion in banning questionable content like the Post article on Biden. Although Twitter cited its hacked materials policy for that decision, its failure to censor President Trump’s hacked tax returns suggests that the policy is not applied evenly. Critics argue that social media companies censor in a discriminatory fashion and should be held to the same accountability as publishers.
Overall, the latest social media censorship effort did not achieve its desired goal of limiting the spread of potentially false information; instead, the move to block the New York Post article may have made more people curious about the controversial content. Nevertheless, there may be an unintended benefit: by coming out early and aggressively to say that there is something suspicious about the story, the social media companies shifted part of the conversation away from the article’s allegations and towards journalistic integrity, platform governance, and fact-checking. Although it is unclear whether the story or the censorship affected the election outcome, we can hope that the renewed focus on the role of social media giants in content regulation will yield greater transparency and provide the public more reliable information to guide decisions on the future of our country.
May Yang, GLTR Articles Staff Editor; Georgetown Law, J.D., expected 2021; University of Southern California, B.S. in Business Administration 2015