Eric Pait

Content Conundrum: Combating False Narratives in the Age of Social Media

Social media websites, such as Facebook and Twitter, have faced growing pressure over the “fake news” phenomenon ever since the 2016 presidential election. Immediately after the election, concern arose over several incidents from the campaign, such as the “Pizzagate” conspiracy, which culminated in a man opening fire in the Comet Ping Pong pizzeria in Washington, D.C.1

This issue was not isolated to elections, however, as conspiracy theorists rapidly spread hoaxes on social media after tragedies. Following the shooting at the Route 91 Harvest music festival in Las Vegas in October 2017, victims and their families faced a flood of viral posts and videos claiming they were “crisis actors” hired to pose as victims.2 This crisis-actor hoax arose again recently, as similar viral content flooded social media following the February 2018 shooting at Marjory Stoneman Douglas High School in Parkland, Florida. But this time something changed: Facebook and YouTube vowed to crack down on the misinformation campaign.3 Despite this promise, policing content has proven easier said than done, as those spreading the misinformation rapidly adapt to circumvent moderation efforts by changing hashtags or avoiding certain keywords that are flagged by the companies’ algorithms.4

This struggle to police fake and misleading content raises the question of how social media companies can effectively combat the spread of these posts and videos. Reacting to the flaws of relying on artificial intelligence (AI) algorithms as the sole method of moderation, Facebook has recently focused on its human content-moderation team: the company has hired 3,000 new employees in the last eight months, bringing its total number of human content moderators to 7,500.5 Unlike current AI systems, human moderators can bring a contextualized approach to content moderation; they can distinguish a post that merely discusses the misinformation campaign using a “crisis actor” hashtag from a post that is actually part of that campaign. Although human moderators may be ideal for handling nuanced issues that require this type of contextualized analysis, they struggle to keep pace with the volume and dynamic nature of an organized misinformation campaign.6
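To make that contrast concrete, consider a minimal, purely illustrative sketch in Python of the kind of keyword matching an automated filter relies on. The flagged terms and sample posts below are invented for this example; the point is only that a simple filter flags a post debunking the crisis-actor hoax just as readily as one spreading it, which is precisely the contextual judgment a human moderator can supply.

    # Purely illustrative sketch: a naive keyword filter for "crisis actor" content.
    # The flagged terms and sample posts are invented for this example.

    FLAGGED_TERMS = {"#crisisactor", "crisis actor"}

    def flag_post(text: str) -> bool:
        """Flag any post containing a flagged term, with no sense of context."""
        lowered = text.lower()
        return any(term in lowered for term in FLAGGED_TERMS)

    posts = [
        "Debunked: the 'crisis actor' rumor about the survivors is false.",
        "These students are paid crisis actors! #crisisactor",
    ]

    for post in posts:
        print(flag_post(post), "->", post)
    # Both posts are flagged (True), even though only the second spreads the hoax;
    # telling them apart requires the contextual judgment a human moderator brings.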

As such, automoderation tools are still necessary for combating viral hoaxes at scale. These tools enable a large volume of content to be reviewed quickly based on easily changed filtering guidelines, but they have often faced criticism for their shortcomings. When used to detect nudity in the past, Facebook’s automoderation tools have inadvertently removed otherwise innocuous content (in the case of the ACLU in 2013, a photograph of a statue with exposed breasts) with no readily available appeal process.7 London’s Metropolitan Police Service faced a similar issue when its digital forensics team discovered that its image-recognition software would occasionally mistake photographs of the desert for pornographic imagery.8 Consequently, some have advocated a different approach to combating widespread misinformation campaigns: localizing content. Rather than making judgment calls about what information individuals see, social media websites could prioritize content from a user’s local community in the newsfeed, thereby making information easier for individuals to verify and reducing the reach of national campaigns, especially when a poster from another state, or even another country, is trying to spread false information about local events.9
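The localization idea can likewise be sketched in rough, hypothetical terms. The Python snippet below uses an invented data model, region labels, and boost factor (not any platform’s actual ranking system) and simply multiplies the ranking score of same-region posts, so that a local firsthand account can outrank a more viral post from a distant or foreign source.

    # Hypothetical sketch of locality-boosted newsfeed ranking; the Post model,
    # region labels, scores, and boost factor are assumptions for illustration,
    # not any platform's actual ranking system.
    from dataclasses import dataclass

    @dataclass
    class Post:
        author_region: str   # e.g., "FL-Broward"
        engagement: float    # baseline ranking signal (likes, shares, etc.)
        text: str

    LOCAL_BOOST = 2.0  # assumed multiplier favoring same-region content

    def rank_feed(posts, user_region):
        """Order posts by engagement, boosting those from the user's own region."""
        def score(post):
            boost = LOCAL_BOOST if post.author_region == user_region else 1.0
            return post.engagement * boost
        return sorted(posts, key=score, reverse=True)

    feed = [
        Post("XX-elsewhere", 90.0, "Viral out-of-state claim about a local event"),
        Post("FL-Broward", 60.0, "Local reporter's firsthand account"),
    ]
    for post in rank_feed(feed, user_region="FL-Broward"):
        print(post.author_region, "|", post.text)
    # The local post (60 x 2.0 = 120) now outranks the distant viral one (90),
    # reducing the reach of out-of-state or foreign posters on local stories.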

Even as social media companies take the initiative to improve content moderation voluntarily, they still face potential pressure from regulators and legislatures. A group of Democratic senators introduced the Honest Ads Act, which would require web platforms to increase the transparency of the political advertisements they carry.10 Rather than waiting to see what comes of the Honest Ads Act in Congress, legislators at the state level have begun to address the issue themselves. In Maryland and New York, lawmakers are working toward passing their own ad-transparency requirements, similar to the Honest Ads Act.11 In California, lawmakers have set their sights on bots, pursuing legislation that would require companies to publicly label accounts that are known to be bots rather than human-operated.12 The European Union has also joined the fray, with the European Commission warning companies that if they do not improve their efforts to remove “extremist material,” such as propaganda from the Islamic State and other terrorist groups, it will consider taking legislative action to force their hand.13 Although these legislative efforts are not currently focused on the types of viral misinformation campaigns discussed above, they indicate that lawmakers and regulators at the state, national, and international levels are not afraid to seek legislative remedies if they are unsatisfied with the voluntary efforts of private companies.

Although efforts to moderate false or misleading content on social media may seem to raise more questions than they answer, the issue will remain at the forefront of legal and policy debates as legislatures and regulatory bodies wrestle with combating discrimination, extremism, and the infiltration of digital propaganda into local, state, and national elections.

GLTR Staff Member; Georgetown Law, J.D. expected 2019; University of North Carolina at Chapel Hill, B.A. 2014. ©2018, Eric Pait.