Amplify the Party, Suppress the Opposition: Social Media, Bots, and Electoral Fraud

Cite as: 4 GEO. L. TECH. REV. 447 (2020)

In the weeks preceding the 2016 United States presidential election, several images began making the rounds on Twitter. The graphics sought to “remind” Democrats that they could vote via text message. According to The Wall Street Journal, they were built to resemble genuine “get-out-the-vote material produced by Hillary Clinton’s campaign.”1 Many included the “Paid for by Hillary for President 2016” disclaimer that appeared in Clinton’s actual social media advertising. Some were in Spanish, targeting Latinx voters, while others included a photo of an African American woman holding an “African Americans for Hillary” poster. However, no U.S. state allows voting by text message.

Many such vote-via-text tweets and corresponding images that circulated during the 2016 U.S. elections were spread by anonymous accounts, which lacked any information, imagery, or content identifying the person behind them. Some accounts appeared to be spammers, with product advertisements interspersed among political messages. Many featured the hallmarks of social bots: automated social media accounts built to look like real users and to spread content. These “political bots,” so designated because most of their content was geared toward the manipulation of public opinion during a pivotal political event, worked to amplify the disinformative voting messages.2 The logic behind these political bots is simple: if ten human-run accounts can spread many messages over Twitter, then one thousand automated profiles can spread masses of them.


Samuel Woolley & Nicholas Monaco

Samuel Woolley, Assistant Professor, School of Journalism, University of Texas at Austin. Program director, propaganda research, Center for Media Engagement at UT. Ph.D., University of Washington.

Nicholas Monaco, Director, Digital Intelligence Lab at Institute for the Future. M.S., University of Washington.