Bad Facts Make Bad Law: How Platform Censorship Has Failed So Far and How To Ensure that the Response to Neo-Nazis Doesn’t Make it Worse
From Cloudflare’s headline-making takedown of the Daily Stormer to YouTube’s summer restrictions on LGBTQ content, 2017 was a banner year for platform censorship. Companies—under pressure from lawmakers, shareholders, the press, and some members of the public—ramped up restrictions on speech by adding new rules, adjusting their still-hidden algorithms, and hiring more staff to moderate content. They banned ads from certain sources and removed offensive but legal content, based mostly on complaints lodged by some users against others and on algorithms tracking those complaints. Groups, both formal and informal, embarked upon campaigns to convince platforms of all kinds, from Facebook to GitHub to GoDaddy, to drop neo-Nazi groups, just as other groups joined together to urge censorship in the opposite direction, complaining about those who complained about neo-Nazis and other hateful groups. Many pundits and traditional media voices urged online platforms to police the speech of their users even more aggressively and proactively.
But the demands were not limited to those for and against neo-Nazi groups. Censorship demands also impacted people of color and other marginalized groups far beyond that dispute, some as direct targets and others as apparent collateral damage. The Black Lives Matter movement was even included in an FBI report that suggested “black identity extremists” were an emerging kind of terrorist, setting that group up for more takedowns by the platforms that host its speech.
The emotional energy behind this pressure is understandable. In 2017 we saw offline violence by neo-Nazis who partially organized online. We saw increasing political polarization around the world. We saw ongoing and often escalating harassment of women and people of color online. Those peddling hate seem to be gaining power in the United States and around the world. It is not surprising that these events sparked fierce public calls for action in the online space.
Cindy Cohn, Executive Director of the Electronic Frontier Foundation. I appreciate that the organizers of this academic collection and conversation included me, a nonprofit practitioner. This piece is adapted and expanded from several blog posts that my colleagues at the Electronic Frontier Foundation and I have written over the past few months. It also draws on information from our co-sponsored website onlinecensorship.org, led by my colleague Jillian York. All good ideas probably came from others at EFF; all mistakes are my own. Big thanks also to EFF intern Sarah Glendon for her assistance.