Automated Content Moderation
The Internet has become an indispensable part of people’s lives in the 21st century. Users generate a staggering amount of data across major social media platforms, which is visualized in figure 1.1 Platforms like Facebook, YouTube, and Twitter allow billions of users to share their content freely. Not surprisingly, in response to strong demands from both government and society, moderating that content and removing unwanted material, such as hate speech and graphic violence, is essential for these platforms.2 James Grimmelmann, the Tessler Family Professor of Digital and Information Law at Cornell Tech and Cornell Law School, broadly defines content moderation as “the governance mechanisms that structure participation in a community to facilitate cooperation and prevent abuse.”3 Today, global platforms like Twitter and Facebook rely heavily on automated tools to curate the information generated by users around the globe. This technology explainer provides a brief overview of basic content moderation frameworks, followed by an introduction to automated content moderation technologies and a summary of their advantages and disadvantages.
Georgetown University Law Center, J.D. Candidate 2022; Claremont McKenna College, B.A. in Economics 2017 & B.A. in Government 2017