Meet the new ’verse, same as the old ’verse: Moderating the “metaverse”

In late October 2021, Facebook’s rebranding to Meta signaled Mark Zuckerberg’s intent to focus on the “metaverse” as the future of the social media giant. But an old problem still looms large for Meta: content moderation. There are no good solutions for moderating the “metaverse”: moderating too lightly will drive away users and may create legal problems for the company, while moderating heavily is a practical impossibility. All of which means that Meta’s $10 billion a year investment may be doomed from the start.

What is the “metaverse” anyway? Trying to answer this question today is akin to trying to define the internet in the 1970s. Technology is not yet far enough along for us to be certain of what the future holds. For our purposes, though, let’s define it as the vision Zuckerberg and Meta have presented so far in their own advertisements: a combination of virtual reality (VR) worlds and augmented reality (AR) that consumers can use to communicate with each other dynamically, share virtual art, or even just play a game of poker.

Content moderation has been a major problem for Meta for years, with widespread reporting on, and controversy over, the hate speech and misinformation rampant on Facebook, as well as the dire toll that moderation work takes on the moderators themselves. And despite the billions of dollars Facebook has spent, moderating the millions of pieces of content posted every day remains a nearly impossible task.

Given Meta’s struggles to moderate content effectively on its existing platforms, it is unlikely the company will manage to moderate the metaverse, and if it cannot, Zuckerberg’s bet on the metaverse will fail. The metaverse is even harder to moderate than Meta’s existing platforms because it takes the existing content moderation problems and amplifies them further. In a VR/AR world, a content moderator would have to oversee not only the content that people post but their behavior as well. That means monitoring and moderating what people say and do, which takes the already difficult problem of content moderation and turns it up to 11.

Taking a light-touch approach to moderation is not a viable option. For one, the harmful effects of abusive behavior are amplified in VR settings, as users can react physiologically and psychologically to VR experiences as if they were happening in real life. There have already been numerous reports of virtual groping, racism, and other abusive behavior in Meta-owned VR games and platforms. If abusive behavior becomes commonplace in the VR world, users will not want to be part of that world. Meta’s Chief Technology Officer, Andrew Bosworth, has stated that toxic environments, particularly for women and minorities, could be an “existential threat” to any metaverse plans, while acknowledging that creating safe VR worlds may be “practically impossible” at scale. Taking too light an approach to moderation could also create legal exposure under the EU Digital Services Act, which will require social media companies to remove harmful and illegal content once it is flagged. Similar requirements have surfaced in ongoing debates over Section 230 reform in the US, though those talks have stalled for now. Even if the status quo holds in the US, the EU’s rules will force Meta to take a stronger approach to moderation.

However, heavy moderation of the metaverse also will not work, especially as a practical matter. While Meta might like to rely on AI to filter VR/AR interactions, doing so would require every second of every interaction to be monitored and analyzed. Putting aside the unsettling privacy implications of such monitoring, it would demand enormous amounts of computing power, making it practically impossible. Meta would instead have to rely on a post-hoc reporting model, in which users report one another for abusive behavior. But that model will not be effective in the metaverse: even if the abuser is reported and banned, for the victim the damage has already been done. Meta might be in the clear legally under the EU Digital Services Act with such a model, but users will still be harmed, and will be turned off from the metaverse as a result. Being too aggressive with moderation also risks chilling speech and inviting accusations of censorship. In the US, Meta would be unlikely to face legal trouble for moderating too heavily, since the First Amendment applies only to government action. The EU, however, has voiced concerns in the past about social media platforms controlling public discourse, so heavy-handed moderation could become a concern there.
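To see why per-second AI monitoring breaks down at scale, consider a rough back-of-envelope sketch. Every figure below (concurrent users, per-second analysis cost) is a purely illustrative assumption, not a number from Meta or any source; the point is only how quickly the totals compound when every second of every interaction must be analyzed.

```python
# Back-of-envelope sketch: cost of monitoring every second of every VR
# interaction. ALL numbers are illustrative assumptions, NOT Meta's data.

CONCURRENT_USERS = 500_000        # assumed number of simultaneous VR users
SECONDS_PER_DAY = 24 * 60 * 60    # 86,400 seconds in a day

# Assumed cost to run speech-to-text plus toxicity analysis on one second
# of one user's audio, in GPU-seconds (hypothetical figure).
GPU_SECONDS_PER_AUDIO_SECOND = 0.05

# Total audio generated per day if every user is active around the clock.
daily_audio_seconds = CONCURRENT_USERS * SECONDS_PER_DAY

# Total GPU time needed per day to analyze all of it.
daily_gpu_seconds = daily_audio_seconds * GPU_SECONDS_PER_AUDIO_SECOND

# One GPU running flat out supplies 86,400 GPU-seconds per day, so this
# is the fleet size required just to keep pace with the audio stream.
gpus_needed = daily_gpu_seconds / SECONDS_PER_DAY

print(f"Audio to analyze per day: {daily_audio_seconds:,} seconds")
print(f"GPUs running 24/7 just to keep up: {gpus_needed:,.0f}")
```

Even under these charitable assumptions, the sketch lands on tens of thousands of GPUs running around the clock for audio alone, before any analysis of gestures, avatars, or shared virtual objects, which is the sense in which per-second monitoring is “practically impossible.”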

Over recent years, social media companies have found it difficult to strike the right balance in moderation. The metaverse will present a moderation challenge far beyond what current social media presents, and there are no good answers. Going too light will breed toxic environments and potential legal issues, while going heavy is a practical impossibility. Wherever Meta lands on the moderation spectrum, it would have been better served by not going to the metaverse at all.

Ryan Hsu

GLTR Staff Member; Georgetown Law, J.D. expected 2022; University of Virginia, B.A. 2015.