This January, Facebook updated its content policy covering “deepfakes.” Deepfakes range from the benign, such as modifications of scenes in the movie Forrest Gump, to the nefarious, like the massive volume of artificially created porn featuring female celebrities.
Facebook’s policy, which governs organic content, prohibits all media that has been edited in a way that “would likely mislead someone into thinking the subject of the video” said something the subject did not and that is the product of artificial intelligence or machine learning “that merges, replaces or superimposes content onto a video, making it appear to be authentic.” In other words, media must not only be edited in a misleading way, but also edited through the use of artificial intelligence or machine learning tools. As The Verge noted, simply adjusting the speed of a video or omitting a portion of a subject’s speech does not meet the bar for removal as manipulated media under Facebook’s policy, though content of this nature could be subject to removal on other policy grounds.
The Verge also pointed out that Facebook’s new policy would not result in the removal of the video that spread on social media last summer appearing to depict an intoxicated, disoriented, or possibly ill Speaker of the House Nancy Pelosi. In that case, the video was not the product of sophisticated actors merging pieces of content using advanced tools; instead, it was created merely by slowing down footage of the Speaker at an event. As reports of the video surfaced, Facebook de-emphasized it in News Feed rankings, YouTube removed it altogether, and Twitter let it stay up and spread uninhibited. A link to the video was also tweeted by President Trump’s personal attorney Rudy Giuliani, though that tweet was later deleted.
Following the announcement, and amid numerous references to the video of Speaker Pelosi, Facebook has faced criticism from policymakers and academics for not going far enough in its approach to content removal under its policy. Wired points out that this policy doesn’t cover the bulk of manipulated media on Facebook, which is generally edited using much more elementary means. This includes slowing down videos, as in the Speaker Pelosi case, or omitting words from speech, such as a blatantly misleading video of 2020 presidential candidate Joe Biden that spread online earlier in January. In response, Facebook points to its fact-checking program as a solution to videos of this nature, which alerts users that the content they’re viewing has been marked false by fact-checkers. Facebook also continues to insist that it prioritizes free expression in its approach to both ads and organic content removals, particularly regarding content from politicians and political campaigns, leading it to allow content that is prohibited on other platforms.
With the 2020 election cycle ramping up and over eighty percent of Americans spending at least some time online every day, including consuming political news on social media, the policies adopted by online platforms will continue to be tested and scrutinized for the impact they have on the electoral process. Some advocates will continue to call for more oversight to mitigate the risk of voter manipulation through misleading content, while others will continue to push for a hands-off approach by platforms in the interest of free speech. In the absence of regulatory constraints on their choices, Facebook and other online platforms will have to continue navigating this political minefield largely on their own, with the knowledge that any path they choose is rife with challenges and likely to leave at least one side of the aisle dissatisfied.