Logos of various digital media companies.
Photo by Ibrahim.ID (CC BY-SA)

Digital Media Companies Respond to Political Disinformation Amidst Lacuna of American Law

The 2016 U.S. Presidential Election brought critical attention to the wave of political disinformation campaigns taking place on digital media platforms. Disinformation, the use of misleading, false, or irrational arguments to manipulate public opinion for political purposes, can, left unchecked, result in an electorate disconnected from fact and skeptical of the free press. Ahead of the 2020 U.S. Presidential Election, the challenge of differentiating between legitimate political discourse and disinformation on digital media platforms has largely fallen to the digital media companies themselves. Within the past two months, Facebook, YouTube, and Twitter have each released new policies governing false or misleading media shared on their websites.

Digital media companies’ content moderation policies operate within a lacuna in American law. On the federal level, Congress has considered, but has not passed, a number of bills that would combat disinformation campaigns: the Honest Ads Act, the Digital Citizen and Media Literacy Act, and the Malicious Deep Fake Prohibition Act of 2018. Although Congress passed the Countering Foreign Propaganda and Disinformation Act (CFPDA), which was signed into law in 2016, the federal government has done little to meaningfully counter disinformation campaigns. The CFPDA established the Global Engagement Center (GEC), housed within the State Department, and charged it with coordinating efforts to combat foreign state-sponsored propaganda and disinformation. The GEC, however, has proven ineffective: a former GEC official criticized the organization for its “administrative incompetence,” and reports indicated that the State Department has withheld money designated for the program. At the state level, California managed to pass comprehensive legislation on the issue, but overall the states’ efforts have been patchwork.

The lack of concrete legislation on political disinformation has left digital media companies’ countermeasures with critical flaws. Critics have asserted that the content moderation policies implemented by the largest companies, sometimes referred to as the technology industry’s “content cartel,” lack both transparency and accountability. First, a company’s policy may be developed, at least in part, in conjunction with other companies’ policies, making it difficult to identify the origin of a particular policy standard. Compounding this problem, in the United States, where companies face no governmental pressure to counter political disinformation according to clearly articulated standards, they fail to apply their policies consistently or to justify their content moderation (in)actions. In contrast, in jurisdictions that have applied governmental pressure regarding political disinformation, companies self-regulate more effectively and provide greater transparency into the efficacy of their policies. For example, Britain recently announced that its media regulator, Ofcom, would begin to scrutinize more intensely the content moderation practices of digital media companies, including Facebook and YouTube, to guarantee the “protections, accountability and transparency people deserve.”

Digital media companies’ content moderation policies thus fall into a gap in American law, and that lack of governmental oversight raises two important concerns: companies may fail both to clearly articulate their policy standards and to consistently apply their policies. These concerns were highlighted in early February, when President Donald Trump released onto his social media pages a controversially edited video of House Speaker Nancy Pelosi tearing up his State of the Union address. Although the video does not feature the advanced deception techniques found in deepfakes, it sandwiches footage of Speaker Pelosi tearing up the speech between unrelated moments in Mr. Trump’s address. The result places the event in a context that some have deemed doctored or deceptive.

Digital media companies largely refused to act on calls to remove the video because it fell within a gray area of political disinformation. For instance, Twitter allowed the video to remain on its website because its policy standards require removal only of media deemed to be “synthetic or manipulated,” likely to impact public safety or cause harm, and shared in a deceptive manner. Despite the recent updates to digital media companies’ content moderation policies, political disinformation remains an urgent issue ahead of the upcoming U.S. elections. Governmental action requiring digital media companies to address political disinformation would provide for more consistent, transparent, and accountable content moderation.

Zev Beeber

GLTR Staff Member; Georgetown Law, J.D. expected 2021; University of Virginia, B.A. 2015; © 2020, Zev Beeber.