Comparing “Deepfake” Regulatory Regimes in the United States, the European Union, and China
With increasing regulatory attention on digital technologies, it has become clear that the United States, the European Union, and China are concurrently developing rules and frameworks on how best to govern emerging technologies and set standards of acceptable technological use. Despite the seemingly borderless nature of technology, all three jurisdictions have proposed and implemented a flurry of laws and regulations to manage digital technologies, as well as to address unease over the power held by technology companies. In recent months, the specific technology of “deepfakes” has risen to the forefront of regulatory scrutiny.1
“Deepfakes” can be defined as the use of AI techniques (such as machine and deep learning and, more specifically, Generative Adversarial Networks) to generate synthetic but exceedingly realistic video and audio media, especially of human facial and vocal likeness.2 Deepfakes are used infamously in pornography, particularly nonconsensual and revenge porn, but also increasingly in political situations as well as for entertainment and educational purposes.3 The technology is increasingly available for experimentation by anyone with a modicum of technical ability. Although truly high-quality deepfakes tend to require a large repository of images and/or videos, even low-quality deepfakes have the capacity to be very harmful. While there are notable positive uses,4 the negative impacts are particularly significant and highly visible, including emotional harm, identity theft, intimidation and harassment, reputational damage, political manipulation, and the undermining of trust.5 There are also potential gray areas: for example, the use of deepfakes of celebrities in advertisements and training videos, with or without the celebrities’ permission, has drawn scrutiny for ethical reasons, yet it has also drawn interest from media and marketing firms for its ability to increase production at lower costs.6
Fears and concerns about deepfakes are driven by their potential for substantial harm, especially when they are used to manipulate people into believing an individual said or did something that they did not. The goal can be to embarrass and silence critics; for example, the Indian investigative journalist Rana Ayyub was the subject of deepfake porn videos in an effort to discredit her work.7 The technology is also a problem in political and electoral contexts. Recent months have seen the distribution of an altered video of Nancy Pelosi in which she appears drunk and incompetent,8 and a deepfake video in which Ukrainian president Volodymyr Zelensky appears to ask Ukrainian troops to surrender.9 While these were quickly discredited, the use of such technology in a political or conflict situation can be deeply unsettling.10 More importantly, such uses of deepfake technology create what Danielle Citron and Robert Chesney call the Liar’s Dividend, where the proliferation of deepfakes and other false information makes it “easier for liars to deny the truth.”11 The resulting erosion of trust in society and political institutions is alarming.
Fellow, Georgetown Law Center on National Security; Master of Law and Technology, Georgetown University Law Center; M.A., International Economics and International Relations, Johns Hopkins University – Paul H. Nitze School of Advanced International Studies. She also has over a decade of professional experience in the private sector advising business and technology leaders on aspects of technology strategy.