Deepfake videos, made using machine learning technology, have the potential to increase the efficacy of disinformation campaigns. Deepfakes can be broadly classified into two categories. The first category, which includes Deepfake pornography, has a harmful effect regardless of the number of individuals who view the videos. The second category, and the one addressed by this Note, requires broad dissemination to have a harmful impact. Three specific Deepfake threats are analyzed: (1) election interference; (2) economic interference; and (3) threats to public safety. Existing legal mechanisms are insufficient to address the threat because Section 230 of the Communications Decency Act shields social media companies from liability for Deepfakes disseminated on their platforms. Even proposed amendments to Section 230 do not adequately address the Deepfake threat. This Note argues that Deepfake videos are not protected speech under the First Amendment and that social media companies can be considered publishers of Deepfakes if the videos are not quickly removed. Section 230 should thus be amended to create a narrow exception for Deepfake videos. Regulatory agencies should then enact new rules and amend existing ones to hold social media companies liable for the circulation of Deepfakes. The threat of liability will deter social media companies from allowing the videos to spread unchecked on their platforms and incentivize them to develop new technology for prompt detection and removal.
I would like to thank Professor Jason Mazzone for advising me as I wrote this Note. He challenged me to find creative solutions to the problems I encountered in my early drafts, and his insights and feedback were invaluable.