Ben Sobel
Cornell Tech
A Real Account of Deep Fakes
Abstract
Laws regulating pornographic deepfakes are often characterized as protecting privacy or preventing defamation. But privacy and defamation laws paradigmatically regulate true or false assertions of fact about persons. Anti-deepfakes laws do not: the typical law bans even media that no reasonable observer could understand as factual. Instead of regulating statements of fact, anti-deepfakes laws ban outrageous depictions per se. This is a significant and unrecognized departure from the established dignitary torts, and it is important to acknowledge for two reasons. First, doing so helps interpret, and improve upon, statutes that misconceive the harm and the subject matter they regulate. Second, it helps mount the best defense of these laws. Because anti-deepfakes laws ban outrageous imagery irrespective of any factual assertions it makes, the rationales for defamation and almost all privacy doctrines do not justify the constitutionality of these statutes. Instead, the proper comparators are laws that forbid expression because it is offensive.
This Article surveys every enacted anti-deepfakes statute to distill the typical law. It then uses semiotic theory to explain how deepfakes differ from the media they mimic and why those differences matter legally. Photographs and video recordings record events. Deepfakes merely depict them. Grounds to regulate records are not necessarily grounds to regulate depictions. Many laws—covering everything from trademark dilution to flag burning to “morphed” child sexual abuse material (CSAM)—have banned offensive depictions per se. Several are in effect today. But when such bans are challenged, courts mischaracterize the imagery to sidestep First Amendment scrutiny, pretending that fictional depictions are factual records.
Past and present American laws have forbidden outrageous depictions per se, but courts often pretend that they have not. Anti-deepfakes laws will force courts to confront whether a statute may ban offensive expression as such, even when jurists must admit that it does. Anti-deepfakes statutes rise and fall not with laws that regulate statements of fact, but with laws that forbid expression because it offends.
About
Ben Sobel is a lawyer, a scholar of information law, a postdoctoral fellow at the Digital Life Initiative at Cornell Tech in New York City, and an affiliate scholar at the NYU Information Law Institute. His work examines how law and technology construct legal significance out of raw information. Ben is a graduate of Harvard College and Harvard Law School. He previously served as a law clerk to Chief Judge David Barron and Judge Michael Boudin of the United States Court of Appeals for the First Circuit, and to Judge Pierre Leval of the United States Court of Appeals for the Second Circuit. He has also served as a Lecturer on Law at Harvard Law School.