June, 2020

Deepfakes Are Increasingly
Common, and Sophisticated

Deepfakes are made with readily available programs and can be produced by governments, groups, or individuals. They were made possible by the 2014 invention of generative adversarial networks (GANs) by Ian Goodfellow, who was then completing his PhD studies at the University of Montreal. GANs provide a deep learning method by which a neural network can not merely classify existing content, but also generate new content: a "generator" network learns to produce fakes while a "discriminator" network learns to detect them, and the two improve by competing against each other.
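The adversarial idea can be illustrated with a toy example. The sketch below is illustrative only — the 1-D Gaussian target, the one-parameter-pair generator, the logistic-regression discriminator, and all hyperparameters are assumptions for demonstration, not anything from real deepfake software. It shows the alternating "forger vs. detective" updates at the heart of a GAN:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "real" data: samples from a 1-D Gaussian the generator must imitate.
REAL_MU, REAL_SIGMA = 4.0, 0.5

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Generator: x = a*z + b, starting at the standard normal (a=1, b=0).
a, b = 1.0, 0.0
# Discriminator: D(x) = sigmoid(w*x + c), a logistic "real vs. fake" classifier.
w, c = 0.0, 0.0

lr, batch, steps = 0.02, 64, 3000
for _ in range(steps):
    real = rng.normal(REAL_MU, REAL_SIGMA, batch)
    z = rng.normal(0.0, 1.0, batch)
    fake = a * z + b

    # Discriminator step: push D(real) toward 1 and D(fake) toward 0
    # (gradients of the binary cross-entropy loss, computed by hand).
    d_real, d_fake = sigmoid(w * real + c), sigmoid(w * fake + c)
    grad_w = np.mean((d_real - 1.0) * real) + np.mean(d_fake * fake)
    grad_c = np.mean(d_real - 1.0) + np.mean(d_fake)
    w -= lr * grad_w
    c -= lr * grad_c

    # Generator step (non-saturating loss): push D(fake) toward 1,
    # i.e. fool the freshly updated discriminator.
    d_fake = sigmoid(w * fake + c)
    grad_a = np.mean((d_fake - 1.0) * w * z)
    grad_b = np.mean((d_fake - 1.0) * w)
    a -= lr * grad_a
    b -= lr * grad_b

# After training, generated samples should cluster near the real mean.
samples = a * rng.normal(0.0, 1.0, 1000) + b
print(round(float(np.mean(samples)), 2))  # should land near REAL_MU
```

Real deepfake systems apply the same adversarial loop to deep convolutional networks over images and video rather than scalars, which is why the output can be photorealistic.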

The technology is readily available globally for free download.

Deepfake creations started appearing on the Internet in 2017, and their number has been increasing rapidly; one count found 7,964 deepfake videos online in early 2019, and 14,678 nine months later. Almost all of those (96%) were pornography-related, involving celebrities or personal contacts of the videos' creators.

However, political videos are becoming more common and, with the advancing sophistication of the fakes, distrust of real videos is growing as well. For example, in late 2018 in Gabon, rumors circulated that the president, Ali Bongo, was either gravely ill or dead, as he had not been seen publicly for some time. A video was released on New Year's Day 2019 to dispel the public's fears, but it was challenged as fake, unrest arose, and within days there was an attempted, though unsuccessful, military coup. President Bongo remained in power, but experts never determined whether the New Year's Day video was real or fake.

U.S. Senator Marco Rubio has warned of the potential danger of deepfakes, saying that to threaten the United States now one does not need large weapons, but only “the ability to produce a very realistic fake video that could undermine our elections, that could throw our country into tremendous crisis internally .…” Some have cautioned that “Russian disinformation in 2020 could take the form of a ‘fake video of a white police officer shouting racial slurs or a Black Lives Matter activist calling for violence.’” In 2019, during a House Intelligence Committee hearing investigating the threat from AI, Chair Adam Schiff warned that “now is the time for social media companies to put in place policies to protect users from misinformation, not in 2021 after viral deepfakes have polluted the 2020 elections.”

Hao Li, a professor at USC, has said, "People are already using the fact that deepfakes exist to discredit genuine video evidence .… Even though there's footage of you doing or saying something, you can say it was a deepfake and it's very hard to prove otherwise." Aviv Ovadya, a researcher, observed in a recent article, "It's too much effort to figure out what's real and what's not, so you're more willing to just go with whatever your previous affiliations are." Ovadya calls that "reality apathy." Subbarao Kambhampati, a professor of computer science at Arizona State University, has predicted, "In the longer run, I think it will be impossible to distinguish between the real pictures and the fake pictures."

California enacted a law criminalizing the distribution of deepfakes within 60 days of an election, but questions remain whether such a law is constitutional or instead infringes rights protected by the First Amendment. Some experts believe the best protections against deepfakes will come from the large media platforms, such as Google, Facebook, and Twitter. One way to encourage greater compliance from those companies might be to amend Section 230 of the Communications Decency Act, which otherwise provides the media companies civil immunity for Internet content posted by third parties.

Sources: William A. Galston, "Is seeing still believing? The deepfake challenge to truth in politics," brookings.edu, January 8, 2020;
Rob Toews, "Deepfakes Are Going To Wreak Havoc On Society. We Are Not Prepared," forbes.com, May 26, 2020.
Related: Criminal Defense Newsletter, Vol. 42, Issue 11, August 2019, p. 8.

by Neil Leithauser
Associate Editor