The longer run may come as early as later this year, in time for the presidential election.

In August 2019, a team of Israeli researchers announced a new technique for making deepfakes that creates realistic videos by substituting the face of one individual for another who is really speaking. Unlike previous methods, this one works on any two people without extensive, iterated focus on their faces, cutting hours or even days from previous deepfake processes without the need for expensive hardware. Because the Israeli researchers have released their model publicly, a move they justify as essential for defense against it, the proliferation of this cheap and easy deepfake technology appears inevitable.3

To illustrate the challenge posed by this development, I note the warning offered by the unforgettable boudoir scene in the Marx Brothers comedy classic “Duck Soup”:

Teasdale (the redoubtable Margaret Dumont): Your Excellency! I thought you’d left.

Chicolini (Chico Marx disguised as Freedonia’s president): Oh, no, I no leave.

Teasdale: But I saw you with my own eyes!

Chicolini: Well, who you gonna believe? Me or your own eyes?

As the 2020 election looms, Chicolini has posed a question with which candidates and the American people will be forced to grapple. If AI is reaching the point where it will be virtually impossible to detect audio and video representations of people saying things they never said (and even doing things they never did), seeing will no longer be believing, and we will have to decide for ourselves, without reliable evidence, whom or what to believe. Worse, candidates will be able to dismiss accurate but embarrassing representations of what they say as fakes, an evasion that will be hard to disprove.

In 2008, Barack Obama was recorded at a small gathering saying that residents of hard-hit areas often responded by clinging to guns and religion. In 2012, Mitt Romney was recorded telling a group of funders that 47% of the population was happy to depend on the government for the basic necessities of life. And in 2016, Hillary Clinton dismissed many of Donald Trump’s supporters as a “basket of deplorables.” The accuracy of these recordings was undisputed. In 2020, however, campaign operatives will have technological grounds for challenging the authenticity of such revelations, and competing testimony from attendees at private events could throw such disputes into confusion. Says Nick Dufour, one of Google’s leading research engineers, deepfakes “have allowed people to claim that video evidence that would otherwise be very compelling is a fake.”4

Even if reliable modes of detecting deepfakes exist in the fall of 2020, they will operate more slowly than the generation of these fakes, allowing false representations to dominate the media landscape for days or even weeks. “A lie can go halfway around the world before the truth can get its shoes on,” warns David Doermann, the director of the Artificial Intelligence Institute at the University of Buffalo. And if defensive methods yield results short of certainty, as many will, technology companies will be hesitant to label the likely misrepresentations as fakes.

The capacity to generate deepfakes is proceeding much faster than the ability to detect them.5 In AI circles, reports The Washington Post’s Drew Harwell, identifying fake media has long received less attention, funding, and institutional support than creating it. “Why sniff out other people’s fantasy creations when you can design your own?” asks Hany Farid, a computer science professor and digital forensics expert at the University of California at Berkeley. “We are outgunned,” Farid says. “The number of people working on the video-synthesis side, as opposed to the detector side, is 100 to 1.”6 As a result, the technology is improving at breakneck speed. “In January 2019, deep fakes were buggy and flickery,” Farid told The Financial Times. “Nine months later, I’ve never seen anything like how fast they’re going. This is the tip of the iceberg.”7 As Nasir Memon, a professor of computer science and engineering at New York University, puts it: “As a consequence of this, even truth will not be believed. The man in front of the tank at Tiananmen Square moved the world. Nixon on the phone cost him his presidency. Images of horror from concentration camps finally moved us into action.”