From the Lab to the Streets: The Provenance of Deepfakes
Deepfakes have ushered in a new era of uncertainty in our global information system, but the technology is not new. The altered videos we see today are a logical extension of older AI research. Indeed, it wasn’t long ago that we read about AI generating new paintings in the style of old masters such as Rembrandt. At the time, more concern was expressed about the future of human creativity than about the effect on the human psyche and on trust online. Would we still need artists? We collectively shrugged, because we were asking the wrong question.
If that’s the wrong question, what’s the right one? The problem with deepfakes is that their popularization collides with the rise of fake news. Fake news is itself nothing new. There have always been conspiracy theorists who are notoriously skeptical of “mainstream” media but remain completely unskeptical of their own sources, whether they claim that Tibetans are spying on us through a system of underground tunnels or that vaccinations cause autism.
Going beyond misinformation, there is an intersection of three additional factors which, taken together, greatly magnify the threat of deepfakes as a means of fraud: the democratization of AI, the decreasing cost of computing power, and the phenomenon of virality.
Deepfakes jumped out of the lab and into the streets. You don’t need a Ph.D. to generate fake media, nor do you need the resources of a nation state to acquire enough computing power. Some easily available open source software tools and some time on an AWS cluster are all you need. In some cases, it only takes an app: in China, a popular iPhone app lets you put your face into movie clips. Once you’ve created a fake, you can use social media to propagate it. YouTube’s and Facebook’s algorithms for optimizing “engagement” can make any content viral in seconds.
It is important to note that not all fakes are created equal. But, preferring speed over accuracy, we’ve rallied around the single term “deepfake” to cover two types of altered content with very different implications.
Deepfakes are highly realistic fake videos, audio recordings, or photos that have typically been generated by deep learning AI. The algorithms behind this AI are fed large amounts of data, such as photos of a person’s face or audio samples of their voice (think Alexa, Facebook, your Ring, or the facial recognition software on your phone). From this data, the algorithm learns to generate synthetic audio of that person saying things they have never said, or highly realistic video of them doing things they have never done, such as the now infamous deepfake of Obama.
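To make that mechanism concrete, here is a minimal sketch of the shared-encoder, dual-decoder autoencoder behind the original face-swap deepfakes, written in PyTorch. Everything in it (layer sizes, loss, optimizer, the `train_step` and `swap_a_to_b` helpers) is an illustrative assumption, not the code of any particular tool:

```python
# Minimal sketch of the classic face-swap deepfake architecture:
# one shared encoder, one decoder per identity. Illustrative only.
import torch
import torch.nn as nn

IMG = 64 * 64 * 3  # flattened 64x64 RGB face crops

def make_decoder():
    return nn.Sequential(
        nn.Linear(256, 1024), nn.ReLU(),
        nn.Linear(1024, IMG), nn.Sigmoid(),
    )

encoder = nn.Sequential(nn.Linear(IMG, 1024), nn.ReLU(), nn.Linear(1024, 256))
decoder_a = make_decoder()  # learns to reconstruct person A's face
decoder_b = make_decoder()  # learns to reconstruct person B's face

loss_fn = nn.MSELoss()
opt = torch.optim.Adam(
    list(encoder.parameters())
    + list(decoder_a.parameters())
    + list(decoder_b.parameters()),
    lr=1e-4,
)

def train_step(faces_a, faces_b):
    """One step: each decoder rebuilds its own person's face
    from the shared latent code."""
    opt.zero_grad()
    loss = loss_fn(decoder_a(encoder(faces_a)), faces_a) \
         + loss_fn(decoder_b(encoder(faces_b)), faces_b)
    loss.backward()
    opt.step()
    return loss.item()

def swap_a_to_b(face_a):
    """The 'swap': encode a frame of person A, decode with B's decoder.
    After enough training, the output shows B's face wearing A's pose
    and expression."""
    with torch.no_grad():
        return decoder_b(encoder(face_a))
```

Because the encoder is shared, it learns a representation of pose and expression common to both people, which is exactly what lets one person’s face be rendered with another’s mannerisms.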
Shallowfakes are less sophisticated and more widespread. They are created using more traditional editing techniques that alter the message through misleading content or misleading context. The videos created with Zao, the Chinese iPhone app mentioned earlier, and the video of Nancy Pelosi slurring her speech are both examples of shallowfakes.
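A sketch shows just how low the bar is. The Pelosi-style edit amounts to slowing a clip with stock video filters; the following assumes ffmpeg is installed, and the file names are placeholders:

```python
# Minimal sketch of a shallowfake: slow a clip to 75% speed using
# standard ffmpeg filters. No AI involved, just ordinary editing.
import subprocess

subprocess.run([
    "ffmpeg", "-i", "speech.mp4",
    # setpts stretches the video timestamps by 1/0.75;
    # atempo slows the audio by the same factor without shifting
    # its pitch, so the slowdown is harder to notice.
    "-filter:v", "setpts=PTS/0.75",
    "-filter:a", "atempo=0.75",
    "slowed_speech.mp4",
], check=True)
```

That a convincing piece of disinformation needs nothing more than a one-line filter chain is precisely why shallowfakes are so much more widespread.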
Terminology aside, understanding the application and threat of deep and shallow fakes is critical, as they will impact us in different ways.
Deepfakes are likely more dangerous in computer security, where they can be used to circumvent authentication or to mount high-quality phishing attacks, and thus pose a greater threat to individuals and corporations. Symantec has reported seeing such attacks in the field, and an AI-generated voice mimicking a CEO’s was recently used in a major financial fraud.
Shallowfakes are better at shaping public opinion, or serving as agents of information warfare, and are likely to persist in the political and social spheres.
Each paints an equally scary picture. But right on the heels of the deepfake headlines disturbing your sleep come those reassuring you that advanced detection methods are underway. The news sources and companies you trust will halt this train of misinformation! They will protect you and honor their obligation to represent what is real. You’ll have restored faith in what you see, hear, and believe.
But what if the same algorithm that’s being used for detection is also being used to train the deepfake AI, making it aware of its flaws so it can improve itself and become even better at impersonating reality? You likely haven’t seen quite as many headlines about that yet, but therein lies the real threat of deepfakes, and the topic of our next piece: Thanks for the Tip: The Flaws of Deepfake Detection.
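That feedback loop is, at bottom, how generative adversarial networks are trained: the detector’s own verdicts supply the training signal that improves the generator. As a rough illustration, here is a minimal PyTorch sketch; the networks, sizes, and the `adversarial_step` helper are illustrative assumptions, not any real detection system:

```python
# Minimal sketch of the adversarial loop: the detector that flags fakes
# is the very thing that teaches the generator to fix its flaws.
import torch
import torch.nn as nn

LATENT, IMG = 100, 64 * 64 * 3

generator = nn.Sequential(
    nn.Linear(LATENT, 1024), nn.ReLU(), nn.Linear(1024, IMG), nn.Tanh())
detector = nn.Sequential(
    nn.Linear(IMG, 1024), nn.LeakyReLU(0.2), nn.Linear(1024, 1))

bce = nn.BCEWithLogitsLoss()
opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(detector.parameters(), lr=2e-4)

def adversarial_step(real_images):
    n = real_images.size(0)
    fakes = generator(torch.randn(n, LATENT))

    # 1. The detector learns to tell real from fake.
    opt_d.zero_grad()
    d_loss = bce(detector(real_images), torch.ones(n, 1)) \
           + bce(detector(fakes.detach()), torch.zeros(n, 1))
    d_loss.backward()
    opt_d.step()

    # 2. The generator is rewarded for fooling the detector: the
    #    detector's gradients tell it exactly which flaws gave the
    #    fake away, and it updates to erase them.
    opt_g.zero_grad()
    g_loss = bce(detector(fakes), torch.ones(n, 1))
    g_loss.backward()
    opt_g.step()
```

Every improvement in the detector is immediately converted, through its gradients, into an improvement in the fakes, which is why publishing a detector can amount to publishing a tutorial for defeating it.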