A Horrifying Online Trend Is Getting Out of Control, and Here's Everything You Need to Know About It
ILLUSTRATOR Roland Mae Tanglao
Imagine this: You click on a video clip and see your girlfriend in a porn video. You sit there, slack-jawed, knowing this never really happened. The woman in the video isn't her, and yet there's no denying what you see: That's her face digitally superimposed on someone else's body. The brows furrow just like hers. The voice has the same timbre. Every tiny nuance that makes your girlfriend unique has been virtually replicated to create this one malicious video aimed at smearing her reputation.
Welcome to the terrifying world of advanced AI technology. Unfortunately, your girlfriend is the victim of a deepfake—a video altered or created to make it seem like a person is saying or doing things they never said or did. In a nutshell, deepfakes are the equivalent of fake photos created with Photoshop, but in video format.
The technology first gained popularity in pornographic circles, where tech-savvy hobbyists edit celebrities' faces into porn videos with frightening accuracy. Celebrities like Emma Watson, Gal Gadot, Taylor Swift, Katy Perry, and Scarlett Johansson are on the extensive list of those victimized by the phenomenon, but you don't have to be a celebrity to be targeted.
Today, the technology is being used for more than just porn, and for a number of nefarious purposes. From putting words into people's mouths to fabricating phony evidence in court, there seems to be no stopping the phenomenon. Will the technology blow up in the Philippines? It's only a matter of when.
What is deepfake and how is it made?
The term deepfake is derived from the “deep learning” technology used to create it. Once the bread and butter of experienced special effects studios, today anyone with a computer can download deepfake software to create their own videos.
To create a convincing fake, deepfake software uses generative adversarial networks (GANs), which pit two machine learning models against each other. One model creates video forgeries while the other tries to detect them. The first model keeps generating fakes until the second can no longer tell them apart from real footage. The final result is a seamless, utterly convincing video that's both uncanny and terrifying.
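That adversarial tug-of-war can be sketched in miniature. The code below is purely illustrative—the `Generator` and `Discriminator` classes, the one-dimensional "data," and all the numbers are invented for the example—but it shows the core dynamic: a forger keeps adjusting its output until its fakes pass the detector's test.

```python
import random

random.seed(42)  # make the toy run repeatable

# Toy "real data": numbers clustered around 1.0.
def real_sample():
    return random.gauss(1.0, 0.1)

# The forger: starts far from the real distribution.
class Generator:
    def __init__(self):
        self.mean = 0.0
    def sample(self):
        return random.gauss(self.mean, 0.1)
    def improve(self, direction):
        # Nudge output in whichever direction fooled the detector less.
        self.mean += 0.05 * direction

# The detector: flags a sample as "real" if it's close to
# what it believes real data looks like.
class Discriminator:
    def __init__(self):
        self.estimate = 1.0  # its learned picture of real data
    def is_real(self, x):
        return abs(x - self.estimate) < 0.3

# Adversarial loop: the generator keeps producing fakes until
# the discriminator can no longer reliably catch them.
gen, disc = Generator(), Discriminator()
for step in range(200):
    fake = gen.sample()
    if not disc.is_real(fake):
        # Fake was caught: push the generator toward the real data.
        gen.improve(1.0 if fake < disc.estimate else -1.0)

# By the end, the generator's output has drifted toward the
# real data's mean, and most of its fakes pass inspection.
print(round(gen.mean, 2))
```

In a real GAN both sides are deep neural networks and both learn simultaneously—the discriminator also updates itself on each batch of real and fake examples—but the feedback loop is the same shape as this toy.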
But how does the software know what you look and sound like? With just one image, a trained GAN can create a passable video of a person without the need for 3D scanning. Just imagine what it can do with a larger set of training data (i.e., your Facebook photos, YouTube videos, and everywhere else you've left your mark on social media). A small sample of your voice is all it takes to manufacture conversations that never happened at all.
The video below of comedian Bill Hader shows the frightening potential of deepfake technology. Watch as his face seamlessly morphs into the uncanny likenesses of Tom Cruise and Seth Rogen. How will we ever trust again?
Why do deepfakes have such disastrous potential?
With today's advancements, deepfakes have become ridiculously easy to make. Chinese app Zao, which lets you superimpose your face onto movie characters, has made the technology accessible to the average person. The app, owned by Nasdaq-listed Momo Inc., has since triggered a privacy row. DeepFaceLab, a more sophisticated tool that uses machine learning to swap faces in videos, is just as easy to download online.
Its creative and entertainment potential is one thing, but there's no denying the danger it poses to democracy. Criticism of deepfakes mostly centers on politics, and rightfully so. A well-timed deepfake could be deployed as propaganda to sway voting behavior, injure a political opponent, incite violence in a country amid civil unrest, or widen the schism in an already divided society. Several deepfakes misrepresenting politicians have already circulated on YouTube and in forums (e.g., the face of Argentine President Mauricio Macri was replaced with Adolf Hitler's, and Angela Merkel's with Donald Trump's).
One can only imagine the disastrous consequences of the technology blowing up in the Philippines. Bloomberg Tech reporter Jeremy Kahn fittingly dubbed deepfakes "fake news on steroids," and in a society where netizens are easily swayed by fake online content and fact-checking is of secondary importance, deepfakes can be weaponized to further divide the Filipino electorate.
How do you outsmart deepfakes?
Here's the thing: It's becoming increasingly difficult to crack down on deepfakes. One solution is to demand that online platforms develop techniques to stop their spread, but this invites an array of constitutional problems if the campaign backfires and threatens free speech.
Currently, U.S. government institutions like DARPA and several American universities are racing to research deepfake technology and how to combat it. By feeding computer algorithms both real and deepfake videos, they hope to build software that can weed out fabricated ones. But there's a glaring problem: We're pitting technology against technology in an arms race that may never end.
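The train-on-labeled-examples idea can be sketched in a toy form. Everything here is invented for illustration—real detectors use deep networks over raw video frames, not a single hand-made number—but it shows the principle: each "video" is reduced to a made-up artifact score (fakes tend to leave blending artifacts, so we pretend fakes score high), and training simply picks the cutoff that best separates the labeled real and fake examples.

```python
import random

random.seed(0)  # make the toy run repeatable

# Hypothetical training data: real videos score low on an invented
# "blending artifact" measure, deepfakes score high.
real_videos = [random.gauss(0.2, 0.1) for _ in range(200)]
fake_videos = [random.gauss(0.8, 0.1) for _ in range(200)]

# "Training": try every cutoff from 0.00 to 1.00 and keep the one
# that classifies the most labeled examples correctly.
def train_threshold(reals, fakes):
    def accuracy(t):
        correct = sum(x < t for x in reals) + sum(x >= t for x in fakes)
        return correct / (len(reals) + len(fakes))
    return max((i / 100 for i in range(101)), key=accuracy)

threshold = train_threshold(real_videos, fake_videos)

# The learned detector: anything at or above the cutoff is flagged.
def looks_fake(artifact_score):
    return artifact_score >= threshold

print(looks_fake(0.9), looks_fake(0.1))
```

The arms-race problem the article describes shows up even in this toy: the moment forgers learn to suppress the artifact the detector keys on, the learned threshold separates nothing, and the detector has to be retrained on a new telltale sign.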
While a solution is nowhere in sight, there’s one thing we can all do to fight the spread of deepfakes: Never take anything at face value. Be ruthless in fact-checking and pay special attention to sources. Seeing isn’t always believing.