Taking Fakeness to New Depths with AI Alterations: The Dangers of Deepfake Videos

Author: Avani Desai, President at Schellman & Company, LLC
Date Published: 25 October 2021

“Deepfakes are becoming one of the biggest threats to global stability, in terms of fake news as well as serious cyber risks.” –Joseph Carson, Chief Security Scientist at Thycotic

They’re faking it … and they’re getting better by the day
Even if you haven’t heard the term before, chances are you’ve come across dozens of deepfake videos online and maybe didn’t know it. Deepfakes—derived from “deep learning” (a subset of AI) and “fake”—are media that have been forged or manipulated with the help of artificial intelligence; most often, existing images or videos are combined with or superimposed onto source material using machine learning techniques. They’ve been around for a while; back in 1997, the Video Rewrite program was the first to modify existing video footage to depict a person mouthing different words than they originally spoke. Since then, deepfake development has taken off in academic institutions, online communities, and the media and entertainment industries.
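
To make that “superimposing” idea concrete, here is a minimal, hypothetical sketch of the shared-encoder, two-decoder autoencoder design popularized by early face-swap tools. It assumes PyTorch, and random tensors stand in for real face crops; none of it comes from any actual deepfake product.

```python
# Illustrative only: the classic face-swap recipe. A shared encoder learns
# features common to both faces (pose, expression); one decoder per person
# learns to render that person's face. Random tensors stand in for images.
import torch
import torch.nn as nn

torch.manual_seed(0)
IMG = 64 * 64 * 3  # a flattened 64x64 RGB face crop

encoder = nn.Sequential(nn.Linear(IMG, 256), nn.ReLU(), nn.Linear(256, 64))
decoder_a = nn.Sequential(nn.Linear(64, 256), nn.ReLU(), nn.Linear(256, IMG))
decoder_b = nn.Sequential(nn.Linear(64, 256), nn.ReLU(), nn.Linear(256, IMG))

params = (list(encoder.parameters()) + list(decoder_a.parameters())
          + list(decoder_b.parameters()))
opt = torch.optim.Adam(params, lr=1e-3)
loss_fn = nn.MSELoss()

faces_a = torch.rand(32, IMG)  # stand-in for face crops of person A
faces_b = torch.rand(32, IMG)  # stand-in for face crops of person B

for step in range(200):
    # Each decoder learns to reconstruct its own person from the shared code.
    loss = (loss_fn(decoder_a(encoder(faces_a)), faces_a)
            + loss_fn(decoder_b(encoder(faces_b)), faces_b))
    opt.zero_grad()
    loss.backward()
    opt.step()

# The "swap": encode person A's frames, render them with person B's decoder,
# yielding B's likeness with A's pose and expression.
swapped = decoder_b(encoder(faces_a))
print(swapped.shape)  # torch.Size([32, 12288])
```

The trick is that the shared encoder never learns whose face it is looking at, only how the face is posed, so either decoder can repaint those poses in its own identity.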

The technology behind these falsified videos is only getting better. While creators once needed hundreds of images to map one face onto another, modern apps like FaceApp and Zao demonstrate how easy it is to make a very convincing deepfake video from a single image, all in a matter of seconds.

As with practically any type of technology, the nature of a tool depends on the ambition and character of its wielder. Deepfakes can be used for harmless entertainment or to mislead, deceive, and harm. They have provided some amusing content, including the YouTube video of Saturday Night Live’s Bill Hader morphing into Arnold Schwarzenegger and Al Pacino, and the many clips that paste Nicolas Cage’s face into films he never appeared in. The agency GS&P created interactive footage at The Dalí Museum in Florida to “resurrect” Salvador Dalí; its “Dalí Lives” presentation carried the tagline “Art Meets Artificial Intelligence.”

“A machine designed to create realistic fakes is a perfect weapon for purveyors of fake news who want to influence everything from stock prices to elections … AI tools are already being used to put pictures of other people’s faces on the bodies of porn stars and put words in the mouths of politicians.” –Martin Giles, San Francisco Bureau Chief of the MIT Technology Review

Delving into the dark depths of deepfakes
The other side of the coin, however, is anything but pretty. Deepfakes have drawn growing concern over their use in celebrity pornographic videos, revenge pornography, hoaxes, financial fraud, and fake news. While using Facebook’s app to share images of yourself with bunny ears might be cute, fake videos that realistically show prominent figures (Barack Obama, Mark Zuckerberg, and Nancy Pelosi, for example) doing or saying insulting or misleading things can stir up controversy, spread disinformation, and influence political opinion in societies already hanging by a thread.

The danger is twofold: creating such fabricated videos has become increasingly easy and extremely accessible, and the fakes themselves are becoming more sophisticated and harder to discern. Top deepfake artist Hao Li, whose portfolio includes Furious 7 and the iPhone X, has predicted that deepfake videos will soon be completely undetectable.

Not surprisingly, the world is having trust issues. Fraudsters used deepfake audio to impersonate the voice of a German parent company’s chief executive, fooling a subsidiary’s CEO into wiring US$243,000 to a fraudulent account. A suspicious video of Gabon’s long-absent president supposedly delivering a New Year’s address helped spark an unsuccessful military coup in an already unstable country. While subsequent forensic analysis found no manipulation in the video, the mere possibility of AI-synthesized media was enough to fuel tension, fear, and bloodshed in Gabon. Simulating someone’s image or voice can also help scammers obtain personal information and gain access to high-security areas.

Sophisticated technology, paired with dangerous intentions, poses an unprecedented cybersecurity risk by blurring the lines between what’s real and what’s fake. As Henry Ajder of Deeptrace Labs explains: “Deepfakes do pose a risk to politics in terms of fake media appearing to be real, but right now the more tangible threat is how the idea of deepfakes can be invoked to make the real appear fake. The hype and rather sensational coverage speculating on deepfakes’ political impact has overshadowed the real cases where deepfakes have had an impact.”

Apart from the dangers and damage posed by the deepfake materials themselves—financial losses, tarnished reputations, credibility concerns, and so forth—there’s also growing concern about how misinformation changes people’s perception of reality. “[Deepfakes] have been getting better at fooling the public into thinking they are real,” states Kathryn Harrison, the founder of DeepTrust Alliance. “Public deceptions weaken society’s ability to govern itself. Our objectivity could be altered by incorrect information,” leading to “significant consequences for an open and democratic society […] especially as we see these threats spilling over from media and politics into banking, insurance, and telecom.” Sam Gregory, the program director of the humanitarian nonprofit Witness, argues that undermining trust in the media has deep repercussions, particularly in politically fragile environments: “It gives another weapon to the powerful to say ‘It’s a deepfake’ about anything that people who are out of power are trying to use to show corruption.”

“The reality-warping, truly indistinguishable deep fakes aren’t here on a large scale yet, but they are coming and at the moment we are not prepared for them.” –Henry Ajder, Head of Communications and Research at Deeptrace Labs

Protection against deepfakes
Despite the financial, social, political, and ethical dangers, there’s hope on the horizon—but retaining or regaining trust and transparency in our digital world won’t come easily. Researchers and tech giants are focusing on developing tools for exposing fakes, but malicious attackers are getting savvier. Some US states have enacted legislation criminalizing fake media used to deceive, defraud, or destabilize the public, and Congress is attempting to do the same.

Ian Goodfellow, who invented generative adversarial networks (GANs), a groundbreaking innovation in machine learning, hopes we’ll make significant advances in security to catch up to the bad guys, but he emphasizes the need for societal solutions, like teaching kids to hone their critical thinking skills. He describes how debate classes could teach them “how to craft misleading claims or how to craft correct claims that are very persuasive.” Kathryn Harrison also warns that a purely academic or commercial approach won’t solve the threats posed by deepfakes. “There’s no silver bullet for overcoming misinformation,” she says. “The only successful approach will be a combination of technical and societal solutions that tackle where the technology comes from, how it is made, and how it is received and distributed by human beings.”
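
Since GANs come up so often in this discussion, a toy example may help. The sketch below is a hypothetical illustration, assuming PyTorch and using one-dimensional numbers in place of images: a generator learns to produce fakes while a discriminator learns to flag them, and each round of the contest makes both better.

```python
# Illustrative only: the adversarial training loop behind a GAN, on toy data.
import torch
import torch.nn as nn

torch.manual_seed(0)

def real_batch(n=64):
    # "Real" data the generator must learn to mimic: mean 4.0, std 1.5.
    return torch.randn(n, 1) * 1.5 + 4.0

G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    # Discriminator step: label real samples 1 and generated samples 0.
    real = real_batch()
    fake = G(torch.randn(64, 8)).detach()  # detach: don't update G here
    loss_d = bce(D(real), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
    opt_d.zero_grad()
    loss_d.backward()
    opt_d.step()

    # Generator step: try to make the discriminator output 1 on fakes.
    fake = G(torch.randn(64, 8))
    loss_g = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad()
    loss_g.backward()
    opt_g.step()

# Generated samples should drift toward the real distribution (mean near 4.0).
print(G(torch.randn(5, 8)).detach().squeeze())
```

Scaled up to millions of parameters and trained on images or audio, this same arms race is what makes each generation of deepfakes harder to detect than the last.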

Experts agree that we all need to practice greater media literacy. Given that deepfakes, like photoshopped images, are here to stay, we need to be aware of the danger in order to watch for it. Disinformation expert Aviv Ovadya, head of the Thoughtful Technology Project, cautions against fostering blanket skepticism, pointing out that we need a “social, educational, inoculating infrastructure” for neutralizing the impact of deepfakes.

Editor’s note: Learn more about artificial intelligence fundamentals as part of ISACA’s Certified in Emerging Technology (CET) credential.