
Your CEO Isn’t Real: How to Deal With Deep Fakes

The history of deep fake technology is surprisingly long. Researchers at academic institutions have been developing deep fake tech since the early 1990s. The idea is even older, as popular science fiction—like the 1987 film The Running Man—can attest. But deep fakes are no longer relegated to the realm of sci-fi; they are, in fact, more present in our daily lives than you might realize.

Deep Fakes are a Serious Threat

It’s easy to think of deep fakes as some sort of advanced CGI used to create highly realistic animated films or to replace established actors in a film or television series, especially in cases where actors pass away unexpectedly before filming is complete. That is certainly one possible use. But as even a quick look at the deep fake Wikipedia entry will show, the reality is much more nefarious.

As much as we would like to think that art, acting and memes are the main purposes of deep fakes, it’s difficult to overlook things like blackmail and sockpuppets. These bring into question the potential impact of deep fakes in politics and across social media.

Deep fakes are so effective that they challenge the notion that “seeing (or hearing, or reading) is believing.” At Black Hat USA 2021, Matthew Canham (CEO of cybersecurity consultancy Beyond Layer 7) presented a talk on how deep fakes could be used to facilitate social engineering attacks and fraud. Some of the examples he used are scams that have already played out in the real world. Canham hilariously used the example of the “I’m not a cat” lawyer to illustrate the possible future of deep fake scams and to demonstrate the reality: we already have the technology, and this future is inevitable. So, what exactly can we expect to see going forward, and what can we do about it?

The Past and Future of Deep Fakes

We’ve already witnessed a deep fake of former U.S. President Barack Obama make a public service announcement, a deep fake Mark Zuckerberg discuss online privacy and several actors appear in … compromising pictures and videos that never actually happened. These fooled a lot of people.

As deep fake technology improves, it will become even faster and more advanced. The ultimate deep fake will be a live video backed by an AI bot that can interact with participants (read: victims) in real-time. Similar bots can already interact in chat applications and in email threads, responding to statements and inquiries in a manner consistent with the individual they are impersonating. Combine that with, say, a real-time video in a Zoom chat, and the worst outcome won’t just be a coworker accidentally leaving the cat filter on. Instead, there could be a deep fake version of your CEO or another trusted person asking for account credentials, money or other potentially damaging information.

Tomorrow’s Deep Fakes, Today

Let’s revisit the video of President Obama for a moment. It turned out to be a rendering of the former president based on a video of comedian and actor Jordan Peele. FakeApp was used to process and refine the footage to make it believable. High-profile politicians and other public figures are prime targets for this type of attack, as there is often ample footage available to train the backend AI on.

With improving technology, a realistic use for deep fakes today is similar to the cat-lawyer incident. It’s already possible to generate renderings of particularly vocal and media-friendly CEOs—someone like Elon Musk or Dan Price—and a “live” video could feature an actor playing the visual role while their voice is replaced with the one you’re familiar with.

Stay Safe out There, Kids

Deep fake technology is here to stay, and it’s only going to get harder to distinguish from actual, legitimate media. That doesn’t mean we are doomed to fall for deep fake scams. With preparation and vigilance, it will still be possible to identify and avoid even the most compelling, realistic tricks.

Today, the simplest solution is to build processes around the assumption that your organization will become the victim of a deep fake attack. Some steps you can take to reduce the severity of impact include requiring passwords for entry into video calls—an additional barrier to keep unauthorized accounts from joining meetings—and setting up secondary communication channels for verification of particularly sensitive requests, such as those related to finances or granting new user permissions. If the initial request comes through a video chat or email, for example, secondary verification can be done by phone—just be sure you don’t give any information to the requester, not even the phone number to dial, until you confirm the request is legit.
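To make that idea concrete, here is a minimal sketch of what an out-of-band verification step might look like, assuming your organization maintains its own trusted contact directory. The contact list, phone number and function name below are hypothetical placeholders, not part of any real product or workflow.

```python
# Minimal sketch of an out-of-band verification step.
# KNOWN_CONTACTS and verify_out_of_band are hypothetical names used for
# illustration only; the key point is that the callback number comes from
# your own directory, never from the request itself.

KNOWN_CONTACTS = {
    "ceo@example.com": "+1-555-0100",  # placeholder, stored in your directory
}


def verify_out_of_band(requester: str, request_summary: str) -> bool:
    """Return True only after the request is confirmed on a second channel."""
    trusted_number = KNOWN_CONTACTS.get(requester)
    if trusted_number is None:
        # Unknown requester: treat the request as unverified.
        return False

    # A human places the call using the directory number, not one supplied
    # in the email or video chat where the request arrived.
    print(f"Call {trusted_number} and confirm: {request_summary}")
    answer = input("Did the person confirm this request? (yes/no): ")
    return answer.strip().lower() == "yes"


if __name__ == "__main__":
    ok = verify_out_of_band("ceo@example.com", "wire $50,000 to a new vendor")
    print("Proceed" if ok else "Do not proceed")
```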

Technology can also help prevent deep fake attacks. In addition to requiring passwords in communication applications, using exploit prevention software can stop an attacker from working around the intended function of the software. Another consideration is authenticity verification. This is where technologies like blockchain (and other forms of sender verification like SPF, DKIM and DMARC for email) come into play. If we can verify that the video, email or other communication we received is authentic and remains unchanged from what was originally sent, then the chances of falling for—or even receiving—a deep fake scam are significantly reduced.
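As a rough illustration of those authenticity checks, the sketch below looks up the SPF and DMARC policies a domain publishes and compares a received file against a known SHA-256 digest. It assumes the dnspython package is installed; the domain, file path and digest are placeholders rather than real infrastructure.

```python
# Sketch of two authenticity checks: DNS-published sender policies (SPF/DMARC)
# and a file integrity check. Requires: pip install dnspython
import hashlib

import dns.resolver


def lookup_txt(name: str) -> list[str]:
    """Return the TXT records published at a DNS name (SPF/DMARC live here)."""
    try:
        answers = dns.resolver.resolve(name, "TXT")
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        return []
    return [b"".join(rdata.strings).decode() for rdata in answers]


def check_email_auth_records(domain: str) -> None:
    """Print the sender-verification policies a domain publishes."""
    spf = [r for r in lookup_txt(domain) if r.startswith("v=spf1")]
    dmarc = [r for r in lookup_txt(f"_dmarc.{domain}") if r.startswith("v=DMARC1")]
    print("SPF:", spf or "none published")
    print("DMARC:", dmarc or "none published")


def file_matches_digest(path: str, expected_sha256: str) -> bool:
    """Check that a received video or document is unchanged from what was sent."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest() == expected_sha256


if __name__ == "__main__":
    check_email_auth_records("example.com")  # placeholder domain
```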

As deep fake technology advances, security professionals need to continue looking for new and innovative ways to prevent these attacks before a potential victim ever has the chance to see them.

Trust, But Verify

Deep fake technology may be nearly 30 years old, but we are only starting to see its true potential. The applications are both exciting and terrifying, but that doesn’t mean we need to live our lives in fear.

If we are aware of the possible uses of this technology, both for good and for evil, we can more easily combat its misuse. Now is the time to begin building strategies that will help to circumvent any potential attacks while also keeping an eye out for existing and emerging technologies that may help prevent such attacks in the first place.

Stay observant, trust your instincts and, above all, verify absolutely everything. Verification will be the most effective means of avoiding deep fake scams as they become more prevalent in the coming years.

Topher Tebow

Topher Tebow is a cybersecurity analyst with a focus on malware tracking and analysis. Topher spent nearly a decade combating web-based malware before moving into endpoint protection. He has written technical content for several companies, covering topics from security trends and best practices to analysis of malware and vulnerabilities. In addition to being published in Infosec Island, Topher has contributed to articles in several leading publications.
