April 19

5 Ways cybercriminals are using AI: Deepfakes


Synthetic media has many positive uses outside of entertainment. Students use it to learn anatomy and engineers use it to create complex industrial designs.  It’s widely used in marketing, advertising, training, and customer service. Synthetic media itself is not a risk.  Deepfakes are a subset of synthetic media, and this is where the bulk of the threat resides.

The term ‘deepfake’ is a blend of ‘deep learning’ and ‘fake,’ and to be clear, not all deepfakes are dangerous. Some are just play, some are just business. Deepfakes made it possible for young Luke Skywalker to appear in The Mandalorian and Anthony Bourdain to posthumously speak new words in the film about his life. The former was exciting for fans, the latter was upsetting and widely criticized. Neither of these deepfakes created a risk, but both show the potential of deepfake technology. The scary thing about deepfakes is not that Disney can recreate a young Luke Skywalker, but that a threat actor can recreate your boss. The deceptive and malicious deepfake is the threat, and attacks using this type of deepfake are increasing in audacity and scope.

Cybercrime and deepfakes

Malicious deepfakes show up in election campaigns, fake celebrity videos, news media, and even legal proceedings that rely on video or audio evidence. We’re going to focus on cybercrime, but there’s a ton of information out there about these other topics.

The first significant deepfake incident occurred in 2017, when a Reddit user called ‘deepfakes’ published adult videos that swapped the faces of adult film actors with those of mainstream celebrities. This is not considered an attack, but it is significant because it caused widespread outrage and raised concerns about privacy violations and the difficulty of detecting deepfakes. Reddit and other social platforms banned the content and the user, and a few legislative bodies responded with laws against deepfake pornography and the unauthorized use of a person’s likeness.

Video or voice phishing

The first known deepfake attack occurred in 2019, when a threat actor used AI-generated audio to impersonate the chief executive of a German parent company and direct the CEO of its UK subsidiary to transfer funds to a fake supplier. This attack used voice phishing, or vishing, to manipulate the victim.

The company CEO, hearing the familiar slight German accent and voice patterns of his boss, is said to have suspected nothing…

Several months later, a different group of threat actors convinced a bank employee to transfer $35 million to multiple accounts. The following details are taken from the court document (p2):

… on January 15, 2020, the Victim Company’s branch manager received a phone call that claimed to be from the company headquarters. The caller sounded like the Director of the company, so the branch manager believed the call was legitimate. The branch manager also received several emails that he believed were from the Director that were related to the phone call. The caller told the branch manager by phone and email that the Victim Company was about to acquire another company, and that a lawyer named Martin Zelner (Zelner) had been authorized to coordinate procedures for the acquisition. The branch manager then received several emails from Zelner regarding the acquisition, including a letter of authorization from the Director to Zelner. Because of these communications, when Zelner asked the branch manager to transfer USD 35 million to several accounts as part of the acquisition, the branch manager followed his instructions. The Emirati investigation revealed that the defendants had used “deep voice” technology to simulate the voice of the Director. In January 2020, funds were transferred from the Victim Company to several bank accounts in other countries in a complex scheme involving at least 17 known and unknown defendants.

This attack didn’t rely completely on a deepfake vishing play. There were fake emails, fake people (Zelner), fake business deals, and a bunch of real accounts created under false pretenses.

Fake video meetings

Another sophisticated deepfake attack occurred in February 2024, when threat actors targeted the Hong Kong office of a multinational company:

“…the worker had grown suspicious after he received a message that was purportedly from the company’s UK-based chief financial officer. Initially, the worker suspected it was a phishing email, as it talked of the need for a secret transaction to be carried out.

However, the worker put aside his early doubts after the video call because other people in attendance had looked and sounded just like colleagues he recognized …

…Believing everyone else on the call was real, the worker agreed to remit a total of $200 million Hong Kong dollars – about $25.6 million…

According to Hong Kong police, “everyone [he saw] was fake.”

This is another multi-step attack. Threat actors researched the company, studied publicly accessible videos, and used this reconnaissance to recreate several employees for a video conference. Fake emails and fake scenarios were created to support the scam and receive the money.

It’s possible there was much more to these attacks than we know. The threat actors may have maintained an advanced persistent threat (APT) presence in the network to gather information, or there may have been an insider threat. Whatever preparation they did before the attacks enabled them to create very effective deepfakes.

Extortion

Deepfake attacks aren’t just about tricking someone into a fraudulent business transaction. Threat actors can create deepfake scenarios that compromise individuals and companies. A video of a CEO speaking or acting in a controversial manner can lead to a drop in share prices, lost sales, and a bunch of angry internet comments that never go away. Studies have shown that most of the reputational damage occurs within 24 hours after an incident. Criminals create these damaging deepfakes and then attempt to extort payment in exchange for not releasing the content. 

Protecting yourself from deepfake attacks

There’s no single way to protect yourself from a deepfake attack. Like most cybercrime, defense comes down to education, vigilance, and multiple layers of security. Successful deepfake attacks like those described above required a chain of events, including reconnaissance and multiple types of email attacks, so they may be disrupted early by a comprehensive cybersecurity platform and ongoing security awareness training. You can also review the privacy settings on social media accounts and limit the types of information the company publishes.
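Because spoofed email played a supporting role in every attack described above, it’s worth showing what one small verification layer can look like. The following is a minimal sketch, not a product feature or a complete defense: it assumes you can save a suspicious message as a raw .eml file (the filename below is hypothetical) and it reads the Authentication-Results header that most receiving mail servers stamp on inbound mail, surfacing the SPF, DKIM, and DMARC outcomes.

```python
# Minimal sketch: summarize SPF/DKIM/DMARC results from a saved message.
# Assumes the message was exported as a raw .eml file; the path is hypothetical.
import email
from email import policy

def auth_summary(eml_path):
    """Return e.g. {'spf': 'pass', 'dkim': 'fail'} parsed from the
    Authentication-Results header(s) added by the receiving server."""
    with open(eml_path, "rb") as f:
        msg = email.message_from_binary_file(f, policy=policy.default)
    summary = {}
    for header in msg.get_all("Authentication-Results") or []:
        for clause in header.split(";"):
            clause = clause.strip()
            for mech in ("spf", "dkim", "dmarc"):
                if clause.startswith(mech + "="):
                    # e.g. "dkim=fail (signature did not verify)"
                    summary[mech] = clause.split("=", 1)[1].split()[0]
    return summary

results = auth_summary("suspicious_message.eml")
failures = [m for m, v in results.items() if v != "pass"]
print(results)
print("Treat with suspicion:", failures or "none flagged")
```

A failing result doesn’t prove an attack, and a passing one doesn’t prove safety (the attackers above also registered real accounts under false pretenses), but it’s a cheap signal to fold into a layered defense.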

Deepfakes are always improving, but they are rarely perfect. Watch for things like strange eye or facial movements and inconsistent background lighting or shadows. Verify the authenticity of any unusual communication. The Zero Trust philosophy is a strong defense against deepfakes. Never trust, always verify. Assume a hostile environment and scrutinize everything.
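To make the “strange eye movements” tell concrete, here is a minimal sketch of one well-known heuristic: early deepfakes often blinked rarely or unnaturally, so counting blinks can flag suspicious footage. Everything specific here is an assumption for illustration, including the video filename, the 0.21 eye-aspect-ratio threshold, and the separately downloaded dlib landmark model, and newer deepfakes may well pass this check.

```python
# Minimal sketch: count blinks in a video via the eye aspect ratio (EAR).
# Unnaturally low blink rates were an early deepfake tell; this is a rough
# heuristic, not a detector. File paths and threshold are assumptions.
import cv2
import dlib
from scipy.spatial import distance as dist

PREDICTOR_PATH = "shape_predictor_68_face_landmarks.dat"  # download separately
EAR_THRESHOLD = 0.21  # below this, treat the eyes as closed (tunable)

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor(PREDICTOR_PATH)

def eye_aspect_ratio(eye):
    # EAR = (|p2-p6| + |p3-p5|) / (2 * |p1-p4|) over six eye landmarks
    a = dist.euclidean(eye[1], eye[5])
    b = dist.euclidean(eye[2], eye[4])
    c = dist.euclidean(eye[0], eye[3])
    return (a + b) / (2.0 * c)

cap = cv2.VideoCapture("suspect_clip.mp4")       # hypothetical input video
fps = cap.get(cv2.CAP_PROP_FPS) or 30.0          # fall back if metadata missing
blinks, eyes_closed, frames = 0, False, 0

while True:
    ok, frame = cap.read()
    if not ok:
        break
    frames += 1
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for face in detector(gray):
        pts = predictor(gray, face)
        left = [(pts.part(i).x, pts.part(i).y) for i in range(36, 42)]
        right = [(pts.part(i).x, pts.part(i).y) for i in range(42, 48)]
        ear = (eye_aspect_ratio(left) + eye_aspect_ratio(right)) / 2.0
        if ear < EAR_THRESHOLD:
            eyes_closed = True
        elif eyes_closed:  # eyes reopened: that transition is one blink
            eyes_closed = False
            blinks += 1
cap.release()

minutes = frames / fps / 60.0
print(f"{blinks} blinks in {minutes:.1f} minutes "
      f"(people typically blink roughly 15-20 times per minute)")
```

An abnormal blink count is only one signal among many; treat it as a prompt to verify through a trusted channel, not as a verdict.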

Did you know…

According to a recent report from Barracuda and the Ponemon Institute, 50% of IT pros expect to see an increase in the number of attacks due to the use of AI. Get the details on this and a lot more in our new e-book, Securing tomorrow: A CISO’s guide to the role of AI in cybersecurity. This e-book explores security risks and exposes the vulnerabilities that cybercriminals exploit with the aid of AI to scale up their attacks and improve their success rates. Get your free copy of the e-book right now and see all the latest threats, data, analysis, and solutions for yourself.

For some interesting reading on AI imagery vs. computer-generated graphics, see this Quora thread.

