AI deepfakes might not be real. But the threat is genuine

AI-generated fake videos have quickly emerged as a top cyber threat, as cybercriminals look to take advantage of advancing technology in their criminal activities. We asked Jason Hart, Head of Proactive Insurance, CFC, what you need to know about this growing threat, and how businesses can protect themselves.

Cyber | Article | 5 min read | Wed, Oct 9, 2024

Not long ago the ability to create fake videos, images and audio was still the stuff of science fiction. But now, tools are readily available for anyone to create highly realistic fakes at speed and scale. What could possibly go wrong?

Specialized AI-generation tools are taking the threat of social engineering to the next level—and challenging cyber security measures like never before. Just as content creators use these tools to create hyper-realistic videos and images for marketing purposes, today’s cybercriminals harness them to create AI-generated fake videos—or ‘deepfakes’—to commit fraud and other crimes. We sat down with Jason Hart, Head of Proactive Insurance, CFC, to get his thoughts on why it’s time to raise the alarm, and explore concrete ways businesses can prepare.

How the threat materializes

At CFC we’ve already seen numerous cases of criminals exploiting a software tool called FaceSwap. Originally developed to help businesses create personalized marketing content, FaceSwap is now being misused for more harmful and illicit purposes.

Here’s how it works: FaceSwap lets users easily swap faces in photos and videos. You simply upload a video or image, pick the face you want to change and swap it with another. While this was designed for legitimate business purposes, criminals are now using it to create fake identities and manipulate media for scams.

In one recent incident, an attacker posed as a company executive through WhatsApp and invited the target to join a Microsoft Teams call. When the meeting started, it became clear the attacker was using deepfake technology to convincingly impersonate the executive. The goal? To trick the target into sharing sensitive information or to gain unauthorized access to company systems.

This serves as a stark reminder that deepfakes aren’t just a futuristic concept—they’re a genuine threat, and businesses need to be on high alert.

Steps to stay protected

There’s no silver bullet for combating the threat of deepfakes. It takes a constant process of education to know what to look for, how to minimize the risk and how to act when encountering something suspicious.

Start with these steps:  

  1. Always use official company communication channels for verifying suspicious messages or meeting invitations.
  2. Don’t share personal or sensitive information through unofficial channels or with unverified contacts.
  3. Don’t copy and paste links between SMS, WhatsApp or any other apps apart from official company communication channels.
  4. Reject requests that seem out of context or unusual, and follow up with the person via a trusted communication channel.
  5. Scrutinize links or attachments in unexpected messages. Hover over links to check their legitimacy before clicking.
  6. Ensure your device’s security software is up to date and stay on top of security awareness training.
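For technical teams, the link-scrutiny advice in steps 3 and 5 can even be partially automated. The sketch below is a minimal, illustrative example only: it checks whether a link’s hostname belongs to an allowlist of official company domains before anyone clicks it. The domain names are placeholders, not real CFC infrastructure, and a production filter would need far more (punycode handling, redirect following, threat-intelligence feeds).

```python
from urllib.parse import urlparse

# Hypothetical allowlist of official communication domains.
# Replace these placeholders with your organization's real domains.
OFFICIAL_DOMAINS = {"example.com", "teams.microsoft.com"}

def is_trusted_link(url: str) -> bool:
    """Return True only if the link's hostname is an allowlisted
    domain or a subdomain of one; everything else is suspect."""
    host = (urlparse(url).hostname or "").lower()
    return any(host == d or host.endswith("." + d) for d in OFFICIAL_DOMAINS)

# A genuine-looking Teams link passes; a look-alike domain does not.
print(is_trusted_link("https://teams.microsoft.com/l/meetup-join/abc"))  # True
print(is_trusted_link("https://teams.micros0ft-login.xyz/join"))         # False
```

Note that an allowlist deliberately errs on the side of rejection: a link that fails the check isn’t necessarily malicious, but it should be verified through an official channel before it’s opened.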

The truth about the rising threat of deepfakes

FaceSwap may be a paid service, but free versions are popping up in messaging apps like Telegram—and they're gaining serious traction, with one bot alone already boasting over 80,000 users. As these tools become more advanced, the threat of deepfakes being used in criminal activity is only going to increase.

Taking a proactive approach is key to staying ahead. At CFC we recommend that all businesses stay informed about these risks and consider how these technologies could be used against them. With deepfakes becoming a more common tool for cybercriminals, it’s crucial to have strong security measures in place.

For more tips on how to stay ahead of rising cyber threats, visit CFC’s website or get in touch with our cyber team.