The Rise of the Deepfake

The emergence of deepfakes is significantly accelerating cybercrime, scams, fraud, and phishing attacks. As the technology becomes more accessible and sophisticated, the potential for misuse grows, posing serious threats to individuals and organizations alike.

Staying informed about the latest developments in deepfake detection, being vigilant for signs of manipulation, and utilizing available tools are crucial steps in safeguarding against this evolving menace.

The Rise of Deepfake-Driven Cybercrime

The rapid advancement of deepfake technology is dramatically accelerating cybercrime, fraud, scams, and phishing attacks. According to the Entrust 2025 Fraud Report, for example, deepfake fraud material surged by more than 3,000% in 2023 alone.

Once a novelty, deepfakes are now a standard tool for cybercriminals, enabling them to impersonate real people and organizations with alarming realism. This surge in deepfake deception is not only putting crime into hyperdrive, it’s also undermining digital trust and posing significant challenges to individuals, businesses, and governments worldwide.

Deepfakes have become a versatile weapon in the arsenal of cybercriminals, facilitating a range of malicious activities:

  • Phishing and Business Email Compromise (BEC): Scammers use deepfakes to impersonate executives or colleagues in video calls or voice messages, tricking employees into transferring funds or divulging sensitive information.
  • Financial Fraud: By cloning voices or creating realistic avatars, fraudsters can bypass security measures, such as voice authentication, to access bank accounts or authorize transactions.
  • Romance and Celebrity Scams: Deepfakes enable the creation of convincing personas for online relationships, leading victims to send money or personal information under false pretenses.
  • Disinformation and Political Manipulation: Synthetic media can be used to fabricate statements or actions by public figures, spreading misinformation and influencing public opinion.
  • Identity Theft: Deepfake technology can be used to fabricate new identities or steal those of real people. Attackers create false documents or clone a victim’s voice, enabling them to open accounts or make purchases while posing as that person.
  • Bypassing Security: Deepfakes are increasingly used to defeat Know Your Customer (KYC) protocols. With many financial institutions requiring live video calls to verify customers’ identities, deepfake tools can generate fake video that looks like the real person, complete with realistic heartbeats, sweating, and eye movement.
  • Sextortion: Thousands of cases have been reported of criminals using deepfake technology to create highly realistic explicit images and videos of victims, then demanding substantial payments in exchange for not sharing the fakes publicly.
  • Document Forgery: Recent studies have shown that deepfake technology can generate identification documents, including driver’s licenses and passports, that fool most identity verification systems.

Real-Life Examples of Deepfake Attacks

  • Hong Kong Bank Fraud: In a sophisticated scam, fraudsters used deepfake technology to impersonate a company’s chief financial officer during a video call, convincing a bank to transfer $25 million to their account.
  • Celebrity Impersonation Scams: A French woman was deceived into believing she was in a relationship with a deepfaked version of actor Brad Pitt, resulting in her losing nearly a million euros.
  • Fake Investment Promotions: Deepfakes of public figures like Martin Lewis have been used to promote fraudulent investment schemes on social media platforms, misleading users into financial losses.
  • Ferrari CEO Impersonation Attempt: In July 2024, scammers targeted Italian automotive giant Ferrari by impersonating CEO Benedetto Vigna. Using AI-generated voice technology, the fraudsters contacted senior executives via WhatsApp, initiating conversations about a fictitious acquisition and requesting assistance. To verify the caller’s identity, the executives asked a personal question about a book Vigna had recently recommended. The impersonator’s inability to answer led to the exposure of the scam.
  • Seattle Crosswalks Hacked with Deepfake Audio: In April 2025, several crosswalks in Seattle were hacked to play deepfake audio recordings mimicking Amazon founder Jeff Bezos. The messages, which included political statements and references to controversial figures, disrupted pedestrian signals and raised concerns about public safety and the potential for AI-generated misinformation in public infrastructure.
  • Romance Scam in Scotland Using Deepfake Videos: A 77-year-old retired lecturer from Edinburgh was deceived into transferring £17,000 to a scammer posing as a romantic partner. The fraudster used AI-generated videos and messages to create a convincing persona, leading the victim to believe in the authenticity of the relationship. The scam included fabricated stories about financial hardships and plans to visit, exploiting the victim’s emotions and trust.
  • Deepfake Robocall Targeting New Hampshire Primary: In January 2024, an AI-generated robocall impersonating President Joe Biden was disseminated to voters in New Hampshire, urging them to abstain from voting in the primary election. The high-quality audio aimed to suppress voter turnout and sow confusion. The incident prompted investigations by the Federal Communications Commission and highlighted the potential of deepfakes to interfere with democratic processes.
  • WPP CEO Impersonation Attempt: In May 2024, fraudsters attempted to scam WPP, the world’s largest advertising firm, by impersonating its CEO, Mark Read. Using AI-generated audio and video, the attackers set up a fake Microsoft Teams meeting, during which they tried to deceive an agency leader into initiating a new business venture and soliciting funds. The vigilance of WPP staff prevented the scam from succeeding.

How to Spot Deepfakes

Identifying deepfakes—whether video or voice—requires a mix of technological tools and human awareness. As deepfake technologies become more advanced and available, it’s going to get even harder to spot them.

But no matter how sophisticated the attack or attempt, the best defense is still context. For example, even if the CEO or client in the video looks very real, have they ever used a video call to make this type of request or in this kind of context?

Requests, communications, or meetings that are out of the ordinary are often the first and easiest warning sign, and a prompt to verify through another channel before acting.

So what else should you look for in videos?

Inconsistent Facial Movements and Expressions

  • What to look for: Odd blinking patterns, lack of micro-expressions, or unnatural smiles.
  • Why it matters: Deepfake models often struggle to accurately mimic subtle human facial dynamics.

Lip-Sync Issues

  • What to look for: Misalignment between lip movements and speech, especially with complex words or fast speaking.
  • Why it matters: AI-generated audio and video can fall out of sync, revealing synthetic origin.
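This cue can even be checked programmatically. The sketch below is a rough illustration, not a production detector: it builds two synthetic signals in NumPy (an assumed audio loudness envelope and a mouth-openness track, both invented for the demo) and estimates the lag between them with cross-correlation. A large lag between voice and lips is exactly the sync error described above.

```python
import numpy as np

def av_sync_lag(audio_env, mouth_open):
    """Estimate the frame lag between an audio loudness envelope and a
    mouth-openness track using normalized cross-correlation.
    A negative lag means the mouth movement trails the audio."""
    a = (audio_env - audio_env.mean()) / audio_env.std()
    m = (mouth_open - mouth_open.mean()) / mouth_open.std()
    corr = np.correlate(a, m, mode="full")
    return int(corr.argmax()) - (len(m) - 1)

# Synthetic demo: build a smooth "loudness" curve, then make the mouth
# signal the same curve delayed by 5 frames -- a noticeable sync error.
rng = np.random.default_rng(42)
env = np.convolve(rng.random(220), np.ones(8) / 8, mode="valid")
audio = env[5:205]                                    # audio leads...
mouth = env[0:200] + 0.01 * rng.standard_normal(200)  # ...mouth trails

lag = av_sync_lag(audio, mouth)
print(f"estimated lag: {lag} frames")
```

In a real pipeline the mouth-openness track would come from facial landmark tracking and the envelope from the soundtrack, but the alignment test itself is this simple.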

Lighting and Shadows

  • What to look for: Inconsistent lighting on the face versus the background, or shadows that fall in unnatural directions.
  • Why it matters: Deepfakes may fail to render lighting that matches real-world physics.

Unnatural Eye Movement

  • What to look for: Eyes that don’t track objects or the camera properly, or an unnatural glassy appearance.
  • Why it matters: Eye tracking is extremely hard to replicate accurately in synthetic video.

Robotic or Unemotional Voice Tones

  • What to look for: Voices that lack natural inflection, pause at odd moments, or sound flat or overly smoothed.
  • Why it matters: AI-generated speech often struggles to capture human emotion and variation.

Audio Quality and Background Sounds

  • What to look for: Crystal-clear voice but no ambient noise, or mismatched background audio.
  • Why it matters: Voice deepfakes may lack environmental audio or sound “too clean” to be real.
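The "too clean" heuristic can also be sketched in code. The toy example below (an illustration under assumed signal parameters, not a real forensic tool) estimates a recording's noise floor from its quietest frames; genuinely recorded audio almost always has measurable room noise, while fully synthetic speech can be implausibly silent between words.

```python
import numpy as np

def noise_floor_db(signal, frame=512):
    """Rough noise-floor estimate: 10th percentile of per-frame RMS, in dB.
    A suspiciously low floor can indicate studio-clean synthetic audio."""
    n = len(signal) // frame
    frames = signal[: n * frame].reshape(n, frame)
    rms = np.sqrt((frames ** 2).mean(axis=1))
    return 20 * np.log10(np.percentile(rms, 10) + 1e-12)  # tiny epsilon avoids log(0)

# Synthetic demo: half a second of "speech" (a tone) followed by silence.
rng = np.random.default_rng(1)
sr = 16_000
t = np.arange(sr) / sr
voice = np.concatenate([np.sin(2 * np.pi * 220 * t[: sr // 2]),
                        np.zeros(sr // 2)])

nf_clean = noise_floor_db(voice)                                  # no ambient noise at all
nf_noisy = noise_floor_db(voice + 0.01 * rng.standard_normal(sr)) # realistic room noise

print(f"clean: {nf_clean:.1f} dB, noisy: {nf_noisy:.1f} dB")
```

The perfectly clean signal bottoms out at the numerical floor, while the recording with room noise sits at a plausible level; an anomalously low floor is one more reason to treat a clip with suspicion.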

Contextual Inconsistencies

  • What to look for: Statements or behavior that seem out of character for the person, or anachronisms (e.g., referring to events out of timeline).
  • Why it matters: Deepfake creators may not fully understand the subject’s personality or history.

Artifacts and Glitches

  • What to look for: Blurring, warping around the face (especially near the eyes and mouth), or pixelation during head movement.
  • Why it matters: These are signs of AI-generated overlays, especially in lower-quality deepfakes.

The Role of Generative Adversarial Networks

As we marvel at how rapidly deepfakes have evolved, and how easily even the most vigilant can be fooled by them, it’s worth understanding where they come from. And that’s where Generative Adversarial Networks, or GANs, come into play.

A GAN is a type of artificial intelligence that pits two neural networks against each other with the goal of creating the most believable deepfake through competition.

The first network, known as the generator, is tasked with creating the most believable deepfake it can, such as a video. It then challenges the other network, the discriminator, to tell whether the video is real or fake. With each round of the discriminator’s feedback, the generator gets better at fine-tuning the deepfake, to the point that it is almost indistinguishable from the real thing.
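A heavily simplified sketch of that competition, for illustration only: here the "generator" is just two numbers mapping noise to samples, the "discriminator" is a logistic classifier, and the target is a one-dimensional bell curve. Real deepfake GANs use deep networks over images or audio, but the adversarial loop is the same.

```python
import numpy as np

# Toy 1-D GAN: the generator maps noise z ~ N(0, 1) to samples a*z + b
# and tries to imitate "real" data drawn from N(4, 0.5). The discriminator
# is a logistic classifier d(x) = sigmoid(w*x + c).
rng = np.random.default_rng(0)
sigmoid = lambda t: 1.0 / (1.0 + np.exp(-t))

a, b = 1.0, 0.0          # generator parameters (fake = a*z + b)
w, c = 0.0, 0.0          # discriminator parameters
lr_d, lr_g, batch = 0.05, 0.01, 64
b_history = []

for step in range(4000):
    real = rng.normal(4.0, 0.5, batch)
    z = rng.standard_normal(batch)
    fake = a * z + b

    # Discriminator: gradient ascent on log d(real) + log(1 - d(fake))
    d_real = sigmoid(w * real + c)
    d_fake = sigmoid(w * fake + c)
    w += lr_d * np.mean((1 - d_real) * real - d_fake * fake)
    c += lr_d * np.mean((1 - d_real) - d_fake)

    # Generator: gradient ascent on log d(fake) (non-saturating loss)
    d_fake = sigmoid(w * fake + c)
    grad_x = (1 - d_fake) * w        # d/dx of log d(x)
    a += lr_g * np.mean(grad_x * z)  # dx/da = z
    b += lr_g * np.mean(grad_x)      # dx/db = 1
    b_history.append(b)

# GAN training oscillates, so report a trailing average of the generator's
# mean; it should drift toward the real data's mean of 4.0.
b_avg = float(np.mean(b_history[-1000:]))
print(f"generator mean after training: {b_avg:.2f}")
```

Even in this toy, neither network is ever shown "the answer": the generator only learns from how the discriminator reacts, which is exactly what makes the resulting fakes so hard to distinguish from the real thing.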

DEEPFAKES

Deepfake makers can now evade an unusual detection method

AI-powered deepfake videos with altered facial expressions can display realistic heartbeats through skin colour changes, which may hinder one deepfake detection method. READ MORE.

AI

This Dealership Never Existed – AI Made It All Look So Real

The gleaming John Deere tractors on Dalton Tractor and Equipment’s website look pristine, their prices low enough to attract a flurry of buyers hoping to score a great deal. The listings pop up in Google searches when buyers are looking for used tractors and other farming equipment.

But the entire operation – from the pictures of the smiling staff to the enthusiastic customer testimonials – is an elaborate ruse created by international scammers using cutting-edge AI. READ MORE.

SCAMS

AI Deepfakes Merged Scammers’ Faces Onto Stolen Identities

Hong Kong police have arrested eight people accused of operating a sophisticated scam ring that used AI to bypass banks’ identity checks and steal millions of dollars.

A report out of Hong Kong reveals that triad gangs merged their own facial features with stolen identity cards to create convincing deepfakes that fooled multiple banks. READ MORE.

AI

Gartner Predicts AI Agents Will Reduce The Time It Takes To Exploit Account Exposures by 50% by 2027

Technology-enabled social engineering will also pose a significant threat to corporate cybersecurity. Gartner predicts 40% of social engineering attacks will target executives as well as the broader workforce by 2028. READ MORE.