A GUIDE TO EVERYTHING DEEPFAKE

The Rise of the Deepfake Cyber Attack

The emergence of deepfakes is significantly accelerating cybercrime, scams, fraud, and phishing attacks. As the technology becomes more accessible and sophisticated, the potential for misuse grows, posing serious threats to individuals and organizations alike.

The best defense against AI is still HI – Human Intelligence. That means staying informed about the latest developments in deepfake technology, staying vigilant for signs of manipulation, and making good security and privacy choices. We hope this growing guide will help.

If you’d like to know how we can help, contact Neal O’Farrell, CEO of DropVault, at neal (at) mydropvault.com

The Rise of Deepfake-Driven Cybercrime

The rapid advancement of deepfake technology is dramatically accelerating cybercrime, fraud, scams, and phishing attacks, with one expert commenting that “the rise of AI and AI-generated videos correlates with a peak in global scam activity.”

According to the Entrust 2025 fraud report, deepfake fraud material surged by more than 3,000% in 2023 alone, and Deloitte predicts that losses to AI-assisted fraud will top $40 billion by 2027.

Once a novelty, deepfakes are now a standard tool for cybercriminals, enabling them to impersonate real people and organizations with alarming realism. This surge in deepfake deception is not only putting crime into hyperdrive, it’s also undermining digital trust and posing significant challenges to individuals, businesses, and governments worldwide.

Deepfakes have become a versatile weapon in the arsenal of cybercriminals, facilitating a range of malicious activities including:

  • Phishing and Business Email Compromise (BEC): Scammers use deepfakes to impersonate executives or colleagues in video calls or voice messages, tricking employees into transferring funds or divulging sensitive information.
  • Financial Fraud: By cloning voices or creating realistic avatars, fraudsters can bypass security measures, such as voice authentication, to access bank accounts or authorize transactions.
  • Romance and Celebrity Scams: Deepfakes enable the creation of convincing personas for online relationships, leading victims to send money or personal information under false pretenses.
  • Disinformation and Political Manipulation: Synthetic media can be used to fabricate statements or actions by public figures, spreading misinformation and influencing public opinion.
  • Identity Theft: Deepfake technology can be used to create new identities and steal the identities of real people. Attackers use the technology to create false documents or fake their victim’s voice, enabling them to open accounts or purchase products while pretending to be that person.
  • Bypassing Security: Deepfake technologies are increasingly being used to bypass “know your customer” (KYC) technologies and protocols. With many financial institutions requiring live video calls to verify a customer’s identity, deepfake technologies can create fake videos that look like the real person, complete with realistic heartbeats, sweating, and eye movement.
  • Sextortion: There have been thousands of reported cases of criminals using deepfake technologies to create highly realistic explicit images and videos of victims, then demanding substantial payments in return for not sharing the fakes publicly.
  • Document Forgery: Recent studies have shown that deepfake technologies can generate identification documents, including driver’s licenses and passports, that fool most identity verification systems.

Real-Life Examples of Deepfake Attacks

  • Hong Kong Bank Fraud: In a sophisticated scam, fraudsters used deepfake technology to impersonate a company’s chief financial officer during a video call, convincing a bank to transfer $25 million to their account.
  • Celebrity Impersonation Scams: A French woman was deceived into believing she was in a relationship with a deepfaked version of actor Brad Pitt, resulting in her losing nearly a million euros.
  • Fake Investment Promotions: Deepfakes of public figures like Martin Lewis have been used to promote fraudulent investment schemes on social media platforms, misleading users into financial losses.
  • Ferrari CEO Impersonation Attempt: In July 2024, scammers targeted Italian automotive giant Ferrari by impersonating CEO Benedetto Vigna. Using AI-generated voice technology, the fraudsters contacted senior executives via WhatsApp, initiating conversations about a fictitious acquisition and requesting assistance. To verify the caller’s identity, the executives asked a personal question about a book Vigna had recently recommended. The impersonator’s inability to answer led to the exposure of the scam.
  • Seattle Crosswalks Hacked with Deepfake Audio: In April 2025, several crosswalks in Seattle were hacked to play deepfake audio recordings mimicking Amazon founder Jeff Bezos. The messages, which included political statements and references to controversial figures, disrupted pedestrian signals and raised concerns about public safety and the potential for AI-generated misinformation in public infrastructure.
  • Romance Scam in Scotland Using Deepfake Videos: A 77-year-old retired lecturer from Edinburgh was deceived into transferring £17,000 to a scammer posing as a romantic partner. The fraudster used AI-generated videos and messages to create a convincing persona, leading the victim to believe in the authenticity of the relationship. The scam included fabricated stories about financial hardships and plans to visit, exploiting the victim’s emotions and trust.
  • Deepfake Robocall Targeting the New Hampshire Primary: In January 2024, an AI-generated robocall impersonating President Joe Biden was disseminated to voters in New Hampshire, urging them to abstain from voting in the primary election. The high-quality audio aimed to suppress voter turnout and sow confusion. The incident prompted investigations by the Federal Communications Commission and highlighted the potential of deepfakes to interfere with democratic processes.
  • WPP CEO Impersonation Attempt: In May 2024, fraudsters attempted to scam WPP, the world’s largest advertising firm, by impersonating its CEO, Mark Read. Using AI-generated audio and video, the attackers set up a fake Microsoft Teams meeting, during which they tried to deceive an agency leader into initiating a new business venture and soliciting funds. The vigilance of WPP staff prevented the scam from succeeding.

How to Spot Deepfakes

Identifying deepfakes—whether video or voice—requires a mix of technological tools and human awareness. As deepfake technologies become more advanced and available, it’s going to get even harder to spot them.

But no matter how sophisticated the attack or attempt, the best defense is still context. For example, even if the CEO or client in the video looks very real, have they ever used a video call to make this type of request or in this kind of context?

Requests, communications, or meetings that are out of the ordinary are often the first and easiest warning sign that something deserves a second look.

So what else should you look for in videos?

Inconsistent Facial Movements and Expressions

  • What to look for: Odd blinking patterns, lack of micro-expressions, or unnatural smiles.
  • Why it matters: Deepfake models often struggle to accurately mimic subtle human facial dynamics.

Lip-Sync Issues

  • What to look for: Misalignment between lip movements and speech, especially with complex words or fast speaking.
  • Why it matters: AI-generated audio and video can fall out of sync, revealing synthetic origin.

Lighting and Shadows

  • What to look for: Inconsistent lighting on the face versus the background, or shadows that fall in unnatural directions.
  • Why it matters: Deepfakes may fail to render lighting that matches real-world physics.

Unnatural Eye Movement

  • What to look for: Eyes that don’t track objects or the camera properly, or an unnatural glassy appearance.
  • Why it matters: Eye tracking is extremely hard to replicate accurately in synthetic video.

Robotic or Unemotional Voice Tones

  • What to look for: Voices that lack natural inflection, pause at odd moments, or sound flat or overly smoothed.
  • Why it matters: AI-generated speech often struggles to capture human emotion and variation.

Audio Quality and Background Sounds

  • What to look for: Crystal-clear voice but no ambient noise, or mismatched background audio.
  • Why it matters: Voice deepfakes may lack environmental audio or sound “too clean” to be real.
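
The “too clean” telltale can even be checked programmatically. The sketch below is purely illustrative (the function names, the 10th-percentile method, and the −70 dB threshold are our own assumptions, not a real detector): it estimates a recording’s background noise floor from its quietest frames and flags audio with implausibly little ambient noise.

```python
# Illustrative heuristic only: real deepfake detection is far more involved.
# Flags audio whose background noise floor is suspiciously low ("too clean").
import numpy as np

def noise_floor_db(samples: np.ndarray, frame: int = 1024) -> float:
    """Estimate the noise floor as the 10th-percentile frame RMS, in dB."""
    n = len(samples) // frame
    frames = samples[: n * frame].reshape(n, frame)
    rms = np.sqrt((frames ** 2).mean(axis=1)) + 1e-12  # avoid log(0)
    return float(20 * np.log10(np.percentile(rms, 10)))

def looks_too_clean(samples: np.ndarray, threshold_db: float = -70.0) -> bool:
    # Threshold is an arbitrary illustration; real recordings vary widely.
    return noise_floor_db(samples) < threshold_db

# Synthetic demo: a speech-like tone with a pause, with and without room noise.
rng = np.random.default_rng(1)
t = np.linspace(0, 1, 16000)
voice = 0.3 * np.sin(2 * np.pi * 220 * t)
voice[12000:] = 0.0                              # a pause in the "speech"
noisy = voice + 0.01 * rng.normal(size=t.size)   # same audio plus ambient noise
print(looks_too_clean(voice), looks_too_clean(noisy))
```

The synthetic “clean” clip has dead silence in its pause and gets flagged; the version with simulated room noise does not.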

Contextual Inconsistencies

  • What to look for: Statements or behavior that seem out of character for the person, or anachronisms (e.g., referring to events out of timeline).
  • Why it matters: Deepfake creators may not fully understand the subject’s personality or history.

Artifacts and Glitches

  • What to look for: Blurring, warping around the face (especially near the eyes and mouth), or pixelation during head movement.
  • Why it matters: These are signs of AI-generated overlays, especially in lower-quality deepfakes.

A Quick Note

When videos produced by Google’s new Veo 3 AI video platform began to circulate in May 2025, the biggest telltale was that the quality and realism were just too good. Almost flawless. And life’s just not like that. Some observers suggested that it appeared Google’s AI had been trained by the massive CBS/Paramount library of TV shows and movies. It’s the little things!

The Role of Generative Adversarial Networks

As we marvel at how rapidly deepfakes have evolved, and how easily even the most vigilant can be fooled by them, it’s worth understanding where they come from. That’s where Generative Adversarial Networks, or GANs, come into play.

A GAN is a type of artificial intelligence that pits two neural networks against each other with the goal of creating the most believable deepfake through competition.

The first network, known as the generator, is tasked with creating the most believable deepfake it can, such as a video. It then challenges the other network, the discriminator, to tell whether the video is real or fake. The more feedback the discriminator gives, the better the generator becomes at fine-tuning the deepfake, to the point that it is almost indistinguishable from the real thing.
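
That adversarial loop can be sketched in a few lines of code. This toy one-dimensional GAN is nothing like a real deepfake model, but it shows the same idea: the generator learns to turn random noise into samples resembling “real” data, while the discriminator learns to tell the two apart. All names and numbers here are illustrative choices, not a standard implementation.

```python
# A minimal, illustrative 1-D GAN. "Real" data is drawn from a normal
# distribution; the generator is a simple linear map from noise.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Generator: fake = w_g * z + b_g
w_g, b_g = rng.normal(), rng.normal()
# Discriminator: D(x) = sigmoid(w_d * x + b_d), the probability x is real
w_d, b_d = rng.normal(), rng.normal()

lr = 0.01
for step in range(5000):
    # Train the discriminator on one real and one fake sample
    # (gradient ascent on log D(real) + log(1 - D(fake)))
    real = rng.normal(4.0, 1.0)          # "real" data: mean 4, std 1
    fake = w_g * rng.normal() + b_g
    d_real = sigmoid(w_d * real + b_d)
    d_fake = sigmoid(w_d * fake + b_d)
    w_d += lr * ((1.0 - d_real) * real - d_fake * fake)
    b_d += lr * ((1.0 - d_real) - d_fake)

    # Train the generator to fool the discriminator
    # (gradient ascent on log D(fake) with respect to w_g, b_g)
    z = rng.normal()
    fake = w_g * z + b_g
    grad = (1.0 - sigmoid(w_d * fake + b_d)) * w_d
    w_g += lr * grad * z
    b_g += lr * grad

# The generator's outputs get pushed toward regions the discriminator rates as real
fakes = w_g * rng.normal(size=1000) + b_g
print(f"mean of generated samples: {fakes.mean():.2f}")
```

Real deepfake systems replace those two linear functions with deep neural networks trained on images, video, or audio, but the tug-of-war between generator and discriminator is the same.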

Defending Against Deepfake Attacks

The good news is that the best defense against any AI-generated attack is still HI – human intelligence. Your vigilance and awareness, your caution and common sense, and your decisions and choices will be your best defenses for the foreseeable future.

In the meantime:

  • Get clients or colleagues to agree that important decisions or transactions will never be concluded by video call. Choose some other channel instead.
  • Think about the context. Context is the most powerful red flag in detecting deepfakes – if a request or method is unusual, out of the ordinary, a surprise to you, then investigate further.
  • Create code or safe words. These are pre-agreed words or phrases that can be used to easily authenticate a phone or video call.
  • Protect your communications and data, especially email. A recent analysis of 19 million stolen passwords found that 94% of them were either weak or reused. Hackers will often target email to research victims and spoof email addresses.
  • Encrypt where you can. Stolen data is of no value to hackers if it’s properly encrypted.
  • Create awareness reminders for yourself, team members, employees, and even family members.
  • If a client or colleague insists on a video call for any sensitive transaction, ask them to move/point the camera around the room during the conversation. That’s almost impossible to deepfake.
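
To illustrate the “code or safe words” tip above, here is a small sketch of generating a random passphrase with a cryptographically secure random number generator. Both parties agree on the phrase in advance and ask for it at the start of any sensitive call. The word list and function name are our own illustration; a real deployment would use a much larger list.

```python
# Generate a random safe phrase for authenticating sensitive calls.
import secrets

# Small illustrative word list; use a much larger one in practice.
WORDS = [
    "harbor", "velvet", "quartz", "meadow", "lantern", "cobalt",
    "willow", "ember", "falcon", "orchid", "summit", "pebble",
]

def make_safe_phrase(n_words: int = 3) -> str:
    """Pick n_words at random using a cryptographically secure RNG."""
    return "-".join(secrets.choice(WORDS) for _ in range(n_words))

print(make_safe_phrase())  # e.g. "ember-quartz-willow"
```

The key design choice is the `secrets` module rather than `random`: safe words are only useful if an attacker researching you cannot predict them.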

But Remember!

Whatever the deepfake, whether it’s video, voice cloning, forged documents, fake resumes, synthetic identities, or entirely fictitious websites, the more alert we are at spotting them, the harder criminals will work to make them more convincing.

It’s a constant battle of wits. So no matter how helpful these prevention tips are, you can never drop your guard. Deepfake attacks are getting smarter and more advanced and chances are we’ll all be targeted with them eventually and frequently.

Protecting The Most Vulnerable

In any workplace, any employee can be targeted by and fall for a deepfake attack. But outside the workplace, the most vulnerable targets are kids and the elderly.

Kids and teens are highly susceptible to deepfake porn and sextortion attacks and these attacks can have a devastating psychological and emotional impact.

For seniors and the elderly, investment, romance, and tech support scams steal billions of dollars every year from the most vulnerable. And bigger than the financial loss is often the emotional impact. The loss of security, the shame and guilt, and the anger.

So if you have kids, or older parents or grandparents, it’s vitally important to not only teach them how to spot a deepfake or any other kind of scam, but to keep reminding them so that they don’t drop their guard when they need it most.

Demo

Want to see what the future of deepfake videos looks like?

Check out this video of a busy car show. Except it’s not really a car show but a complete deepfake created entirely by prompts or instructions.

Meaning the creator just gave the AI platform (Google Veo), a set of instructions about what the final video should look like.

Can you spot any of the telltale giveaways?

DEEPFAKES

Deepfake Makers Can Now Evade An Unusual Detection Method

AI-powered deepfake videos with altered facial expressions can display realistic heartbeats through skin colour changes, which may hinder one deepfake detection method. READ MORE.

AI

This Dealership Never Existed – AI Made It All Look So Real

The gleaming John Deere tractors on Dalton Tractor and Equipment’s website look pristine, their prices low enough to attract a flurry of buyers looking to score a great deal. Those deals pop up in Google searches when buyers are looking for used tractors and other farming equipment.

But the entire operation – from the pictures of the smiling staff to the enthusiastic customer testimonials – is an elaborate ruse created by international scammers using cutting-edge AI. READ MORE.

SCAMS

AI Deepfakes Merged Scammers Faces On Stolen Identities

Hong Kong police have arrested eight people accused of operating a sophisticated scam ring that used AI to bypass banks’ identity checks and steal millions of dollars.

A report out of Hong Kong revealed that triad gangs merged their own facial features onto stolen identity cards to create convincing deepfakes that fooled multiple banks. READ MORE.

AI

Gartner Predicts AI Agents Will Reduce The Time It Takes To Exploit Account Exposures by 50% by 2027

Technology-enabled social engineering will also pose a significant threat to corporate cybersecurity. Gartner predicts 40% of social engineering attacks will target executives as well as the broader workforce by 2028. READ MORE.