Pretty soon there won't be a single cybercrime, scam, or fraud that doesn't use AI

Studies have found that anywhere between 50% and 85% of cyberattacks, scams, and frauds show some element of AI involvement. Soon, many will be orchestrated entirely by AI.

And it’s not necessarily about the creation of entirely new crimes, but about fueling all the traditional scams at a scale, pace, and accuracy never witnessed before.

As new technologies emerge to counter this threat, those technologies are just as quickly defeated by AI.

But the good news is that the best defense against AI is HI (human intelligence), and the most powerful crime-fighting technology of them all is still wedged right between your ears.


 

AI CRIME – FACT, FICTION, OR FEARMONGERING?

 

AI crime is here to stay. It’s getting more advanced every single day, and pretty soon every type of cybercrime, scam, and fraud will involve a very high level of AI. Many of these crimes will be created and managed entirely by AI.

Deepfake attacks alone are believed to be costing businesses more than $40 billion a year, prompting the World Economic Forum to warn that deepfakes now pose one of the greatest global risks.

We’ve identified more than 20 ways AI is already changing crime and criminal behavior, some of them quite disturbing. But that doesn’t mean we’re powerless.

 

1.   MAKING THE TREASURE TROVE OF STOLEN DATA USABLE

 

The criminal world is drowning in stolen data. In 2021 alone, it’s estimated that more than 40 billion records were exposed or stolen in data breaches. According to Juniper Research, nearly 150 billion records were compromised in just the last five years.

There are an estimated 24 billion stolen credentials (username and password combos) currently circulating on the dark web. That’s three complete sets of credentials for every human on earth. And in January 2024, a stash of more than 26 billion records – data from multiple recent and previous breaches – was discovered on an unprotected server.

Until recently, criminals were limited by time and tools in what they could do with all this information. But AI is making it much easier for cybercriminals to sort through these billions of records and solve one of the biggest criminal challenges – connecting the dots: analyzing those vast troves of stolen information to find the pieces that match, then putting them together so they can be used to commit convincing crimes, and at scale.
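To see just how trivial this dot-connecting has become, here’s a minimal Python sketch. The breaches, field names, and records below are entirely invented for illustration; the point is that once two dumps share a key like an email address, merging them into a fuller profile is a one-loop job – and AI tooling now automates that same join across billions of real rows.

```python
# Hypothetical illustration only: both "breaches" and all records are fake.

# Fragment from one (fictional) breach: emails and phone numbers
breach_a = {
    "jane@example.com": {"phone": "555-0132"},
    "sam@example.com": {"phone": "555-0199"},
}

# Fragment from another (fictional) breach: emails and home addresses
breach_b = {
    "jane@example.com": {"address": "12 Elm St"},
}

# Merge on the shared key (email) to assemble fuller profiles
profiles = {}
for email in breach_a.keys() | breach_b.keys():
    merged = {}
    merged.update(breach_a.get(email, {}))
    merged.update(breach_b.get(email, {}))
    profiles[email] = merged

print(profiles["jane@example.com"])
# {'phone': '555-0132', 'address': '12 Elm St'}
```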

See item 9 below, where we explain how AI agents are being used to accelerate one type of attack known as credential stuffing.

 

2.  A NEW WAVE OF IDENTITY THEFT

 

Speaking of stolen data, identity theft has been the top consumer crime for more than a decade and relies on a constant feed of personal information. The more information, and the more accurate that information, the better. And that’s where AI comes in.

Not only is AI making it easier to capitalize on the billions of personal records already stolen in data breaches, it’s making it much easier to launch more (and more convincing) identity thefts. And it’s already happening: in its third annual Identity Fraud Report, verification company Sumsub reported that in the US alone, deepfake-based identity fraud surged 1,740% in 2023.

 

3.  A SPIKE IN SYNTHETIC IDENTITY THEFT

 

Synthetic identity theft – where criminals use a mixture of real and concocted information to create entirely new identities – is nothing new, but like so much else in crime, AI is making it much easier to grow and scale.

For example, a synthetic identity could combine a real Social Security number and address with entirely made-up supporting material like photos and utility bills. Using these hybrid identities, thieves are able to open multiple bank accounts, credit card accounts, and lines of credit.

These identities could also be used to create entire personas. One security expert predicted that a synthetic identity could be used to apply for employment benefits, housing assistance, food stamps, and other benefits totaling more than $2 million per identity.

The losses are mounting, with the auto industry alone losing more than a billion dollars every year to this type of fraud.

 

4.  A NEW GENERATION OF PHISHING ATTACKS

 

In the year following the launch of ChatGPT there was a reported 1,265% increase in phishing emails and a nearly 1,000% rise in credential phishing.

More of these phishing attacks are successfully tricking recipients because AI is making it easier to create, launch, and manage massive spam and phishing campaigns that are so well-researched and convincing, they’re almost impossible to spot. And accurately translating these phishing and business email compromise (BEC) emails into multiple languages is also a breeze for AI.

A variety of recent studies have shown that anywhere between 50% and 85% of today’s phishing attacks display some element of AI.

 

5.  FILLING IN THE BLANKS

 

One of the best ways to verify whether a person is real is to look at their past. What does the Internet say about them? What evidence is there to prove, or at least suggest, that they really exist?

Before AI, it was almost impossible to create a believable fake Internet history. But thanks to AI, it’s much easier to create very detailed and believable online profiles and histories, from professional websites to complete social media profiles, LinkedIn pages, employment history, and even certifications.

AI tools have already been shown to create realistic websites within minutes, whether to give a criminal a fake identity, serve as a front for fraudulent job offers, launch B2B frauds, or support phishing or BEC campaigns.

A fake website can include logos, team members with complete profiles, product and service descriptions, social media, testimonials and reviews, blogs, press releases, physical addresses, phone numbers, and so on. So if you want to claim, for example, that you’ve been running your own construction business for the last 20 years and employ 30 people, AI will make that happen. Or at least appear to happen.

 

6.  A REAL WORLD OF DEEPFAKES

 

In just the last year we’ve seen a huge increase in reports of AI successfully creating very realistic fake versions of real humans – photos, voices, and videos so realistic and lifelike that it’s almost impossible even for friends and family members to distinguish them from the real person.

And with the growth in use of video for everything from training to marketing and PR to social media, snippets of our voices and faces exist everywhere. Scammers are now able to use just a few seconds of these snippets to create complete clones of our voices.

And the scams are working. In February 2024 a multinational firm in Hong Kong revealed it had lost $25 million to a scam in which an employee was duped by a video call populated entirely by deepfaked colleagues. By some estimates, losses fueled by generative AI could reach $40 billion by 2027.

In one demonstration, a security expert was able to trick an employee of the 60 Minutes program into sharing the passport number of one of its correspondents, simply by using a clone of the correspondent’s voice. The attack took less than five minutes to construct. CHECK OUT OUR GUIDE TO UNDERSTANDING AND SPOTTING DEEPFAKES.

 

7.  RESEARCHING FOR BIGGER PHISH

 

Not all targets are created equal, and whether it’s a CEO or other executive, a wealthy consumer, or their advisers, AI will be much better at identifying and sorting the best targets.

That includes doing the in-depth background research and setting up a social engineering or phishing attack that will be very hard to detect or defend against.

For example, an AI agent could easily be programmed to build detailed portfolios on specific targets, including every public mention of them and their families, their businesses and investments, their social media profiles, their political activities, any contact information, interests, hobbies, charities, and so on.

 

8.  BEATING PASSWORDS IS GETTING MUCH EASIER

 

For many of us humans, the humble password is often the first and only line of defense guarding the things we value most.

And AI is setting its sights on them. In some recent demonstrations, AI-driven password crackers were programmed to break a collection of millions of passwords stolen in recent data breaches.

According to reports, 81% of the passwords were cracked in less than a month, 71% in less than a day, and 65% in less than an hour. Any seven-character password could be cracked in six minutes or less.

The makers of one popular password-cracking tool claim that it can crack more than 50% of the most commonly used passwords in less than a minute. And researchers in the UK used AI to crack passwords just by listening to the sounds of keystrokes, with more than 90% accuracy.
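To see why seven characters fall so fast, here’s the back-of-the-envelope arithmetic as a short Python sketch. The guess rate is an assumption for illustration only – real cracking rigs vary enormously with hardware and hash algorithm – but the exponential math is the point: every extra character multiplies the attacker’s work by the size of the character set.

```python
# Rough math behind claims like "any seven-character password in minutes".
# GUESSES_PER_SECOND is an assumed figure for illustration.

GUESSES_PER_SECOND = 1e12  # assumed: a GPU rig against a fast, unsalted hash

def seconds_to_exhaust(charset_size: int, length: int) -> float:
    """Worst-case seconds to try every password of this length and charset."""
    return charset_size ** length / GUESSES_PER_SECOND

def pretty(seconds: float) -> str:
    """Render a duration in the largest sensible unit."""
    for unit, size in (("years", 31_536_000), ("days", 86_400),
                       ("hours", 3_600), ("minutes", 60)):
        if seconds >= size:
            return f"{seconds / size:,.1f} {unit}"
    return f"{seconds:,.1f} seconds"

# 95 printable ASCII characters, lengths 7 through 12: each extra
# character multiplies the attacker's work by 95.
for length in range(7, 13):
    print(f"{length} characters: {pretty(seconds_to_exhaust(95, length))}")
```

Run it and the cliff is obvious: at these assumed rates a seven-character password falls in about a minute, while twelve characters holds out for thousands of years. Length, not cleverness, is what buys time.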

 

9.  IMPROVING CREDENTIAL STUFFING

 

Credential stuffing attacks use passwords and usernames already exposed in previous data breaches – and as noted above, some 24 billion such combos are freely available on the Internet.

The goal of the attack is to find any other sites or accounts that are using the same password and username combination – roughly one in three users is known to reuse passwords.

Trained AI agents are able to accelerate this process, by testing stolen passwords on millions more sites than a human can, by behaving like a human if they’re challenged by a website, and by learning the nuances of different websites and adjusting accordingly.

AI agents are also better at bypassing CAPTCHA and adapting to authentication systems.
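One practical defense readers can borrow: before reusing a password anywhere, check whether it already circulates in breach data. The sketch below uses the real Pwned Passwords k-anonymity API, which means only the first five characters of the password’s SHA-1 hash ever leave your machine; the function name is our own.

```python
# Defensive sketch: how many times has this password appeared in breaches?
import hashlib
import urllib.request

def times_seen_in_breaches(password: str) -> int:
    """Query the Pwned Passwords range API via k-anonymity."""
    sha1 = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = sha1[:5], sha1[5:]
    # Only the 5-character hash prefix is sent over the network
    url = f"https://api.pwnedpasswords.com/range/{prefix}"
    with urllib.request.urlopen(url) as resp:
        body = resp.read().decode("utf-8")
    # The API returns one "HASH_SUFFIX:COUNT" pair per line for that prefix
    for line in body.splitlines():
        candidate, _, count = line.partition(":")
        if candidate.strip() == suffix:
            return int(count)
    return 0

if __name__ == "__main__":
    print(times_seen_in_breaches("password123"))  # a depressingly large number
```

If the count comes back above zero, that password is already in the attackers’ stuffing lists – retire it everywhere it’s used.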

 

10.  GREAT AT FINDING SECURITY HOLES

 

For criminals, finding and exploiting the millions of security holes that exist at any given time can be a costly, time-consuming, and repetitive task – a task AI is ideally suited for.

AI can also scan billions of lines of code almost instantly to discover flaws, weaknesses, or mistakes. It’s also very good at writing exploits to take advantage of the vulnerabilities it discovers.

 

11.  EVADING ANTIVIRUS SOFTWARE

 

ChatGPT, perhaps the most popular of all AI tools, has been used to not only create malicious code but also code that’s capable of changing quickly and automatically to evade antivirus software.

In early 2023, security researchers launched a proof-of-concept malware called BlackMamba that used AI to both eliminate the need for the command-and-control infrastructure typically used by cyber criminals, and to generate new malware on the fly in order to evade detection.

And AI is also helping to make malware smarter and more capable, able to do more damage, infiltrate more deeply into a network, morph and hide, and find and steal the most valuable data.

Researchers recently showed that AI was capable of generating more than 10,000 new malware variants simply by modifying existing malware, while achieving an evasion rate of nearly 90%.

 

12.  MASTERS OF MISINFORMATION

 

Creating and spreading misinformation and disinformation is something that AI seems born to excel at. Using the same techniques and tactics as phishing campaigns, AI can be deployed to create and optimize the distribution of all kinds of misinformation, disinformation, fake news and images, and conspiracy theories.

And it will also present a frightening threat to elections and democracies. One leading AI expert admitted to being completely terrified of the 2024 election and an expected misinformation tsunami, while another expert suggested that AI will turn elections into a train wreck. Misleading AI-generated content will be at the forefront of these unsettling attacks.

 

13.  A HOTBED OF (S)EXTORTION

 

In 2023, the FBI and the Department of Justice warned that the fastest-growing crime against children was sextortion – using fake but highly realistic social media profiles to trick teens and kids into sharing sensitive or sexually explicit photos and videos, and then extorting them for money with the threat of sharing that content with family or publicly.

In 2021 the National Center for Missing and Exploited Children received 139 reports of sextortion. Two years later, that number had jumped to 26,000. AI is expected to take that kind of crime even further by generating deepfake pornographic photos and videos that appear to include the face or likeness of the victim.

One West African gang is believed to be responsible for nearly half of all global sextortion targeting minors, even advertising “how to” guides in chat rooms and on social media sites.

 

14.  A THREAT TO ECOMMERCE

 

Global security firm Sophos demonstrated how it was able to create a complete fraudulent website using nothing but AI. The site included hyper-realistic images, audio, and product descriptions, a fake Facebook login, and a checkout page able to steal users’ login credentials and credit cards. The researchers were also able to create hundreds of similar websites in a matter of minutes, at the push of a single button.

In April 2025 security researchers found a website for a John Deere dealership that was a complete deepfake, designed to defraud customers out of large deposits.

Juniper Research estimates that global losses from e-commerce fraud from 2023 to 2027 will surpass $343 billion. Those losses will likely be shared by businesses and consumers.

 

15.  A NEW GENERATION OF CRIMINALS

 

AI will make it much easier for unsophisticated and entry-level criminals or wannabes to scale up more advanced and complex attacks with fewer resources or costs.

According to security firm Trend Micro: “One thing we can derive from this (AI) is that the bar to becoming a cybercriminal has been tremendously lowered. Anyone with a broken moral compass can start creating malware without coding know-how.”

AI is making it much easier for entry-level criminals to edit and modify kits they purchase on the black market. They can then use those kits to create massive global phishing attacks, write advanced malware, launch credential stuffing attacks, crack passwords, and build fraudulent ecommerce sites.

 

16.  AI FORGING AHEAD

 

Another AI capability that will make crime easier and life harder is the deepfake forgery. AI is very capable of forging and counterfeiting the most complicated documents, including birth certificates, driver’s licenses, and even passports.

It’s also capable of forging all the stuff that’s supposed to make counterfeiting much more difficult – things like watermarks, holograms, microprinting, special fonts and logos, and of course, a user’s photo and even signature.

AI can also forge utility bills, which will make identity theft and other frauds much easier to commit. And it can easily forge and create paper trails of invoices that can be used to trick companies into inadvertently paying scammers.

 

17.  A THREAT TO PRIVACY

 

AI learns and grows from nothing but data, and it has an insatiable appetite for more – including your personal information.

With so much of this information, chances are AI will know far more about you than you’re comfortable with – about your behavior, habits, choices, preferences, political and social opinions, locations and connections, and so on. It may also, perhaps mistakenly, make inferences about you based on inaccurate or incomplete data.

 

18.  THE WEAPONIZATION OF PORN

 

The deepfake porn threat has already emerged in a number of ways, from revenge porn to sextortion to a tool for humiliating people. But the threat goes far beyond humiliation.

In 2022, as 25-year-old Cara Hunter was running for political office in Northern Ireland, her world and campaign were rocked by the release of a very graphic deepfake porn video purporting to show her. The most likely motive was to embarrass and humiliate her, and ultimately to either force her to drop out of the race or turn voters against her. She didn’t drop out, and she won – although by just a handful of votes.

This was one of the first recorded instances of deepfake porn being used to influence political races and elections, and likely won’t be the last.

 

19.  THE IMPACT ON LAW ENFORCEMENT

 

If AI really is going to mean more criminals, more crimes, and more victims, and especially victims of financial crimes and scams, chances are the first place those victims are going to turn is their local police department.

We know from two decades of fighting identity theft alone that no police department has the resources to investigate or prosecute such an overwhelming number of crimes, most of which fall far outside its jurisdiction. The last thing police need is more of these crimes.

And the rapid advancements in AI-driven document forgery will also present additional challenges for law enforcement. It will be nearly impossible, at least on initial inspection, to recognize fake driver’s licenses, vehicle registrations, and proof-of-insurance documents.

 

20.  A THREAT TO TRUST

 

One of the most important ingredients in the success of humans and communities is our ability to trust each other. Trust is hard earned and easily squandered, and thanks to AI, it’s becoming a threatened species.

We are quickly approaching a point where we humans will not trust ourselves to believe anything we see or hear. No matter how it’s presented, who’s presenting it, or how thoroughly it’s verified and authenticated. This trust deficit is likely to seep into every part of human life.

As a recent Newsweek article put it: “These developments will have far-reaching consequences. Schools and universities will face AI-generated submissions, undermining their traditional tests. Businesses will struggle to identify capable employees, as the credentials they relied on become watered down. Political leaders skilled at mastering their image in a TV context will find they no longer convey the same credibility and influence. The very concept of expertise—something people traditionally associated with professional-sounding tone and language—will lose the signal that has given credibility to its purveyors.”

 

21.  SCAM CALL CENTERS IN HYPERDRIVE

 

There are thousands of scam call centers operating around the world, usually beyond the reach of the law, and each year bilking millions of victims out of billions of dollars through tech support scams, investment scams, and romance scams.

As soon as the operators of these centers start to deploy AI to run the centers and scams, we expect even more victims.

AI can help these criminals eliminate most of their setup and operating costs (like buildings and people), churn out even more of these calls, and use conversational AI to make the calls much more convincing and effective.

That means lower costs, bigger profits, and more victims – all great incentives for more of these criminals to move to AI.

Check out our article on Fighting AI Crimes with some simple tips.

 

ABOUT THE AUTHOR

Neal O’Farrell is one of the longest-serving security experts on the planet, 40 years and counting. He has advised half a dozen governments, developed advanced encryption systems for the military and financial sectors, and won awards for his work to protect consumers from cybercrime and fraud. Meet him.

RELATED

A QUICK GUIDE TO DEEPFAKES

The deepfake has emerged as the most potent AI-driven crime tool, and it’s now available at low cost to every entry-level hacker on the planet. The more you know about these attacks, the safer you’ll be.

Have you checked out Secure In 60 Seconds?

We created a collection of more than 40 short security awareness videos called Secure In 60 Seconds and they’re available free of charge for anyone to view and use.

VIEW THEM ALL HERE
RECENT ARTICLES BY NEAL O'FARRELL

Cybercriminals Are Using AI to Scam You – and You May Not Even Know It

Criminals are Posing as the FTC to Try to Steal Your Money and Information

I'm an Identity Theft Expert. This Is One of the Scariest Types of Fraud

Considering ID Theft Protection? A Cybersecurity Expert Breaks Down the Facts

Scammers Love Your Bank Accounts. Here's How to Keep Them Safe