Artificial Intelligence Makes Ransomware More Dangerous

It just keeps getting easier to create ransomware. Last year, schoolkids were doing it. Just last month, ransomware went open source. And now ChatGPT is enabling script kiddies to write functional malware. But just how big a threat is this, really?

In the real world, the part where AIs can write code isn’t particularly scary. Yes, thanks to AI, advanced coding skills aren’t required to make malware anymore, but that’s been true for some time. You’d be surprised what you can get written for little money using Amazon’s Mechanical Turk.

Code is cheap. With even modest knowledge, you can build a fairly complex application using little more than Stack Exchange as your resource. The part where AIs can churn out convincing phishing emails in bulk, however, is the harder problem.

The code churned out by ChatGPT in the linked article is quite simple malware. It’s “viable,” but that isn’t the same as “good.” That will change over time, however.

Today’s ChatGPT is built on a model with about 175 billion parameters. The next generation of AIs is already being designed with parameter counts in the multiple trillions. Today’s AIs are to the AIs of 2030 as the Bulletin Board Systems (BBSes) of the 1980s were to the Internet of the early 2000s. That’s the entire point of AI: it will just keep getting better.

So will we eventually see AI-created ransomware in the wild? Yes. Should black-hat coders be worried about job security? Not for some time; checking AI-generated code for errors is going to require experienced developers for at least the next decade.

AI is also a double-edged sword. Defenders make use of machine learning too: nearly every endpoint solution has behavior-based detection as part of its arsenal, and the same techniques are becoming more common in network-based defenses as well.
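To make the defensive side concrete, here is a minimal sketch of the idea behavior-based detection builds on: fit an anomaly detector on normal per-process behavior, then flag anything that falls far outside it. The feature set and numbers below are hypothetical, purely for illustration; real products track far richer telemetry.

```python
# Minimal sketch of behavior-based anomaly detection
# (feature names and values are hypothetical).
import numpy as np
from sklearn.ensemble import IsolationForest

# Each row describes one process over a short window:
# [files modified/min, mean entropy of bytes written,
#  child processes spawned, outbound connections]
baseline = np.array([
    [3, 0.42, 1, 2],
    [5, 0.38, 0, 4],
    [2, 0.51, 1, 1],
    [4, 0.45, 2, 3],
    [6, 0.40, 1, 2],
])

# Train on normal behavior only; outliers will score as anomalous.
detector = IsolationForest(contamination=0.1, random_state=0)
detector.fit(baseline)

# A ransomware-like burst: mass file modification with near-random
# (encrypted) output, plus a spawn storm.
suspect = np.array([[900, 0.99, 12, 1]])
print(detector.predict(suspect))  # -1 = anomaly, 1 = normal
```

The design point is that mass high-entropy file writes look anomalous regardless of whether the binary matches any known signature, which is why behavior-based detection holds up better against novel, AI-generated malware than signature matching does.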

The AI developments we should be concerned about are the ones that make phishing even easier, because phishing directly attacks individual humans. And it will be a long time (if ever) before anyone outside the 1% can afford AI protection that keeps pace with the sophistication of the bad guys, as this story notes: AI-Generated Phishing Emails Just Got Much More Convincing.

And the phishing threat goes far beyond having AIs write emails. Consider this article: VALL-E AI Can Mimic a Person’s Voice from a Three-Second Snippet. As with ChatGPT, this AI is in the early stages of development. It requires high-quality audio for training data—any old recording won’t do. And it takes time to make a convincing fake. But these fakes can be created, and lots of people you might want to imitate (such as CEOs) give long keynote speeches into high-quality audio equipment that’s conveniently recorded and posted online.

AIs will make the price of entry into a life of cybercrime even lower than it is today, but the cost of being a cybercriminal is already very low. Ransomware as a Service has already removed the requirement for would-be extortionists to know much about cryptography, advanced coding, or even how to gain access to other networks. The time to really start worrying is when it becomes cheaper and easier to make a living as an AI-assisted cybercriminal than to work within the legal confines of our existing social and economic systems. But we wouldn’t let that happen … would we?
