The era of AI ransomware is bad news for everyone – including ransomware criminals


There’s no escaping AI’s tightening grip on the technology industry, and it turns out that ransomware criminals are just as susceptible to the reality-bending excitement as anyone else.

According to an analysis of 2,800 ransomware incidents from 2023-2024 carried out by Safe Security and MIT Sloan, 80% utilized what is termed ‘adversarial AI’ in one way or another.

Inside these attacks, researchers found evidence that AI had been used to improve phishing text, create deepfakes or voice clones, bypass CAPTCHAs, modify malware for better evasion, and boost password cracking. 

This might sound as if AI is about speeding up or improving current techniques, for example, through easier malware development or better-written phishing text. But there was also evidence that AI was transforming the scale and scope of attacks in ways that would be difficult without it: 

“Adversarial AI is now automating entire attack sequences, executing with minimal human intervention, and dynamically adapting to exploit weaknesses in real time,” wrote the authors.

The PromptLock affair

As if to prove the point that ransomware is now rapidly evolving under the influence of AI, in August ESET Research discovered what appeared to be a proof-of-concept AI ransomware experiment the company dubbed ‘PromptLock’.

PromptLock was a small, highly automated binary capable of hijacking any large language model (LLM) it found running inside the victim’s network and feeding it malicious prompts.

It could then instruct the LLM to write cross-platform scripts in real time, capable not only of stealing or encrypting data but of inspecting its content to work out which files were the most important.

Eschewing static malware code, PromptLock generated its malicious scripts as it went along, and because it targeted data so selectively, it would have been very hard to detect. Its Lua scripting design meant it could operate on any OS.

Except, it later emerged, PromptLock wasn’t criminal ransomware at all but an academic proof of concept created as part of a New York University research project called ‘Ransomware 3.0’.

Panic over? In fact, there is nothing about PromptLock that attackers won’t try at some point in an era when LLMs are becoming ubiquitous in commercial environments.

Ransomware for the masses

As bad as this sounds, it’s possible that the emerging field of agentic AI poses an even bigger threat. Agentic AI takes LLM capabilities and turns them into autonomous entities that carry out a task without a human steering a proprietary chatbot service such as, say, ChatGPT.

Able to communicate with one another using defined protocols, AI agents are not science fiction – large companies are already building business logic around AI agents to automate complex tasks. Unfortunately, the same applies to ransomware criminals. 

Take Xanthorox AI, a mysterious AI agent platform that looks on the surface like another chatbot. In reality, it’s a form of agentic AI because it can be used to autonomously reason, plan, and execute sequences of tasks without human intervention. 

The power of this type of system, which operates independently of LLM services or APIs, is not that ransomware criminals might abuse it to conduct complex attacks. That is a given. The bigger worry is that anyone else could do the same, no technical expertise needed. All a malicious agent requires is someone to prompt it with commands in natural language and, in theory, agentic AI will do the rest.

Not everyone thinks agentic AI is feasible today on this scale, but the direction of travel is unmistakable. It’s often said today that AI will make many professions obsolete. Ironically, with the flowering of AI agents, this might include ransomware criminals, too. 
