Hype vs. Reality: AI in the Cybercriminal Underground

This report discusses the state of generative artificial intelligence (AI) in the cybercriminal underground: how cybercriminals are using ChatGPT, how they're adding ChatGPT features to their criminal products, and how they’re trying to remove censorship to ask ChatGPT anything.

By David Sancho and Vincenzo Ciancaglini

Generative artificial intelligence (AI) has seen remarkable growth in the industry lately, driven by constant media exposure and new disruptive applications being released almost every week. But do we see the same kind of disruption around this technology in the criminal underground?

No.

The cybercriminal world is no stranger to AI applications; malicious actors have been abusing AI even before generative AI became a massive industry trend. In fact, we investigated how the criminal underground was exploiting AI in a paper co-published with Europol and UNICRI. The report, "Malicious Uses and Abuses of Artificial Intelligence," was released in 2020, just one week after GPT-3 (Generative Pre-trained Transformer-3) was first announced.

Since then, AI technologies such as GPT-3 and ChatGPT have taken the world by storm. The IT industry has seen an eruption of new large language models (LLMs) and tools that either try to compete with ChatGPT or fill in the gaps left by OpenAI: open-source and specialized models, along with new research and applications on how to improve, and even attack, LLMs.

One could expect the criminal underground to pick up on these innovations and build menacing new applications, and much of the press seems to favor this theory. But is this really the case? Interest in generative AI in the underground has certainly followed the general market hype, as evidenced by the fact that we are starting to see sections on underground forums dedicated to AI, such as the English-language hacking forum Hack Forums, which now has a section called "Dark AI."

Figure 1. Hack Forums now has a "Dark AI" section

However, the topics discussed in that section are far from any disruptive breakthrough. In the criminal world, most of the conversations surrounding AI are about possible new ways of removing the censorship limitations of ChatGPT or about new criminal alternatives to ChatGPT.

How criminals use ChatGPT

In our observations of the criminal underground, we noticed that there is a lot of chatter about ChatGPT — what it can do for criminals and how they can best use its capabilities. This is not surprising: ChatGPT has become useful to all developers everywhere, and criminals are not an exception.

There are a few main topics around which these conversations are centered. We will discuss what we have seen in three subsections: how cybercriminals are using ChatGPT themselves, how they're adding ChatGPT features to their criminal products, and how they’re trying to remove censorship to ask ChatGPT anything.

Criminals and developers use ChatGPT to improve their code

Malware developers tend to use ChatGPT to generate code snippets faster than they otherwise could. Based on our observations, they typically ask ChatGPT for specific functionalities and then incorporate the AI-generated code into malware or other components.

There are many examples of malware developers claiming to have used ChatGPT to come up with their creations. On the Cracked.io forum, a user was offering a "SoundCloud viewbot" that was supposedly almost completely generated by ChatGPT from separate prompts and then pieced together by the user.

Figure 2. A post on Cracked.io promoting a bot that was supposedly fully programmed by ChatGPT

This bot is not a solitary case. Developers, both good and bad, are turning to ChatGPT to help them refine their code or to generate base code outright, which they then tweak and build upon.

Criminals offer ChatGPT as a feature in some of their malicious offerings

ChatGPT excels at crafting text that seems believable, a capability that can be abused in spam and phishing campaigns. We have observed how some of the cybercriminal products in this space have started to incorporate a ChatGPT interface that allows customers to create spam and phishing email copy.

For example, we have observed a spam-handling piece of software called GoMailPro, which supports AOL Mail, Gmail, Hotmail, Outlook, ProtonMail, T-Online, and Zoho Mail accounts and is mainly used by criminals to send spam emails to victims. On April 17, 2023, the software author announced on the GoMailPro sales thread that ChatGPT had allegedly been integrated into the GoMailPro software to help draft spam emails.

Figure 3. GoMailPro allegedly integrates ChatGPT in its interface to help customers draft spam emails

This shows that criminals have already realized just how powerful ChatGPT is when it comes to generating text. Also, ChatGPT supports many languages, which is an enormous advantage to spammers, who mainly need to create persuasive texts that can fool as many victims as possible.

Criminals ‘jailbreak’ ChatGPT to remove censorship limitations

ChatGPT is programmed not to reply to illegal and controversial topics, a limitation that greatly affects the kinds of questions people can ask it. However, illegal or malicious topics are precisely the ones that criminals need advice on. Therefore, some people in this community are focused on creating, finding, and sharing ChatGPT prompts that can bypass the chatbot's censorship limitations.

In the “Dark AI” section on Hack Forums, a popular thread is “DAN 7.0 [FFEN],” wherein ChatGPT jailbreak prompts are discussed and shared. “FFEN,” which stands for “Freedom From Everything Now,” is a jailbreak prompt that creates a sort of ChatGPT alter ego and enables replies that have all ethical limitations removed. The FFEN jailbreak prompt is a modified version of the original DAN prompt, which stands for “Do Anything Now.”

Figure 4. A post from Hack Forums describing a ChatGPT jailbreak prompt

DAN is the original jailbreak prompt, but other prompts that create an alternate, censorship-free ChatGPT alter ego, such as "Evil Confidant," also exist. In fact, there is a whole page of jailbreak prompts that users can try and then vote up or down.

Figure 5. The “Evil Confidant” ChatGPT prompt on “Jailbreak Chat,” a repository of jailbreak prompts

OpenAI, the developer of ChatGPT, usually reacts to and disables jailbreak attempts. These restriction bypasses are a constant game of cat and mouse: as new updates are deployed to the LLM, jailbreaks are disabled. Meanwhile, the criminal community reacts and attempts to stay one step ahead by creating new ones.

Other criminal AI-based chatbots

Since around June 2023, underground forums have seen the emergence of language models claimed to be tailored to criminals.

All of them seem to address similar cybercriminal pain points: the need for anonymity and the ability to evade censorship and generate malicious code. However, some of these language models appear more legitimate than others, and it is tricky to discern which ones could be actual LLMs trained on custom data and which are merely wrappers around ChatGPT with a jailbreak prompt. If the latter, these tools would only be a clever way to scam customers by riding the GPT wave.

The rise and fall of WormGPT

WormGPT was first announced on Hack Forums as being in development in March 2023. According to the seller, its goals were to generate fast and stable replies, have unlimited characters, sidestep censorship, and have a strong focus on privacy.

Figure 6. A post on Hack Forums announcing the development of WormGPT in March 2023

After generating a lot of interest in the dedicated thread, WormGPT was officially released to the public in June 2023 via a promotional post on Hack Forums.

Figures 7 and 8. A post on Hack Forums advertising the sale of WormGPT in July 2023

In the announcement post, the author detailed WormGPT’s advertised features, such as supporting "multiple AI models," memorizing conversation context, and even an upcoming code formatting feature.

The chatbot costs €100 per month or €550 per year, with a private setup option that costs €5,000. In several posts on the original WormGPT thread, the author stated that the chatbot is not just an interface relying on a jailbreak prompt for ChatGPT, which he describes as unreliable. The author claimed that WormGPT is a fully customized system, which is why it is free from any limitations imposed by OpenAI.

Figures 9 and 10. The WormGPT author explaining on Hack Forums that the chatbot doesn’t rely on ChatGPT jailbreak prompts and uses a new API

In an interview with Brian Krebs, the WormGPT author shared that he trained a GPT-J 6B model, although we don't know which dataset was used for any further fine-tuning.

On August 8, 2023, the sales of the service came to a stop. According to the author, the project ended due to excessive media exposure, which resulted in bad publicity.

Figure 11. The final comment on Hack Forums from WormGPT's author announcing that he is stopping the WormGPT project

With all its controversies, we still think that WormGPT was most likely the only actual custom LLM among those cited on Hack Forums.

The Big LLM Bargain Sale: FraudGPT, DarkBARD, DarkBERT, DarkGPT

Since July 22, 2023, we have witnessed a steady stream of announcements related to AI services on the Telegram channel called “Cashflow Cartel.” The announcements initially just listed possible AI-powered activities, such as writing malicious code, creating phishing websites, finding marketplaces, and discovering carder sites. They claimed to support different AI models and to have more than 3,000 successful users, with the price set at US$90 per month. However, no specific service name was provided in the initial Telegram posts.

Since then, we have noticed the advertisements for these AI services evolving. On July 27, 2023, we saw multiple services, such as DarkBARD, DarkBERT, and FraudGPT, being offered online.

The post author and channel admin claim to only be service resellers. Although each AI-powered service has slightly different monthly fees, there is no clear description of the features each service provides.

Product | Price
FraudGPT | US$90/month
DarkBARD | US$100/month
DarkBERT | US$110/month
DarkGPT | US$200 (lifetime subscription)

Table 1. A summary of our findings based on advertisements posted on the “Cashflow Cartel” Telegram channel

However, as of August 2023, all the original offerings have been removed from the Telegram channel and no further mention of these services has been made since.

Several elements suggest that it is very unlikely that these were legitimate new language models developed with the sole intent of supporting criminal activities:

  • As WormGPT showed, even with a dedicated team of people, it would take months to develop just one customized language model. On top of the lengthy development time, malicious actors would need to shoulder the development costs of not one but four different systems with various capabilities, including the cost of the computational power required for training and the subsequent cloud computing costs required for any fine-tuning.
  • Despite all the announcements, we could not find any concrete proof that these systems work. Even for FraudGPT, the best known of the four LLMs, only promotional material and demo videos from the seller can be found on other forums.

Figure 12. A post on the Club2CRD forum that discusses FraudGPT

Our hypothesis is that these services most likely work as wrappers that redirect requests to the legitimate ChatGPT or Google Bard tools, using stolen accounts through a VPN connection and jailbroken user prompts. To avoid confusion: a legitimate tool called DarkBERT does exist, developed by the company S2W by training a RoBERTa (A Robustly Optimized BERT Pretraining Approach) model on data from the dark web. However, the company uses this technology internally to perform tasks such as dark web page classification and breached data detection. Because S2W does not seem to be offering DarkBERT to third parties, the DarkBERT advertised in this post probably just shares the same name, coincidentally or otherwise.
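To illustrate how low the technical bar for such an offering would be, here is a minimal, hypothetical sketch in Python of what a wrapper of this kind could look like. It uses only the publicly documented OpenAI chat API; the model name, prompt placeholder, and relay function are our own illustrative assumptions, not code taken from any of the tools discussed.

import os
from openai import OpenAI  # publicly documented OpenAI Python client (v1.x style)

# Hypothetical illustration only: a "wrapper service" can be little more than a relay that
# prepends a reseller-controlled system prompt and forwards the customer's request to ChatGPT.
client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])  # could equally be a stolen or shared key

RESELLER_SYSTEM_PROMPT = "..."  # whatever instructions the reseller preloads for its customers

def relay(user_request: str) -> str:
    # Forward the request to the legitimate API and return the reply as if it came from a custom LLM
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # placeholder model name
        messages=[
            {"role": "system", "content": RESELLER_SYSTEM_PROMPT},
            {"role": "user", "content": user_request},
        ],
    )
    return response.choices[0].message.content

A few dozen lines like these, fronted by a Telegram bot and routed through a VPN, would be enough to resell ChatGPT access under a new name, which is consistent with the lack of evidence that any of these services run custom models.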

Meanwhile, the first use of the DarkGPT moniker can be traced to a Hack Forums post from March 30, 2023, by someone claiming to be developing DarkGPT as a malicious alternative to ChatGPT. However, given the lack of follow-up from the user, it could have been a different project with the same name. Most likely, the advertised DarkGPT service was a script that jailbreaks ChatGPT prompts, and the pricing plan of the DarkGPT service on Telegram (a one-off payment as opposed to a monthly subscription fee) is consistent with this.

Figure 13. A post on Hack Forums from March 2023 discussing a tool called DarkGPT

Other possibly fake LLMs: WolfGPT, XXXGPT, and Evil-GPT

The four previously discussed LLMs were sold on the same thread, suggesting that they were possibly bundled together or made by the same people. The three LLMs we discuss in this section were observed separately on different forums and threads; they are unrelated to one another and to the LLMs discussed in the previous section.

On July 28, 2023, a new advertisement promoting a tool called “WolfGPT” was posted on the “KEP TEAM” Telegram channel and made the rounds on several other underground channels. The original post, which was in Arabic, described WolfGPT as "an ominous AI creation in Python, standing as a shadowy force, outclassing both WormGPT and ChatGPT. This malevolent tool possesses an arsenal of capabilities, empowering it to craft encrypted malware, cunningly deceptive phishing texts, and safeguard secrets with absolute confidentiality. Evolving daily with the latest technology, it wields an unrivaled potency that makes it a formidable force to reckon with. Handle with utmost caution!"

Figure 14. A post promoting WolfGPT on Telegram

However, nothing else can be found about this tool other than one screenshot and a GitHub repository of a possibly unrelated Python web app that acts as a wrapper around the ChatGPT API.

Figure 15. A screenshot of the WolfGPT user interface (UI) attached to the original Telegram post

On July 29, 2023, just one day after WolfGPT was promoted on Telegram, a malicious actor with the username "XXXGPT" posted on the XSS.IS forum about "a revolutionary service that offers personalized bot AI customization, backed by a dedicated team of five experts […] with no censorship or restrictions." Other than the initial XXXGPT post, there are no other posts pertaining to the service, and there is no user feedback, no questions, and no testimonials about the tool's effectiveness. Based on the initial XXXGPT post, the service costs US$90 for a one-month license.

Figures 16 and 17. The advertisement post for XXXGPT on XSS.IS

Finally, on August 9, 2023, another tool called "Evil-GPT" was advertised on BreachForums. The author claims that it’s an alternative to WormGPT, is written in Python, and costs US$10.

Figure 18. Evil-GPT advertisement post on BreachForums

Figure 19. An excerpt of an exchange about Evil-GPT in the original discussion thread on BreachForums

Based on the feedback from a forum user who claims to have tested Evil-GPT, it seems to require an OpenAI key to work, which suggests that rather than being a fully custom LLM like WormGPT was, it is simply a wrapper around ChatGPT via an API. As one user on the discussion board points out, this does not necessarily make it a useless tool for criminals: if the tool offered access to an OpenAI key not linked to the criminal's own OpenAI account, provided the latest working jailbreak prompts, and worked over a VPN, it might just be a bargain for US$10.

Deepfake services for criminals

Another area where criminals are turning to AI is deepfakes. The term “deepfake” refers to replacing a person's likeness in a video with someone else's to produce a false depiction of the victim. These fake videos can be used to extort victims, lend credence to fake news, or improve the believability of social engineering tricks. We analyzed deepfakes in our 2020 report, “Malicious Uses and Abuses of Artificial Intelligence.”

Over the last two years, we have been hearing about how deepfake technology has been improving. Today, generating a deepfake of acceptable quality for common use can be done relatively quickly with the right computing setup. However, getting something that is indistinguishable from reality can take many hours of professional post-processing.

The typical way of producing deepfakes involves the use of generative adversarial networks (GANs). This approach pits the AI that generates the video against another AI that attempts to detect whether the video is fake. The generative AI keeps modifying the video until the detecting AI no longer flags it as fake. This process involves a lot of work, and even then, it doesn't guarantee that the final product is good enough to fool a human.
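For readers unfamiliar with the mechanism, below is a minimal, generic sketch of that adversarial training loop in PyTorch. It is a toy illustration of the generator-versus-discriminator idea only; the network sizes, random stand-in data, and step count are placeholders we chose for clarity, and this is in no way a deepfake pipeline.

import torch
import torch.nn as nn

latent_dim, data_dim, batch = 64, 784, 32  # placeholder sizes (e.g., flattened 28x28 frames)

# Generator turns random noise into a synthetic sample; discriminator scores real vs. fake
generator = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(), nn.Linear(256, data_dim), nn.Tanh())
discriminator = nn.Sequential(nn.Linear(data_dim, 256), nn.LeakyReLU(0.2), nn.Linear(256, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCELoss()

real_batch = torch.rand(batch, data_dim) * 2 - 1  # stand-in for real training images

for step in range(1000):
    # 1) Train the discriminator to tell real samples from generated ones
    fake_batch = generator(torch.randn(batch, latent_dim)).detach()
    d_loss = bce(discriminator(real_batch), torch.ones(batch, 1)) + \
             bce(discriminator(fake_batch), torch.zeros(batch, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # 2) Train the generator to produce samples the discriminator no longer flags as fake
    g_loss = bce(discriminator(generator(torch.randn(batch, latent_dim))), torch.ones(batch, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

Scaling this toy loop from random vectors to photorealistic video is what demands the heavy compute and manual post-processing described above, and that is precisely where the criminal business case currently breaks down.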

In an article from April 2023 on this topic, Intel471 researchers “found video deepfake products on underground forums to be immature at this point." They surveyed several cybercriminal offerings for these products, and they all fell short. Some were priced at around US$500, while the most expensive one cost several times the typical price. According to the report, the samples provided were not completely convincing at any price point.

Very convincing deepfakes have been shown in public, especially in movies, but moving from very realistic to completely convincing still takes considerable effort. The creation process is labor-intensive and expensive, and the return on investment needed to bridge that gap lacks a lucrative, repeatable business model for criminals. This is the missing piece right now: criminals can't spend the money needed to produce an exceptionally high-quality deepfake video because there is no real business model that can pay for it.

It is, however, more likely that criminals will start ramping up the creation and use of audio deepfakes to strengthen social engineering attacks. Audio deepfakes are a lot easier to produce to a convincing level. The criminal still needs enough voice samples of the victim, but in the case of CEO scams, for instance, this should not be a great obstacle. We expect audio deepfakes to take off a lot sooner than video deepfakes.

An overall assessment of criminal use of AI

Criminal discussions about AI do not revolve around the creation of new AI systems to aid malicious actors in their criminal deeds. We did not find any meaningful criminal discussion around topics like adversarial AI, Llama, Alpaca, GPT-3, GPT-4, Falcon, or BERT. The same goes for terms like Midjourney, DALL-E, or Stable Diffusion. Developing LLMs from scratch doesn't seem to be the main interest of the community (the aforementioned WormGPT notwithstanding), and neither is using generative image AI engines.

We reckon that there is no real need for criminals to develop a separate LLM like ChatGPT because ChatGPT already works well enough for their needs. It's clear that criminals are using AI in the same way as everyone else, and ChatGPT has the same impact among cybercriminals as it has among legitimate coders.

One thing we can derive from this is that the bar to becoming a cybercriminal has been lowered tremendously. Anyone with a broken moral compass can start creating malware without coding know-how. To quote a tweet from Andrej Karpathy, the former Director of Artificial Intelligence and Autopilot Vision at Tesla: “The hottest new programming language is English.”

The same can be said about crafting credible social engineering ploys in languages the attacker doesn't speak. Both coding skills and language fluency used to be barriers to becoming a successful cybercriminal; today, they no longer are.

As an added bonus, seasoned malware writers can now improve their code with next to no effort. Similarly, they can start developing malware in programming languages they are not very proficient in; they no longer need to be well-versed in every language because ChatGPT can work with most of them.

Given the high demand for using ChatGPT for malicious purposes, it's no wonder that criminals are very keen on jailbreaking their way to an uncensored chatbot interface. The likely future here is a service offered to other criminals that wraps the ChatGPT interface behind a prompt that is already jailbroken, possibly bundled with a VPN and perhaps an anonymous OpenAI user account. We are seeing glimpses of this in current criminal offerings, but whether they deliver on their promises remains to be seen.

Also, regular users seem willing to pay for really good ChatGPT prompts. This has led some to claim that in the future there might be so-called “prompt engineers” (we reserve judgment on this prediction). Perhaps criminal ChatGPT prompt-creation services will start becoming a viable business in the criminal underground, too. A tool offering a menu of tried-and-trusted prompts for developing excellent phishing templates or various types of malware code would surely be popular. It would allow one criminal group to specialize in maintaining and curating prompts, while the entire criminal user base benefits from it.

Finally, deepfake creation services are also being offered in the cybercriminal underground, but so far, their quality falls short of what’s needed to fully fool a victim — at least for criminal purposes. Maybe we will see more believable products in the future, but in the meantime, audio deepfakes are more likely to be used than video in fake news, targeted scams, and CEO fraud.

Conclusion

While AI is a massive topic of discussion today, it's worth noting that ChatGPT is a very recent tool, even though we already talk about the pre-chatbot era as if it were ancient history. The pace of change is so fast that we fully expect our assessment to be dated in three months' time.

AI is still in its early days in the criminal underground. The advancements we are seeing are not groundbreaking; in fact, they are moving at the same pace as in every other industry. Bringing AI to cybercrime is a major accelerant, and it significantly lowers the barrier to entry. Also, because this is the criminal underground, scam advertisements for AI tools are likely to be as common as those selling legitimate ones. Perhaps criminals can ask ChatGPT for help when it comes to sorting the good AI services from the bad.
