OpenAI credentials stolen by the thousands for sale on the dark web

Malicious ChatGPT alternative advertised on the dark web

Threat actors are showing an increased interest in generative artificial intelligence tools, with hundreds of thousands of OpenAI credentials for sale on the dark web and a malicious ChatGPT alternative being advertised to cybercriminals.

Both novice and seasoned cybercriminals can use these tools to create more convincing phishing emails customized for the intended audience, increasing the chances of a successful attack.

Hackers tapping into GPT AI

Over a six-month period, dark web and Telegram users mentioned ChatGPT, OpenAI’s artificial intelligence chatbot, more than 27,000 times, according to data from Flare, a threat exposure management company, shared with BleepingComputer.

Analyzing dark web forums and marketplaces, Flare researchers noticed that OpenAI credentials are among the latest commodities available.

The researchers identified more than 200,000 OpenAI credentials for sale on the dark web in the form of stealer logs.

Compared to ChatGPT’s estimated 100 million active users in January, the count seems insignificant, but it does show that threat actors see some potential for malicious activity in generative AI tools.

A report in June from cybersecurity company Group-IB said that illicit marketplaces on the dark web traded logs from info-stealing malware containing more than 100,000 ChatGPT accounts.

Cybercriminals’ interest in these utilities has been piqued to the point that one of them developed a ChatGPT clone named WormGPT and trained it on malware-focused data.

The tool is advertised as the “best GPT alternative for blackhat” and a ChatGPT alternative “that lets you do all sorts of illegal stuff.”

WormGPT advertised on a cybercriminal forum
source: SlashNext

WormGPT is based on GPT-J, an open-source large language model developed in 2021 to produce human-like text. Its developer says they trained the tool on a diverse set of data, with a focus on malware-related material, but provided no details about the specific datasets.
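For context, the base GPT-J model is freely available through the Hugging Face transformers library. The minimal sketch below, which assumes the public EleutherAI/gpt-j-6B checkpoint and a machine with enough memory to hold the 6-billion-parameter model, shows how anyone can load and prompt it; it illustrates only the openly distributed base model, not WormGPT’s private fine-tuning.

```python
# Minimal sketch: loading and prompting the public GPT-J model via
# Hugging Face transformers. This is the openly available model WormGPT
# is reportedly built on, not WormGPT itself.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "EleutherAI/gpt-j-6B"  # public GPT-J checkpoint

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "Artificial intelligence is"
inputs = tokenizer(prompt, return_tensors="pt")

# Sample a short, human-like continuation of the prompt.
output_ids = model.generate(
    **inputs,
    max_new_tokens=50,
    do_sample=True,
    temperature=0.8,
)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

Because the base model and weights are public, anyone can fine-tune a copy on data of their choosing, which is what makes claims like WormGPT’s plausible.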

WormGPT shows potential for BEC attacks

Email security provider SlashNext was able to gain access to WormGPT and carried out a few tests to determine the potential danger it poses.

The researchers’ focus was on creating messages suitable for business email compromise (BEC) attacks.

“In one experiment, we instructed WormGPT to generate an email intended to pressure an unsuspecting account manager into paying a fraudulent invoice,” the researchers explain.

“The results were unsettling. WormGPT produced an email that was not only remarkably persuasive but also strategically cunning, showcasing its potential for sophisticated phishing and BEC attacks,” they concluded.

WormGPT generates a persuasive message for a BEC attack
source: SlashNext

Analyzing the results, SlashNext researchers identified the advantages generative AI can bring to a BEC attack: beyond the “impeccable grammar” that lends the message legitimacy, it can also enable less skilled attackers to carry out attacks above their level of sophistication.

Although defending against this emerging threat may be difficult, companies can prepare by training employees to verify messages that demand urgent attention, especially when a financial component is present.

Improving email verification processes should also pay off, for instance by alerting on messages that originate outside the organization or by flagging keywords typically associated with BEC attacks.
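As a rough illustration of that kind of screening rule, the sketch below flags messages from senders outside the organization and bodies containing phrases common in BEC lures. The internal domain and keyword list are assumptions chosen for the example; a production filter would use far more signals than simple keyword matching.

```python
# Minimal sketch of a BEC screening rule: flag mail from external
# senders and text containing phrases common in BEC lures.
# INTERNAL_DOMAIN and BEC_KEYWORDS are illustrative assumptions.
INTERNAL_DOMAIN = "example.com"
BEC_KEYWORDS = (
    "wire transfer",
    "urgent payment",
    "updated bank details",
    "invoice attached",
    "keep this confidential",
)

def flag_message(sender: str, subject: str, body: str) -> list[str]:
    """Return reasons this message deserves an alert, if any."""
    reasons = []
    # Rule 1: sender address is outside the organization's domain.
    if not sender.lower().endswith("@" + INTERNAL_DOMAIN):
        reasons.append(f"sender outside the organization: {sender}")
    # Rule 2: subject or body contains known BEC lure phrases.
    text = (subject + " " + body).lower()
    hits = [kw for kw in BEC_KEYWORDS if kw in text]
    if hits:
        reasons.append("BEC keywords present: " + ", ".join(hits))
    return reasons

# Example: an external look-alike domain pressing for a payment
# triggers both rules.
alerts = flag_message(
    "ceo@exarnple.com",  # look-alike domain, a classic BEC trick
    "Urgent payment needed today",
    "Please process the wire transfer and keep this confidential.",
)
for reason in alerts:
    print("ALERT:", reason)
```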

