Over 100,000 ChatGPT accounts stolen via info-stealing malware
More than 101,000 ChatGPT user accounts have been stolen by information-stealing malware over the past year, according to dark web marketplace data.
Cyberintelligence firm Group-IB reports having identified over a hundred thousand info-stealer logs on various underground websites containing ChatGPT accounts, with the peak observed in May 2023, when threat actors posted 26,800 new ChatGPT credential pairs.
Asia-Pacific was the most heavily targeted region, with almost 41,000 compromised accounts between June 2022 and May 2023; Europe followed with nearly 17,000, while North America ranked fifth with 4,700.
Information stealers are a malware category that targets account data stored on applications such as email clients, web browsers, instant messengers, gaming services, cryptocurrency wallets, and others.
These types of malware are known to steal credentials saved in web browsers by extracting them from the browser's SQLite database and calling the Windows CryptUnprotectData function to reverse the DPAPI encryption of the stored secrets.
These credentials, and other stolen data, are then packaged into archives, called logs, and sent back to the attackers’ servers for retrieval.
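The browser-credential theft described above can be illustrated with a short sketch. The snippet below builds a mock SQLite database whose table and column names mirror Chromium's "Login Data" store (a simplified schema, for illustration only) and then reads every saved row the way a stealer would; the placeholder blob stands in for the DPAPI-encrypted password, which on a real Windows system would be decrypted with CryptUnprotectData.

```python
import os
import sqlite3
import tempfile

# Simplified mirror of Chromium's "Login Data" store. The real file holds a
# "logins" table with these columns; on Windows, password_value is a blob
# encrypted via DPAPI (CryptProtectData) and recoverable only with
# CryptUnprotectData. Here we insert a placeholder blob instead.

def create_mock_login_db(path):
    """Create a minimal mock of the browser credential database."""
    con = sqlite3.connect(path)
    con.execute(
        "CREATE TABLE logins ("
        "origin_url TEXT, username_value TEXT, password_value BLOB)"
    )
    con.execute(
        "INSERT INTO logins VALUES (?, ?, ?)",
        ("https://example.com", "alice", b"<dpapi-encrypted-bytes>"),
    )
    con.commit()
    con.close()

def dump_logins(path):
    """Read every saved credential row, as an info-stealer would."""
    con = sqlite3.connect(path)
    rows = con.execute(
        "SELECT origin_url, username_value, password_value FROM logins"
    ).fetchall()
    con.close()
    return rows

if __name__ == "__main__":
    db = os.path.join(tempfile.mkdtemp(), "Login Data")
    create_mock_login_db(db)
    for url, user, enc_password in dump_logins(db):
        # On Windows, a stealer would decrypt enc_password here with
        # CryptUnprotectData before packaging it into a "log" archive.
        print(url, user, enc_password)
```

The point of the sketch is that the stored credentials sit in an ordinary SQLite file readable by any process running as the user, which is why the only real barrier is the DPAPI encryption of the password blob itself.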
The appearance of ChatGPT accounts alongside email accounts, credit card data, cryptocurrency wallet information, and other more traditionally targeted data types signals the rising importance of AI-powered tools for users and businesses.
Because ChatGPT allows users to store conversations, accessing one’s account might mean gaining insights into proprietary information, internal business strategies, personal communications, software code, and more.
“Many enterprises are integrating ChatGPT into their operational flow,” comments Group-IB’s Dmitry Shestakov.
“Employees enter classified correspondences or use the bot to optimize proprietary code. Given that ChatGPT’s standard configuration retains all conversations, this could inadvertently offer a trove of sensitive intelligence to threat actors if they obtain account credentials.”
It is due to these concerns that tech giants like Samsung have outright banned staff from using ChatGPT on work computers, going as far as threatening to terminate the employment of those who fail to follow the policy.
Group-IB’s data indicates that the number of stolen ChatGPT logs has grown steadily over time, with almost 80% of all logs coming from the Raccoon stealer, followed by Vidar (13%) and RedLine (7%).
If you input sensitive data into ChatGPT, consider disabling the chat saving feature from the platform’s settings menu or manually deleting conversations as soon as you are done using the tool.
However, it should be noted that many information stealers snap screenshots of the infected system or perform keylogging, so even if you do not save conversations to your ChatGPT account, the malware infection could still lead to a data leak.
Unfortunately, ChatGPT has already suffered a data breach where users saw other users’ personal information and chat queries.
Therefore, those working with extremely sensitive information should avoid entering it into any cloud-based service and instead rely only on secured, locally deployed, self-hosted tools.