Group-IB, a Singapore-based global cybersecurity company, has identified an alarming trend in the illicit trade of compromised credentials for OpenAI’s ChatGPT on dark web marketplaces. The firm found more than 100,000 malware-infected devices with saved ChatGPT credentials over the past year.
Reportedly, the Asia-Pacific region saw the highest concentration of stolen ChatGPT accounts, accounting for over 40 percent of cases. According to Group-IB, the credentials were harvested by bad actors using Raccoon Infostealer, a type of malware that collects stored information from infected computers.
ChatGPT and a need for cybersecurity
In June 2023, OpenAI, the developer of ChatGPT, pledged $1 million toward AI cybersecurity initiatives. The pledge followed an indictment, unsealed by the Department of Justice, against 26-year-old Ukrainian national Mark Sokolovsky for his alleged involvement with Raccoon Infostealer. Since then, awareness of the impact of infostealer malware has continued to spread.
Notably, this type of malware collects a vast array of personal data, from browser-saved credentials, bank card details, and crypto wallet information to browsing history and cookies. Once collected, the data is forwarded to the malware operator. Infostealers typically propagate through phishing emails and are alarmingly effective due to their simplicity.
Over the past year, ChatGPT has emerged as a powerful and influential tool, especially among those in the blockchain and Web3 industries. It has been used throughout the metaverse for a variety of purposes, including creating a $50 million meme coin. But while ChatGPT's meteoric rise has taken the tech world by storm, it has also made the platform a lucrative target for cybercriminals.
Recognizing this growing cyber risk, Group-IB advises ChatGPT users to strengthen their account security by regularly updating passwords and enabling two-factor authentication (2FA). With 2FA enabled, users must enter an additional verification code alongside their password to access their accounts, and adoption of such measures has grown steadily as cybercrime continues to rise.
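For the curious, the verification codes used in most authenticator-app 2FA setups are time-based one-time passwords (TOTP) computed per RFC 6238. The following is a minimal generic sketch in Python using only the standard library; it illustrates the general algorithm, not OpenAI's specific implementation:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, at_time: float = None, digits: int = 6, step: int = 30) -> str:
    """Compute an RFC 6238 time-based one-time password.

    secret_b32: the shared secret, base32-encoded (as shown in most 2FA QR setups).
    at_time:    Unix timestamp to compute the code for (defaults to now).
    """
    key = base64.b32decode(secret_b32, casefold=True)
    # The moving factor is the number of `step`-second intervals since the epoch.
    counter = int((time.time() if at_time is None else at_time) // step)
    # HMAC-SHA1 over the big-endian 64-bit counter (RFC 4226 HOTP core).
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    # Dynamic truncation: pick 4 bytes at an offset given by the low nibble.
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

Because both the server and the user's authenticator app derive the code from the same shared secret and the current time window, a stolen password alone is not enough to log in.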
“Many enterprises are integrating ChatGPT into their operational flow. Employees enter classified correspondences or use the bot to optimize proprietary code,” Dmitry Shestakov, Group-IB’s Head of Threat Intelligence, said in a press release. “Given that ChatGPT’s standard configuration retains all conversations, this could inadvertently offer a trove of sensitive intelligence to threat actors if they obtain account credentials.”
Shestakov went on to note that his team continuously monitors underground communities to promptly identify hacks and leaks and mitigate cyber risks before further damage occurs. Still, regular security awareness training and vigilance against phishing attempts are recommended as additional protective measures.
The evolving landscape of cyber threats underscores the importance of proactive, comprehensive cybersecurity measures. From ethical questions to questionable Web3 integrations, the use of AI-powered tools like ChatGPT continues to grow, and so does the necessity of securing these technologies against potential cyber threats.
Editor’s note: This article was written by an nft now staff member in collaboration with OpenAI’s GPT-4.