According to a report by Singapore-based cybersecurity firm Group-IB, over 100,000 login credentials for OpenAI's AI-powered chatbot ChatGPT were stolen and leaked onto the dark web. The credentials were harvested by the Raccoon Infostealer, malware that is typically activated when a potential victim clicks a link in a phishing email or in fraudulent messages sent over social media or text. The malware collects saved login credentials, browsing history, cookies, and even cryptocurrency wallet information from web browsers. Between June 2022 and May 2023, more than 101,000 devices containing compromised ChatGPT logins were discovered on dark web marketplaces. Nearly 41,000 of the stolen credentials belonged to users in the Asia-Pacific region, the most affected region. Info stealers have emerged as a major source of compromised personal data due to their simplicity and effectiveness.
OpenAI Is Not to Blame for the Leak of User Credentials
According to Dmitry Shestakov, Head of Threat Intelligence at Group-IB, ChatGPT accounts that used a direct authentication method (a username and password rather than single sign-on) were the most targeted by cybercriminals. However, Shestakov made clear that OpenAI is not responsible for the security breach: the credentials were stolen by malware running on users' own devices, not taken from OpenAI's systems.
The research also highlighted the potential danger of using ChatGPT for work: user queries and chat history are saved by default, so a cybercriminal who gains access to an account can read confidential information and use it to launch targeted attacks against companies and their employees.
The report revealed that nearly 27,000 stolen ChatGPT logins were offered for sale online in May 2023 alone. To prevent unauthorized access, Group-IB advises ChatGPT users to update their passwords regularly and enable two-factor authentication to enhance the security of their accounts.
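Two-factor authentication in this context typically means time-based one-time passwords (TOTP), where an authenticator app and the service share a secret and derive a short code from the current time. The sketch below is illustrative only, not OpenAI's implementation; it follows the public RFC 6238 algorithm using just the Python standard library:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, timestamp=None, digits=6, period=30):
    """Compute an RFC 6238 TOTP code from a base32-encoded shared secret."""
    if timestamp is None:
        timestamp = int(time.time())
    counter = timestamp // period  # number of elapsed 30-second time steps
    key = base64.b32decode(secret_b32, casefold=True)
    msg = struct.pack(">Q", counter)  # 8-byte big-endian counter (RFC 4226)
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation: low nibble picks the offset
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

# RFC 6238 test secret "12345678901234567890" (base32 below); at t=59 the
# reference SHA-1 TOTP with 8 digits is 94287082.
print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", timestamp=59, digits=8))  # → 94287082
```

Because the code changes every 30 seconds, a leaked password alone (as in the credentials sold on dark web marketplaces) is not enough to log in to a 2FA-protected account.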
Samsung Bans Generative AI After Internal Data Leak
Samsung recently imposed a temporary ban on the use of generative AI tools such as ChatGPT and Google Bard on company-owned devices, after incidents in which staff members allegedly uploaded sensitive internal data to ChatGPT. The ban extends to platforms like Microsoft Bing, which is built on the same OpenAI technology.
According to data compiled by Statista, there were 319 cases of sensitive company data being shared with ChatGPT per 100,000 employees during the week of April 9 to 15, 2023, an increase of roughly 60% compared with observations between February and March 2023. The second-most common type of confidential data shared with ChatGPT was source code, at 278 cases per 100,000 employees.
China's payment and clearing industry association, which operates under the country's central bank, has warned of the risks of uploading confidential documents to AI tools such as OpenAI's ChatGPT, citing concerns including cross-border data leaks. Accordingly, payment industry staff have been advised not to upload sensitive information to AI chatbots, including customer data, code used in payment and settlement infrastructure, and other confidential information about the country, the financial industry, and their own companies.
Disclaimer: This article is provided for informational purposes only. It is not offered or intended to be used as legal, tax, investment, financial, or other advice.