The hype around OpenAI’s generative AI chatbot ChatGPT swept the world earlier this year. Launched in November 2022, the large language model-based chatbot quickly captured public attention, reaching 100 million users within two months of launch. However, while ChatGPT has repeatedly demonstrated its power in writing content, answering questions and helping engineers code, a dangerous offshoot of the famous tool has now surfaced on the dark web.
First reported by Rakesh Krishnan of cyber security firm Netenrich, the new ChatGPT-style tool is called FraudGPT. According to the firm, the tool has surfaced on the dark web and has been circulating on several Telegram channels since July 22. FraudGPT is designed for cyber criminal activities, including launching phishing attacks, drafting scam emails and creating cracking tools.
As per the screenshots attached in the Netenrich report, subscriptions to the dangerous tool are being sold on the dark web and Telegram. A one-month subscription costs $200, while an annual subscription is being sold for $1,700.
According to one of the screenshots shared by the firm, the tagline of the new tool is: “Bot without limitations, rules, boundaries”. The bot has reportedly been designed for fraudsters, hackers, spammers and like-minded individuals. The activities the tool allegedly supports include writing malicious code and creating hacking tools, phishing pages and undetectable malware.
Netenrich cited several examples of how the tool aids cyber criminal activity. A cyber criminal can easily draft emails with malicious links or create scam pages that lure users into entering their confidential details. As per one of the messages cited by the firm, the tool has been sold over 3,000 times so far.
This is not the first time a dangerous tool based on GPT architecture has emerged online. A few weeks earlier, cyber security firm SlashNext had reported the emergence of WormGPT, a tool created to aid the activities of cyber criminals. It allegedly helps carry out business email compromise (BEC) attacks, in which fraudsters pretend to be someone else to dupe customers of a business.
As tech giants like Google, Microsoft, Amazon and Meta work on their own AI models, the emergence of tools like these points to the threats posed by the technology. The United States government announced on July 21 that seven tech companies, namely Amazon, Anthropic, Google, Inflection, Meta, Microsoft and OpenAI, have agreed to voluntarily implement measures to safeguard the technology. These include adding watermarks that would help users identify AI-generated material and testing the technology for security risks.