In the wake of WormGPT, a ChatGPT clone trained on malware-focused data, a new generative artificial intelligence hacking tool called FraudGPT has emerged, and at least one more, allegedly based on Google's AI experiment Bard, is under development. Both AI-powered bots are the work of the same individual, who appears to be deep in the game of providing chatbots trained specifically for malicious purposes ranging from phishing and social engineering to exploiting vulnerabilities and creating malware.

FraudGPT came out on July 25 and has been advertised on various hacker forums by someone using the handle CanadianKingpin12, who says the tool is intended for fraudsters, hackers, and spammers. An investigation by researchers at cybersecurity company SlashNext reveals that CanadianKingpin12 is actively training new chatbots using unrestricted data sets sourced from the dark web, or basing them on sophisticated large language models developed for fighting cybercrime.
Full report: Cybercriminals train AI chatbots for phishing, malware attacks.