Expert and novice cybercriminals alike have begun using OpenAI’s chatbot ChatGPT to build hacking tools, security analysts say. In one documented case, the Israeli security company Check Point spotted a thread on a popular underground hacking forum in which a hacker said he was experimenting with the AI chatbot to “recreate malware strains.” The hacker went on to compress and share ChatGPT-written Android malware across the web; the malware could steal files of interest, Forbes reports. The same hacker showed off a second tool that installed a backdoor on a computer and could infect a PC with additional malware.

Check Point noted in its assessment that some hackers were using ChatGPT to create their first scripts. In the same forum, another user shared Python code he said could encrypt files and had been written using ChatGPT; it was, he said, the first script he had ever written. While such code could serve harmless purposes, Check Point said it could “easily be modified to encrypt someone’s machine completely without any user interaction.”

The security company stressed that while ChatGPT-coded hacking tools appeared “pretty basic,” it is “only a matter of time until more sophisticated threat actors enhance the way they use AI-based tools for bad.”