The recent rise in attention on generative AI such as ChatGPT, DALL-E, and other natural language processing models has raised concerns. Although the technology has increased the ease of use and accuracy of AI and made it more accessible to the public, security researchers worry that these platforms will be used to develop malicious exploits and more sophisticated, effective cyberattacks. Generative AI tools have the potential to change how cyber threats are developed and executed, meaning that security researchers will have to shift tactics. Platforms such as ChatGPT could be used to create convincing phishing emails, social engineering attacks, and other malicious content.
According to security researchers, the tools can also be used to write exploits and malicious code. Although many of these platforms include safeguards to prevent their use for nefarious purposes, those safeguards are easily circumvented. The potential impacts of generative AI on cybersecurity include the risk of threat actors generating human-like written copy and conducting live dialog, via chat or mimicked voice mannerisms, for phishing phone calls.