The rapid emergence of OpenAI’s ChatGPT has been one of the biggest stories of the year, with the potential impact of generative AI chatbots and large language models (LLMs) on cybersecurity a key area of discussion. There’s been a lot of chatter about the security risks these new technologies could introduce — from concerns about sharing sensitive business information with advanced self-learning algorithms to malicious actors using them to significantly enhance attacks.

Some countries, US states, and enterprises have ordered bans on the use of generative AI technology such as ChatGPT on data security, protection, and privacy grounds. Clearly, the security risks introduced by generative AI chatbots and LLMs are considerable. However, generative AI chatbots can also enhance cybersecurity for businesses in multiple ways, giving security teams a much-needed boost in the fight against cybercriminal activity.

Generative AI models can significantly enhance the scanning and filtering of code for security vulnerabilities, according to a Cloud Security Alliance (CSA) report exploring the cybersecurity implications of LLMs. In the paper, CSA demonstrated that OpenAI’s Codex API is an effective vulnerability scanner for programming languages such as C, C#, Java, and JavaScript.
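As a rough illustration of what such LLM-based vulnerability scanning can look like in practice, the sketch below prompts an OpenAI model to review a code snippet for security flaws. This is not the CSA paper's actual setup: the Codex API referenced in the report has since been deprecated, so the sketch substitutes the current chat completions endpoint, and the model name, prompt wording, `build_scan_prompt` helper, and example snippet are all illustrative assumptions.

```python
# Hypothetical sketch of LLM-assisted vulnerability scanning.
# Assumptions: the `openai` Python package is installed and an
# OPENAI_API_KEY is available; the prompt format is illustrative.
import os


def build_scan_prompt(source_code: str, language: str) -> str:
    """Assemble a vulnerability-review prompt for an LLM."""
    return (
        f"Review the following {language} code for security "
        "vulnerabilities (e.g. buffer overflows, injection flaws, "
        "unsafe deserialization). For each finding, name the issue "
        "and suggest a short fix.\n\n"
        f"```{language}\n{source_code}\n```"
    )


# Deliberately unsafe C snippet: strcpy with no bounds check.
SNIPPET = """\
char buf[16];
strcpy(buf, user_input);
"""

prompt = build_scan_prompt(SNIPPET, "c")


def scan_with_llm(prompt: str) -> str:
    """Send the scan prompt to an LLM; only runs with an API key set."""
    from openai import OpenAI  # assumes the openai package is installed

    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; any code-capable model works
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content


if os.environ.get("OPENAI_API_KEY"):
    print(scan_with_llm(prompt))
```

In a real pipeline, the model's findings would be triaged alongside conventional static-analysis output rather than trusted on their own, since LLM scanners can both miss flaws and report false positives.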
Full story: 6 ways generative AI chatbots and LLMs can enhance cybersecurity.