Consider a sudden increase in sophisticated malware attacks, advanced persistent threats (APTs), and organizational data breaches. Upon investigation, it is discovered that these attacks were crafted by cybercriminals empowered by generative AI. Who should be held accountable? The cybercriminals themselves? The generative AI bots? The organizations that created those bots? Or perhaps the governments that have failed to regulate the technology and enforce accountability?

Generative AI is a form of artificial intelligence that can produce text, images, sound, and other content from natural-language instructions or data inputs. AI-powered chatbots such as ChatGPT, Google Bard, Perplexity, and others are accessible to anyone who wants to chat, generate human-like text, create scripts, or even write complex code. A common problem, however, is that these chatbots can produce inappropriate or harmful content in response to user input, which may violate ethical standards, cause damage, or even constitute a criminal offense. These chatbots therefore include built-in security mechanisms and content filters intended to keep their output within ethical boundaries and prevent the generation of harmful or malicious content. But how effective are these defensive content-moderation measures, and how well do they align with cyber defense?
Full commentary: The Growing Cyber Threats of Generative AI: Who's Accountable?