The promised AI revolution has arrived. OpenAI’s ChatGPT set a new record for the fastest-growing user base and the wave of generative AI has extended to other platforms, creating a massive shift in the technology world. It’s also dramatically changing the threat landscape — and we’re starting to see some of these risks come to fruition. Attackers are using AI to improve phishing and fraud. Meta’s 65-billion parameter language model got leaked, which will undoubtedly lead to new and improved phishing attacks. We see new prompt injection attacks on a daily basis. Users are often putting business-sensitive data into AI/ML-based services, leaving security teams scrambling to support and control the use of these services. For example, Samsung engineers put proprietary code into ChatGPT to get help debugging it, leaking sensitive data. A survey by Fishbowl showed that 68% of people who are using ChatGPT for work aren’t telling their bosses about it. Misuse of AI is increasingly on the minds of consumers, businesses, and even the government. The White House announced new investments in AI research and forthcoming public assessments and policies. The AI revolution is moving fast and has created four major classes of issues.
Full story : How generative AI is creating new classes of security threats.