What’s at stake when corporations don’t put strategies in place to protect their employees and customers? Everything, says Juan Rivera, senior solutions engineer at Telesign. “From a regulatory standpoint, Meta was recently slapped with a $1.3 billion fine by the European Union for violating data privacy – and they were made an example for companies that cannot afford a $1.3 billion fine,” Rivera explains. “There’s financial loss, as well as potentially huge reputational loss when both customer and employee trust is damaged. Most companies don’t have the flexibility or luxury to absorb these kinds of losses.” In other words, failing to put safety practices in place is incredibly expensive on every front.

The most current cybercriminal schemes are not new at all — fraudsters have been using these tactics for years, but now they’re backed by generative AI. Phishing emails that trick victims into revealing login credentials or sensitive information are written with convincing ChatGPT-generated scripts. Data breaches that bypass safety checks are made possible by tricking generative AI into writing malicious code that exposes the chat history of active users and personally identifiable information such as names, email addresses, payment addresses, and even the last four digits and expiration dates of credit cards.
Full story: The next wave of cyber threats: Defending your company against cybercriminals empowered by generative AI.