Machine learning (ML) and artificial intelligence (AI) are rapidly unlocking new business opportunities and influencing every industry. That said, AI comes with its own set of risks: threat actors are known to employ a range of novel techniques to exploit weaknesses in AI models, security standards and processes, to dangerous effect. Organizations keen on leveraging AI must be aware of these risks so they can build AI systems that are resilient to cyberattacks. The relatively new MITRE security framework dubbed ATLAS (short for Adversarial Threat Landscape for Artificial-Intelligence Systems) describes a number of tactics and techniques cybercriminals can use to attack AI.

Machine learning and AI models work by analyzing vast amounts of data to learn patterns and then make predictions or decisions. This underlying mechanism creates opportunities for novel attacks on AI and ML-based systems. If attackers inject malicious training data, ML models learn incorrect information and subsequently make faulty, fraudulent or malicious predictions. For instance, an attacker could poison a credit card fraud detection algorithm with fake transactions, causing the AI-powered scanner to ignore or wave through genuinely fraudulent transactions.

Evasion attacks are a method where cybercriminals fool or circumvent AI systems by exploiting vulnerabilities in the model's algorithm or detection mechanism. If bad actors gain an understanding of how an AI model works and the features it uses to reach a decision, they can craft inputs that slip past it. For example, attackers have been able to evade some AI-based facial recognition systems simply by donning a T-shirt with a face printed on it.
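The fraud-detection poisoning scenario above can be sketched in a few lines. The snippet below is a minimal toy illustration, not a real fraud system: the synthetic data, cluster positions, and poison volume are all assumptions chosen to make the effect visible. An attacker who can inject fraud-like transactions mislabeled as "legitimate" into the training set teaches the classifier that the fraud region of feature space is safe.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import recall_score

rng = np.random.default_rng(0)

# Toy "transaction" features: legitimate activity clusters near (0, 0),
# fraudulent activity near (3, 3). Entirely synthetic, for illustration only.
legit = rng.normal(0.0, 1.0, size=(500, 2))
fraud = rng.normal(3.0, 1.0, size=(500, 2))
X = np.vstack([legit, fraud])
y = np.array([0] * 500 + [1] * 500)  # 1 = fraud

clean_model = LogisticRegression().fit(X, y)

# Poisoning step: the attacker injects fraud-like points mislabeled as
# legitimate (label 0), so the model learns that this region is benign.
poison = rng.normal(3.0, 1.0, size=(800, 2))
X_poisoned = np.vstack([X, poison])
y_poisoned = np.concatenate([y, np.zeros(800, dtype=int)])

poisoned_model = LogisticRegression().fit(X_poisoned, y_poisoned)

# Compare how much held-out fraud each model still catches.
test_fraud = rng.normal(3.0, 1.0, size=(200, 2))
clean_recall = recall_score(np.ones(200), clean_model.predict(test_fraud))
poisoned_recall = recall_score(np.ones(200), poisoned_model.predict(test_fraud))
print(f"fraud caught by clean model:    {clean_recall:.0%}")
print(f"fraud caught by poisoned model: {poisoned_recall:.0%}")
```

Running this, the clean model flags nearly all of the held-out fraud while the poisoned model lets a large share of it through, which is exactly the failure mode described above. Defenses typically start with controlling and auditing the provenance of training data.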
Full report: AI Models Under Attack: Protecting Your Business From AI Cyberthreats.