NIST has published a report on adversarial machine learning attacks and mitigations, cautioning that there is no silver bullet for these types of threats. Adversarial machine learning, or AML, involves extracting information about the characteristics and behavior of a machine learning system and manipulating inputs to produce a desired outcome. NIST's report focuses on four main types of attacks: evasion, poisoning, privacy, and abuse. Joseph Thacker, principal AI engineer and security researcher at SaaS security firm AppOmni, commented on the new NIST report, describing it as "the best AI security publication" he has seen.
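To illustrate the kind of input manipulation the report describes, the sketch below shows a minimal evasion-style attack against a toy logistic-regression classifier. This is not from the NIST report; it is an illustrative example of the well-known fast-gradient-sign technique, and the model, weights, and epsilon value are all assumptions chosen for the demo.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy logistic-regression "victim" model with fixed, hypothetical weights.
w = np.array([2.0, -1.0])
b = 0.0

def predict_proba(x):
    return sigmoid(np.dot(w, x) + b)

# Clean input with true label 1; the model classifies it correctly.
x = np.array([1.0, 1.0])
y = 1.0
p_clean = predict_proba(x)   # > 0.5, so predicted class 1

# Evasion via a fast-gradient-sign-style step: perturb the input in the
# direction that increases the cross-entropy loss. For logistic regression
# the input gradient is dL/dx = (p - y) * w.
eps = 0.6                    # perturbation budget (illustrative)
grad = (p_clean - y) * w
x_adv = x + eps * np.sign(grad)
p_adv = predict_proba(x_adv) # < 0.5, so the prediction flips to class 0

print(f"clean p={p_clean:.2f}, adversarial p={p_adv:.2f}")
```

The attacker never modifies the model itself; a small, targeted change to the input is enough to flip the classification, which is what makes evasion attacks hard to mitigate.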
Read more: https://www.securityweek.com/nist-no-silver-bullet-against-adversarial-machine-learning-attacks/