Last week, Andy Grotto and I published a new working paper on policy responses to the risk that artificial intelligence (AI) systems, especially those dependent on machine learning (ML), can be vulnerable to intentional attack. As the National Security Commission on Artificial Intelligence found, “While we are on the front edge of this phenomenon, commercial firms and researchers have documented attacks that involve evasion, data poisoning, model replication, and exploiting traditional software flaws to deceive, manipulate, compromise, and render AI systems ineffective.”
Full story: Managing the Cybersecurity Vulnerabilities of Artificial Intelligence.