The Pentagon Wants to Stop Enemies From ‘Poisoning’ AI
Through its “Guaranteeing AI Robustness against Deception” (GARD) program, the Pentagon’s Defense Advanced Research Projects Agency (DARPA) is working to guard against malicious manipulation of AI systems. In its project solicitation, DARPA writes that “the growing sophistication and ubiquity of ML [Machine Learning] components in advanced systems dramatically…increases opportunities for new, potentially unidentified vulnerabilities…the field now appears increasingly pessimistic, sensing that developing effective ML defenses may prove significantly more difficult than designing new attacks, leaving advanced systems vulnerable and exposed.”

Through inference attacks, adversaries can identify what kind of information or data is driving a machine-learning system; through a follow-on poisoning attack, they can then manipulate that data, essentially feeding the learning algorithm false inputs. The winning bidders on this project will be responsible for assessing machine-learning vulnerabilities and exploring defense options.
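The poisoning attack described above can be illustrated with a toy sketch. The setup below is entirely hypothetical (it is not drawn from DARPA's solicitation): a simple nearest-centroid classifier is trained on clean one-dimensional data, then retrained after an adversary injects falsely labeled points, shifting the model's decision boundary.

```python
# Toy illustration of data poisoning (assumed setup, not DARPA's systems):
# a nearest-centroid classifier, first trained on clean data, then on data
# an adversary has seeded with falsely labeled examples.

def centroid(xs):
    return sum(xs) / len(xs)

def train(data):
    # data: list of (value, label) pairs; returns one centroid per class
    classes = {}
    for x, label in data:
        classes.setdefault(label, []).append(x)
    return {label: centroid(xs) for label, xs in classes.items()}

def predict(model, x):
    # assign x to the class whose centroid is nearest
    return min(model, key=lambda label: abs(model[label] - x))

clean = [(0.0, "benign"), (1.0, "benign"), (9.0, "threat"), (10.0, "threat")]
model = train(clean)
print(predict(model, 8.0))  # -> "threat": 8.0 sits near the threat centroid

# Poisoning: the adversary injects false "benign" examples near the threat
# region, dragging the benign centroid toward it.
poison = [(8.0, "benign")] * 10
model_poisoned = train(clean + poison)
print(predict(model_poisoned, 8.0))  # -> "benign": the same input now slips past
```

Real poisoning attacks target far more complex learners, but the mechanism is the same: corrupted training data quietly reshapes what the model treats as normal.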