The Air Force is reportedly working to better defend its AI programs and algorithms against adversaries who may seek to disrupt or corrupt training data. On Wednesday, Mary O'Brien, the Air Force's deputy chief of staff for intelligence, surveillance, reconnaissance, and cyber effects operations, spoke about the challenge, noting that while deployed AI is still in its infancy, the service must prepare for the possibility of adversaries turning its own tools against it. Assuming that deployed AI carries no cybersecurity risk could prove costly to the Air Force in future operations.
Strategizing around adversarial use of the US's own AI tools falls under an emerging subfield known as AI safety, which seeks to ensure that AI programs both work as intended and are secure against attack at the level of design and computer architecture. Current DoD efforts in this area, however, remain small.
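To make the threat concrete, the kind of training-data corruption described above can be illustrated with a toy sketch. This is a hypothetical example, not any Air Force system or method: a minimal nearest-centroid classifier is trained on two clean clusters of 2-D points, then retrained after an attacker injects mislabeled points that drag one class's centroid away from its real data.

```python
# Toy data-poisoning sketch (hypothetical; all names and data invented).
# An adversary who can corrupt training data injects bogus labeled points
# so the retrained model misclassifies genuine inputs.

def centroid(points):
    n = len(points)
    return (sum(p[0] for p in points) / n, sum(p[1] for p in points) / n)

def train(data):
    """data: list of ((x, y), label) pairs; returns {label: centroid}."""
    by_label = {}
    for point, label in data:
        by_label.setdefault(label, []).append(point)
    return {label: centroid(pts) for label, pts in by_label.items()}

def predict(model, point):
    dist2 = lambda a, b: (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
    return min(model, key=lambda lbl: dist2(model[lbl], point))

def accuracy(model, data):
    return sum(predict(model, p) == lbl for p, lbl in data) / len(data)

# Clean training set: class 0 clustered near (0, 0), class 1 near (10, 10).
clean = [((i / 10, i / 10), 0) for i in range(10)] + \
        [((10 + i / 10, 10 + i / 10), 1) for i in range(10)]

# Poisoning: inject bogus class-1 points far from the real cluster,
# pulling class 1's learned centroid away so genuine class-1 inputs
# end up closer to class 0's centroid and are misclassified.
poisoned = clean + [((-10.0, -10.0), 1)] * 40

print(accuracy(train(clean), clean))     # 1.0 on clean training data
print(accuracy(train(poisoned), clean))  # 0.5 after poisoning
```

The model trained on clean data classifies every point correctly; after the injection, half the clean evaluation set is misclassified. Real attacks on deployed military AI would be far subtler, but the failure mode is the same: the model faithfully learns whatever the corrupted data tells it.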