
Vulnerabilities May Slow Air Force’s Adoption of Artificial Intelligence

The Air Force is reportedly working to better defend its AI programs and algorithms from adversaries that may seek to disrupt or corrupt training data. On Wednesday, Lt. Gen. Mary O'Brien, Air Force deputy chief of staff for intelligence, surveillance, reconnaissance, and cyber effects operations, spoke about the challenge, noting that while deployed AI is still in its infancy, the Air Force must prepare for the possibility of adversaries turning the service's own tools against it. Assuming that deployed AI carries no cybersecurity risk could prove costly to the Air Force in future operations.

Strategizing around adversarial use of the US's own AI tools is part of an emerging subfield called AI safety, which seeks to ensure that AI programs both work as intended and are secure against attack at the level of design and computing architecture. Current DoD efforts in this area, however, remain small.

Read More: Vulnerabilities May Slow Air Force’s Adoption of Artificial Intelligence

OODA Analyst

OODA comprises a unique team of international experts capable of providing advanced intelligence and analysis, strategy and planning support, risk and threat management, training, decision support, crisis response, and security services to global corporations and governments.