What the Pentagon’s new AI strategy means for cybersecurity
On February 12, the US Department of Defense (DoD) released its new artificial intelligence (AI) strategy, which outlines plans for integrating machine learning into the military's defensive cyber operations. The strategy states that "in order to ensure DoD AI systems are safe, secure, and robust," the Pentagon "will fund research into AI systems that have a lower risk of accidents; are more resilient, including to hacking and adversarial spoofing." Priority is also given to research on the interactions between AI systems.
The strategy furthermore points out that the US is far from alone in recognizing machine learning as a crucial technology: "Other nations, particularly China and Russia, are making significant investments in AI for military purposes, including in applications that raise questions regarding international norms and human rights," the document reads. "The costs of not implementing this strategy are clear. Failure to adopt AI will result in legacy systems irrelevant to the defense of our people, eroding cohesion among allies and partners, reduced access to markets that will contribute to a decline in our prosperity and standard of living, and growing challenges to societies that have been built upon individual freedoms."