AI Will Test American Values on the Battlefield
Google DeepMind’s AlphaZero victory over Stockfish in chess was a watershed event in Artificial Intelligence (AI). For the first time, a computer won through what looked like intuition rather than brute-force search. The more profound and disturbing aspect of AlphaZero’s win was how it treated its own pieces as expendable assets. As machine-learning-based AI systems move onto the battlefield, the question for Pentagon strategists and Congress is this: is everyone an expendable asset?
In the past, computer chess programs like Deep Blue condensed knowledge from the best human players. Not surprisingly, Deep Blue and its successors like Stockfish exhibited human values when playing chess. For example, humans value all of their pieces, and so do the programs that encapsulate their knowledge. AlphaZero, however, is different.
AlphaZero is the first general-purpose, intuition-based AI system (referred to here as Type B). Unlike its Type A ancestors, AlphaZero has no notion of human values. Instead it uses reinforcement learning, playing millions of games against itself at supercomputer speed and getting progressively better. The programmers behind AlphaZero hoped to create a better version of Stockfish. What they got was Dr. Hannibal Lecter.
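To make the idea of self-play reinforcement learning concrete, here is a minimal sketch in Python. It is an illustrative toy, not AlphaZero's actual method (which pairs deep neural networks with Monte Carlo tree search): a tabular agent learns a simple take-away game purely by playing against itself, with no human examples. The game rules, parameters, and function names are all assumptions for this example.

```python
import random

random.seed(0)  # reproducible training for this demo

# Toy game: players alternately take 1 or 2 stones from a pile;
# whoever takes the last stone wins. No human knowledge is provided;
# the agent learns only from the outcomes of its own self-play games.

ACTIONS = (1, 2)
Q = {}  # (pile_size, action) -> estimated value for the player to move

def q(state, action):
    return Q.get((state, action), 0.0)

def best_action(state):
    # Illegal moves (taking more stones than remain) get a sentinel -2.0.
    return max(ACTIONS, key=lambda a: q(state, a) if a <= state else -2.0)

def train(episodes=20000, start=10, alpha=0.5, epsilon=0.2):
    for _ in range(episodes):
        pile = random.randint(1, start)
        history = []  # (state, action) per ply, players alternating
        while pile > 0:
            legal = [a for a in ACTIONS if a <= pile]
            if random.random() < epsilon:          # explore
                action = random.choice(legal)
            else:                                  # exploit current policy
                action = max(legal, key=lambda a: q(pile, a))
            history.append((pile, action))
            pile -= action
        # The player who made the final move wins (+1); credit flips
        # sign each ply backward because the game is zero-sum.
        reward = 1.0
        for state, action in reversed(history):
            Q[(state, action)] = q(state, action) + alpha * (reward - q(state, action))
            reward = -reward

train()
```

After training, the agent discovers on its own that leaving the opponent a pile that is a multiple of 3 is winning strategy, with no human ever telling it so. That, in miniature, is why a Type B system's values come only from its reward signal.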
In AlphaZero’s 2017 and 2018 matches against Stockfish, it readily gave up its own pawns to maneuver Stockfish into unwinnable positions. It sacrificed pawns in game after game. YouTube is filled with chess analysts describing AlphaZero as brutal. If AlphaZero were human, you’d probably worry about being alone with it.
At present there is a huge rush to deploy Type B AI systems in battlefield applications ranging from data analysis to weapons-system management. Given this first glimpse of what a Type B AI system is capable of, here are a few items the Pentagon and Congress should consider:
Mandate System Partitioning: It is very tempting to connect data analysis and weapons-system management through a single Type B AI system to create the perfect OODA loop. Unfortunately, the speed with which an AlphaZero-type system could launch attacks is frightening. Analytics and weapons-control systems should therefore be separated by three independent layers: network, authentication, and cryptographic.
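As a sketch of what one of those layers might look like, consider an authentication boundary between the analytics side and the weapons controller. Everything here is hypothetical, the key, message format, and function names are invented for illustration, and a real deployment would add the independent network and encryption layers on top of it. The point is simply that the controller verifies every recommendation before acting, rather than trusting a direct connection.

```python
import hashlib
import hmac
import json

# Hypothetical shared secret held inside the analytics enclave.
# In practice this would live in hardware key storage, not source code.
ANALYTICS_KEY = b"analytics-demo-key"

def sign_recommendation(payload: dict):
    """Analytics side: serialize a recommendation and attach an HMAC tag."""
    msg = json.dumps(payload, sort_keys=True).encode()
    tag = hmac.new(ANALYTICS_KEY, msg, hashlib.sha256).hexdigest()
    return msg, tag

def controller_accepts(msg: bytes, tag: str) -> bool:
    """Weapons-controller side: accept only messages with a valid tag."""
    expected = hmac.new(ANALYTICS_KEY, msg, hashlib.sha256).hexdigest()
    # compare_digest is constant-time, resisting timing attacks on the tag.
    return hmac.compare_digest(expected, tag)
```

A tampered or forged message fails verification at the boundary, so a compromised analytics system cannot silently drive the weapons side, which is exactly the separation the partitioning mandate is after.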
Require Human Override: The gut reaction to an AI system going “postal” is to mandate an “off button.” On the battlefield, however, that is not an option: the enemy would only need to confuse the AI with bad data to shut it down. A more practical approach is to require a human-override mode in case Hannibal is having a bad day.
Ban Machine-Based Killing: The ability to instantly acquire a target and kill it is the dream of every military commander. Without a doubt, military leaders in China and the Middle East will adopt machine-based killing. But winning a battle at all costs is not the American way. To be American means considering the moral dimension of every action. It means giving the enemy, no matter how hated, the option to put down their weapons as an act of grace. Congress must ban machine-based killing and decree that only a human can kill.
The US Armed Forces are unlike any other military. They place enormous emphasis on the life of the individual warfighter. Even the highest general shows respect to the lowest warfighter. In the movies, the American soldier is the hero. In the world of Type B AI, however, the American soldier, our hero, becomes an expendable asset. We must never let that happen.