“How do you stop a Terminator scenario before it starts? Real US robots won’t take over like the fictional SkyNet, Pentagon officials promise, because a human being will always be ‘in the loop,’ possessing the final say on whether or not to use lethal force.
But by the time the decision comes before that human operator, it’s probably too late, warns Richard Danzig. In a new report, the respected ex-Navy Secretary argues that we need to design in safeguards from the start…The SkyNet scenario, in which a military artificial intelligence turns hostile, is just one extreme case. Far more likely, Danzig argues, is simple error: human error, machine error, and each kind compounding the other. ‘Error is as important as malevolence,’ Danzig told me in an interview. ‘I probably wouldn’t use the word ‘stupidity,’ (because) the people who make these mistakes are frequently quite smart, (but) it’s so complex and the technologies are so opaque that there’s a limit to our understanding.’”