CyberGlobal Risk

Why A ‘Human In The Loop’ Can’t Control AI

“How do you stop a Terminator scenario before it starts? Real US robots won’t take over like the fictional SkyNet, Pentagon officials promise, because a human being will always be ‘in the loop,’ possessing the final say on whether or not to use lethal force.

But by the time the decision comes before that human operator, it’s probably too late, warns Richard Danzig. In a new report, the respected ex-Navy Secretary argues that we need to design in safeguards from the start…The SkyNet scenario — where a military artificial intelligence turns hostile — is just one extreme case. Far more likely, Danzig argues, is simple error: human error, machine error and both kinds compounding the other. ‘Error is as important as malevolence,’ Danzig told me in an interview. ‘I probably wouldn’t use the word ‘stupidity,’ (because) the people who make these mistakes are frequently quite smart, (but) it’s so complex and the technologies are so opaque that there’s a limit to our understanding.’”

Source: Why A ‘Human In The Loop’ Can’t Control AI: Richard Danzig, Breaking Defense

OODA Analyst

OODA comprises a unique team of international experts who provide advanced intelligence and analysis, strategy and planning support, risk and threat management, training, decision support, crisis response, and security services to global corporations and governments.