Many artificial intelligence researchers see the possible future development of superhuman AI as having a non-trivial chance of causing human extinction – but there is also widespread disagreement and uncertainty about such risks.

Those findings come from a survey of 2700 AI researchers who have recently published work at six of the top AI conferences – the largest such survey to date. Participants were asked to share their thoughts on possible timelines for future AI technological milestones, as well as the good or bad societal consequences of those achievements. Almost 58 per cent of researchers said they considered there to be at least a 5 per cent chance of human extinction or other extremely bad AI-related outcomes.

“It’s an important signal that most AI researchers don’t find it strongly implausible that advanced AI destroys humanity,” says Katja Grace at the Machine Intelligence Research Institute in California, an author of the paper. “I think this general belief in a non-minuscule risk is much more telling than the exact percentage risk.”

But there is no need to panic just yet, says Émile Torres at Case Western Reserve University in Ohio. Such AI expert surveys “don’t have a good track record” of forecasting future AI developments, they say. A 2012 study showed that, in the long run, predictions by AI experts were no more accurate than public opinion from non-experts. The new survey’s authors themselves note that AI researchers are not experts in forecasting the future trajectory of AI.
Full survey: There’s a 5% chance of AI causing humans to go extinct, say scientists.