Just as we don’t allow just anyone to build a plane and fly passengers around, or to design and release medicines, why should AI models be released into the wild without proper testing and licensing? That has been the argument from a growing number of experts and politicians in recent weeks. With the United Kingdom holding a global summit on AI safety in autumn, and surveys suggesting around 60% of the public favors regulation, new guardrails are looking more likely than not.

One particular meme taking hold is the comparison of AI to an existential threat like nuclear weaponry, as in a recent 22-word warning from the Center for AI Safety, signed by hundreds of scientists: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” Extending the metaphor, OpenAI CEO Sam Altman is pushing for the creation of a global body like the International Atomic Energy Agency to oversee the technology. “We talk about the IAEA as a model where the world has said, ‘OK, very dangerous technology, let’s all put (in) some guard rails,’” he said in India this week.
Full commentary: Is AI a nuke-level threat? Why AI fields all advance at once, dumb pic puns.