Political leaders are scrambling to respond to advances in artificial intelligence. With applications ranging from marketing to health care to weapons systems, AI is expected to have a profound effect across society and around the world. Recent developments in generative AI, the technology used in applications such as ChatGPT to produce text and images, have inspired both excitement and a growing set of concerns. Scholars and politicians alike have sounded alarm bells over the ways this technology could put people out of jobs, jeopardize democracy, and infringe on civil liberties. All have recognized the urgent need for government regulation that ensures AI applications operate within the confines of the law and safeguards national security, human rights, and economic competition.

From city halls to international organizations, oversight of AI is top of mind, and the pace of new initiatives accelerated in the final months of 2023. The G-7, for example, released a nonbinding code of conduct for AI developers in late October. In early November, the United Kingdom hosted the AI Safety Summit, where delegations from 28 countries pledged cooperation to manage the risks of AI. A few weeks after issuing an executive order promoting “safe, secure, and trustworthy” AI, U.S. President Joe Biden met with Chinese President Xi Jinping in mid-November, and the two agreed to launch an intergovernmental dialogue on the military use of AI. And in early December, EU lawmakers reached political agreement on the AI Act, a pioneering law intended to mitigate the technology’s risks and set a global regulatory standard.
Full opinion: The Premature Quest for International AI Cooperation.