Ever since generative AI exploded into public consciousness with the launch of ChatGPT at the end of last year, calls to regulate the technology to stop it from causing undue harm have risen to fever pitch around the world. The stakes are high — just last week, technology leaders signed an open public letter saying that if government officials get it wrong, the consequence could be the extinction of the human race.

While most consumers are just having fun testing the limits of large language models such as ChatGPT, a number of worrying stories have circulated about the technology making up supposed facts (also known as "hallucinating") and making inappropriate suggestions to users, as when an AI-powered version of Bing told a New York Times reporter to divorce his spouse.

Tech industry insiders and legal experts also note a raft of other concerns, including the ability of generative AI to enhance the attacks of threat actors on cybersecurity defenses, the possibility of copyright and data-privacy violations — since large language models are trained on all sorts of information — and the potential for discrimination as humans encode their own biases into algorithms.
Full analysis: Governments worldwide grapple with regulation to rein in AI dangers.