Microsoft, OpenAI, Google and Anthropic have stepped up a united push towards safety standards for artificial intelligence and appointed a director as their alliance seeks to fill “a gap” in global regulation. The four tech giants, which this summer banded together to form the Frontier Model Forum, on Wednesday picked Chris Meserole from the Brookings Institution to be executive director of the group. The forum also divulged plans to commit $10mn to an AI safety fund.

“We’re probably a little ways away from there actually being regulation,” Meserole, who is stepping down from his role as an AI director at the Washington-based think-tank, told the Financial Times. “In the meantime, we want to make sure that these systems are being built as safely as possible.”

Tech giants have coined the term “frontier” to describe a subset of artificial intelligence with highly advanced and unknown capabilities, including the type that powers generative AI products such as OpenAI’s ChatGPT and Google’s Bard. These are driven by large language models, systems that can process and generate vast amounts of text and other data. Concern has intensified over the past few months about the potential of increasingly powerful AI to displace jobs, create and spread misinformation, or eventually surpass human intelligence.

Meserole said the forum would seek to “supplement or complement” any official regulation but, “in the interim, while there’s a gap, we need to move forward with building these systems safely”.