Former OpenAI board member Helen Toner said leading AI companies should be required to share information with the public about the capabilities and risks of the technology they're building, and to collect more data when these tools go awry.

AI companies should have to "share information about what they're building, what their systems can do, and how they're managing risks," Toner said in a talk at the TED conference in Vancouver on Tuesday, one of her first public appearances since resigning from OpenAI's board late last year. Toner also called for "AI auditors" to be allowed "to scrutinize their work so that the companies aren't just grading their own homework."

Toner, a director at Georgetown University's Center for Security and Emerging Technology, was part of the board that ousted OpenAI Chief Executive Officer Sam Altman in November. In the run-up to his firing, Altman attempted to have Toner removed from her seat after she co-authored a research paper containing some criticism of OpenAI's safety practices, Bloomberg previously reported. Altman quickly returned to his role after more than 90% of OpenAI's staff threatened to quit over his ousting. Most of the board, including Toner, was replaced.

In her TED talk, Toner shared recommendations for how society can better govern AI. She said tech companies could set up "incident reporting mechanisms," similar to those used after plane crashes, to collect data on what went wrong in incidents such as an AI-enabled cyberattack.