Here’s some good news for artificial intelligence (AI) risk management: After years of warnings from cybersecurity, data science, and machine learning (ML) advocates, CISOs are finally paying attention. This is the year that cybersecurity professionals are waking up to the multilayered risks surrounding AI.

The hard part now is figuring out what comes next. What substantive steps do CISOs, executives, the board, and AI/ML developers need to take to set and enforce sane risk management policies? That is the big question that many attendees at Black Hat USA are asking, and it has threaded its way through a number of briefings and keynotes at this year’s conference.

Hyrum Anderson, co-author of Not with a Bug, But with a Sticker: Attacks on Machine Learning Systems and What To Do About Them, is a prolific AI security researcher and a vocal advocate for raising awareness of AI risk and resilience issues. Anderson says the discussions at the podium and in the hallways at Black Hat are a continuation of what he saw earlier in the year at the RSA Conference (RSAC). Even though these problems aren’t close to being solved yet, he says he’s glad the conversations are finally happening.
Full story: The Hard Realities of Setting AI Risk Policy.