The U.S. federal government is advocating for artificial intelligence developers to embrace security as a core requirement, warning that machine learning code is particularly difficult and expensive to fix after deployment.

In a Friday blog post, the Cybersecurity and Infrastructure Security Agency urged that AI be secure by design, part of CISA’s ongoing campaign to promote aligning design and development programs with security from the start.

“Discussions of artificial intelligence often swirl with mysticism regarding how an AI system functions. The reality is far more simple: AI is a type of software system. And like any software system, AI must be secure by design,” the agency said.

Security experts around the world have for years been pushing companies to develop software and products with security baked in rather than added as an afterthought. The era of treating security as an externality whose costs are borne by consumers should be replaced by a new commitment to security, including through a shift in liability to software developers, CISA Director Jen Easterly said in a February speech.

CISA’s Friday blog post doesn’t discuss legislative proposals, but it does highlight previous research that draws attention to machine learning’s tightly coupled nature. Changing one input changes everything, according to a 2014 paper by Google researchers that called machine learning’s unresolved difficulties “the high-interest credit card of technical debt.”
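That "changing one input changes everything" coupling can be sketched in a few lines. The snippet below is an illustrative toy, not code from the paper or from CISA's post: a hand-rolled two-feature ridge regression (closed form, solved with Cramer's rule) on made-up data, showing that merely re-expressing one input feature in different units shifts the learned weight of a feature that was never touched.

```python
# Illustrative sketch of the "Changing Anything Changes Everything" effect
# described in the 2014 technical-debt paper: in a regularized model,
# altering how ONE input is represented shifts the weights for EVERY input.
# All data and the tiny solver here are hypothetical, for demonstration only.

def ridge_2feat(rows, y, lam=0.1):
    """Closed-form ridge regression for exactly two features:
    w = (X^T X + lam*I)^(-1) X^T y, solved via 2x2 Cramer's rule."""
    a = sum(r[0] * r[0] for r in rows) + lam   # X^T X [0,0] + lam
    b = sum(r[0] * r[1] for r in rows)         # X^T X [0,1] == [1,0]
    d = sum(r[1] * r[1] for r in rows) + lam   # X^T X [1,1] + lam
    t0 = sum(r[0] * yi for r, yi in zip(rows, y))  # X^T y, component 0
    t1 = sum(r[1] * yi for r, yi in zip(rows, y))  # X^T y, component 1
    det = a * d - b * b
    return ((d * t0 - b * t1) / det, (a * t1 - b * t0) / det)

X = [(1.0, 1.0), (2.0, 1.0)]
y = [3.0, 5.0]
w_before = ridge_2feat(X, y)

# Re-express only feature 0 in different units (e.g. metres -> decimetres).
X_rescaled = [(x0 * 10, x1) for x0, x1 in X]
w_after = ridge_2feat(X_rescaled, y)

# Feature 1 was never modified, yet its learned weight moved too:
print(w_before[1], w_after[1])
```

Because the regularizer ties all weights together in one objective, there is no such thing as an isolated change to a single input, which is exactly why post-deployment fixes to ML systems are so costly.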
Full story: US CISA Urges Security by Design for AI.