As we previously discussed, earlier this year the National Institute of Standards and Technology (NIST) launched the Trustworthy and Responsible AI Resource Center. The Resource Center includes NIST’s AI Risk Management Framework (RMF), along with a playbook to help businesses and individuals implement the framework. The RMF is designed to help users and developers of AI analyze the risks of AI systems, and it offers practical guidelines and best practices for addressing and minimizing those risks. It is also intended to remain practical and adaptable as AI technologies continue to mature and be operationalized.

The first half of the RMF discusses the risks posed by AI systems; the second half discusses how to address them. When the AI RMF is properly implemented, organizations and users should see enhanced processes, improved awareness and knowledge, and greater engagement when working with AI systems. The RMF describes AI systems as “engineered or machine-based systems that can, for a given set of objectives, generate outputs such as predictions, recommendations, or decisions influencing real or virtual environments. AI systems are designed to operate with varying levels of autonomy.”
Full opinion: U.S. concern about generative AI risks prompts NIST study.