NIST Prioritizes External Input in Development of AI Risk Management Framework
According to new reports, officials at the National Institute of Standards and Technology are seeking insights from a range of technology industry players as they draft congressionally directed guidance intended to promote the responsible and safe use of artificial intelligence technologies. The framework is aimed at building public trust in the technology. A recent request for information seeks input from those involved in the design, development, use, and evaluation of AI. Responses are due August 19 and will inform the framework as it is drafted.
Although AI developments are transforming many industries and aspects of daily life, they present new technical and societal challenges. There is currently no objective standard for ethical values in the implementation of AI. It is widely accepted that AI must be built, assessed, and deployed in an ethical way that fosters public confidence in the technology. AI systems should be designed to align with society's core values, the request for information states.