By Deborah

On January 26, 2023, the U.S. Department of Commerce's National Institute of Standards and Technology (NIST) released new Guidance and a companion Playbook on artificial intelligence (AI) risk management.


The Guidance explains foundational notions such as risk, risk tolerance, and risk prioritization, covering potential harms to people, organizations, and the broader ecosystem.


It also outlines the challenges of managing these AI risks, notably:

  • Measuring risks related to third-party software, hardware, and data

  • Monitoring emergent risks

  • The lack of reliable metrics

  • Risks that differ across stages of the AI lifecycle

  • The opaque nature of AI systems

  • The difficulty of systematizing a human baseline for comparison, and

  • Differences between risks in real-world versus test environments


The Guidance also outlines the characteristics of trustworthy AI systems: they should be valid and reliable; safe; secure and resilient; accountable and transparent; explainable and interpretable; privacy-enhanced; and fair, with harmful bias managed.


To manage AI risks and responsibly develop trustworthy AI systems, the Guidance explains that organizations should rely on four functions: govern, map, measure, and manage.
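To make the four functions concrete, here is a minimal, purely illustrative sketch of how an organization might organize a risk register around them. The class and function names, scoring scheme, and tolerance threshold are all hypothetical assumptions for illustration; they are not part of the NIST Guidance itself.

```python
from dataclasses import dataclass
from typing import Dict, List, Optional


@dataclass
class Risk:
    """One entry in a hypothetical AI risk register."""
    description: str
    lifecycle_stage: str            # e.g. "design", "deployment"
    severity: int = 0               # filled in by the measure step
    mitigation: Optional[str] = None  # filled in by the manage step


def govern(policies: Dict) -> Dict:
    """GOVERN: set the policies and risk tolerance that frame the other functions."""
    return {"risk_tolerance": policies.get("risk_tolerance", 3)}


def map_risks(context: List[str]) -> List[Risk]:
    """MAP: identify risks in the system's context of use."""
    return [Risk(description=c, lifecycle_stage="deployment") for c in context]


def measure(risks: List[Risk], scores: Dict[str, int]) -> List[Risk]:
    """MEASURE: assess each identified risk using available metrics."""
    for r in risks:
        r.severity = scores.get(r.description, 0)
    return risks


def manage(risks: List[Risk], tolerance: int) -> List[Risk]:
    """MANAGE: prioritize risks that exceed the organization's tolerance."""
    for r in risks:
        if r.severity > tolerance:
            r.mitigation = "prioritized for treatment"
    return risks


# Illustrative run through all four functions
gov = govern({"risk_tolerance": 2})
risks = map_risks(["harmful bias", "privacy leakage"])
risks = measure(risks, {"harmful bias": 4, "privacy leakage": 1})
risks = manage(risks, gov["risk_tolerance"])
```

In this sketch, only the "harmful bias" risk exceeds the tolerance of 2 and is flagged for treatment; the actual framework leaves scoring and prioritization methods to each organization.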


The Guidance may be used on a voluntary basis by organizations designing, developing, deploying or using AI systems to help manage the many risks of AI technologies.

