The U.S. National Institute of Standards and Technology will publish the first iteration of its Artificial Intelligence Risk Management Framework Jan. 26. NIST described the voluntary standard as aiming to "improve the ability to incorporate trustworthiness considerations into the design, development, use, and evaluation" of AI offerings. A companion AI playbook containing suggested actions, references and documentation guidance accompanies the framework. Finalization follows a public comment period on the second draft of the framework, released in September 2022.