A voluntary framework published by the National Institute of Standards and Technology providing guidance for managing AI risks throughout the system lifecycle. The framework is organized around four core functions: Govern (establishing accountability, policies, and culture), Map (understanding context, stakeholders, and potential impacts), Measure (assessing and tracking risks through evaluation and monitoring), and Manage (prioritizing and responding to identified risks). The framework is widely referenced in U.S. federal procurement, AI-related Executive Orders, sector-specific guidance, and enterprise customer requirements. NIST also published the Generative AI Profile (NIST AI 600-1), which addresses risks specific to generative AI systems. Alignment with the NIST AI RMF is frequently requested in vendor assessments and can support reasonable-care arguments, though "alignment" is self-assessed and involves no certification or audit.
See: AI governance; Executive Orders on AI; ISO/IEC 42001; Risk assessment; Trustworthy AI