AI Regulations in the US

Artificial Intelligence Risk Management Framework (AI RMF 1.0)

Managing risks and promoting trustworthy AI systems in organizations

In January 2023, the National Institute of Standards and Technology (NIST) released the AI Risk Management Framework (AI RMF) Version 1.0. This voluntary framework is intended to help organizations that design, develop, or use AI-related products and services manage risks and promote trustworthy AI systems.

Since the use of AI systems can have both positive and negative impacts, the NIST AI RMF aims to minimize the negative impacts and maximize the positive ones. By providing a structured approach, the framework helps organizations identify, assess, and mitigate risks associated with AI systems. It also promotes ethical AI adoption, guards against potential harm, encourages accountability and transparency in AI implementation, and protects individuals' rights, making it easier for organizations to navigate the complex landscape of AI technology.

The framework is divided into two parts:

Part I:

Part I of the NIST AI RMF focuses on identifying risks and harms stemming from the use of AI systems and the associated challenges in AI risk management. The potential harms are categorized into three groups:

  • Harm to people (involving civil liberties, rights, safety, and economic opportunities)
  • Harm to organizations (affecting business operations, security, financial loss, and reputation)
  • Harm to ecosystems (impacting interconnected elements, global systems, and natural resources)

Additionally, the document highlights various challenges in effectively managing risks to ensure the trustworthiness of AI systems.

Part II:

Part II outlines the "Core" of the NIST AI RMF, consisting of four key functions: GOVERN, MAP, MEASURE, and MANAGE. These functions are designed to assist organizations in addressing the risks posed by AI systems. GOVERN is applicable across all phases of an organization's AI risk management processes, while MAP, MEASURE, and MANAGE apply to specific AI systems and particular stages of the AI lifecycle. The AI RMF Core offers outcomes and actions to facilitate dialogue, understanding, and activities for managing AI risks and fostering the responsible development of trustworthy AI systems.
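To make the Core concrete, one way an organization might operationalize it is as a simple risk register keyed to the four functions. The sketch below is purely illustrative: the class and field names are our own assumptions, not part of the NIST framework, which prescribes outcomes rather than any particular data structure.

```python
from dataclasses import dataclass, field

# The four Core functions named in the AI RMF.
CORE_FUNCTIONS = ("GOVERN", "MAP", "MEASURE", "MANAGE")


@dataclass
class RiskEntry:
    """One identified AI risk, tagged with the Core function it falls under.

    Hypothetical structure for illustration only.
    """
    description: str
    function: str          # one of CORE_FUNCTIONS
    lifecycle_stage: str   # e.g. "design", "validation", "deployment"
    mitigation: str = ""

    def __post_init__(self) -> None:
        if self.function not in CORE_FUNCTIONS:
            raise ValueError(f"unknown Core function: {self.function}")


@dataclass
class RiskRegister:
    """Collects risk entries and lets teams review them per Core function."""
    entries: list = field(default_factory=list)

    def add(self, entry: RiskEntry) -> None:
        self.entries.append(entry)

    def by_function(self, function: str) -> list:
        return [e for e in self.entries if e.function == function]


# Example: log a measurement-stage risk and retrieve it by function.
register = RiskRegister()
register.add(RiskEntry(
    description="Possible demographic bias in loan-approval model",
    function="MEASURE",
    lifecycle_stage="validation",
    mitigation="Run disparity tests on held-out data before release",
))
print(len(register.by_function("MEASURE")))  # 1
```

A register like this mirrors how GOVERN-level policies (e.g. requiring a mitigation for every entry) can sit across the MAP, MEASURE, and MANAGE activities tied to individual systems.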

