
Understanding NIST AI 100-1: A Roadmap for Trustworthy AI

  • caseybond2
  • Sep 16
  • 3 min read

Artificial Intelligence is transforming industries, companies, individuals, and ultimately the entire world. Look under the hood of AI and you will find that its main requirement is data -- massive amounts of data. AI uses that data to learn and develop into a more accurate system, and the volume and types of data it has access to are where the ethical and societal risks appear.

AI isn’t inherently good or bad; it’s a mirror. It reflects the values, biases, and intentions of the people who build and deploy it. So the real question isn’t “should we fear AI?” but “how do we ensure it serves humanity rather than undermines it?” To help organizations use it responsibly, NIST (the National Institute of Standards and Technology) released AI 100-1, formally known as the Artificial Intelligence Risk Management Framework (AI RMF 1.0).



What Is NIST AI 100-1?

AI 100-1 is a globally recognized, voluntary framework designed to help industries and organizations manage the risks that come with adopting artificial intelligence. AI becomes a liability mainly when you fail to operate it responsibly, train its users, or establish "guardrails" around your implementations. "By adopting this framework, organizations can align with industry best practices, improve the reliability of their AI systems, and foster trust among stakeholders, including customers, employees, regulators, and investors" (Modi, 2025).

Companies should use this framework to ensure they are protecting their employees and customers. "AI risk management offers a path to minimize the potential negative impact of AI systems, such as threats to civil liberties and rights, while also providing opportunities to maximize positive impacts. Addressing, documenting, and managing AI risks and potential negative impacts effectively can lead to more trustworthy AI systems" (Tabassi, 2023).
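
To make the idea of "guardrails" concrete, here is a minimal, purely illustrative sketch of one operational guardrail: a policy check that screens requests before an AI system is allowed to answer. Every name in it (BLOCKED_TOPICS, classify_topic, guarded_respond) is a hypothetical stand-in of my own, not anything defined in NIST AI 100-1.

```python
# Illustrative only: a hypothetical pre-response guardrail for an AI system.
# The policy list and classifier are stand-ins, not NIST-defined artifacts.

BLOCKED_TOPICS = {"medical_diagnosis", "legal_advice"}  # assumed policy list

def classify_topic(user_input: str) -> str:
    """Stand-in for a real topic classifier; returns a coarse topic label."""
    if "diagnose" in user_input.lower():
        return "medical_diagnosis"
    return "general"

def guarded_respond(user_input: str) -> str:
    """Run a policy check before the AI system is allowed to answer."""
    topic = classify_topic(user_input)
    if topic in BLOCKED_TOPICS:
        # Escalate to a human instead of letting the model answer.
        return "This request has been routed to a human reviewer."
    return f"[model answer for: {user_input!r}]"

print(guarded_respond("Can you diagnose my symptoms?"))
```

The point is not the code itself but the pattern: the model never sees a request that policy says it should not handle, and blocked requests are escalated to a human rather than silently dropped.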



The Four Core Functions!

The framework is built around four core functions that guide organizations through the AI risk management lifecycle (a brief sketch of how they fit together follows the list):

  1. Govern - Establishes a comprehensive, organization-wide approach to AI risk management by embedding governance structures, aligning technical and ethical principles, ensuring legal compliance, and fostering a transparent, risk-aware culture throughout the AI system lifecycle.

  2. Map - Captures the full context in which an AI system operates, which enables organizations to identify and plan for risks more effectively. When deploying AI systems, it is essential to ask the right questions before fully implementing them.

  3. Measure - Uses quantitative, qualitative, or mixed-method tools and methodologies to thoroughly analyze, assess, benchmark, and monitor AI risks. Measuring AI risk and documenting system functionality and trustworthiness is imperative.

  4. Manage - Allocates risk resources to the risks that were mapped and measured, as prioritized under the Govern function. In this function, plans are created to respond to, recover from, and communicate about incidents or events.
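
To show how the four functions can fit together day to day, here is a minimal, hypothetical sketch of an internal AI risk register. The AIRisk fields and the likelihood-times-impact score are my own illustration of the idea, not terminology or a method prescribed by the framework.

```python
# Hypothetical sketch: a tiny AI risk register shaped by the four functions.
from dataclasses import dataclass

@dataclass
class AIRisk:
    description: str            # Map: the context in which the risk arises
    likelihood: int             # Measure: 1 (rare) .. 5 (frequent)
    impact: int                 # Measure: 1 (minor) .. 5 (severe)
    owner: str                  # Govern: the accountable role
    response_plan: str = "TBD"  # Manage: respond, recover, communicate

    def score(self) -> int:
        """Simple likelihood x impact score used to prioritize Manage work."""
        return self.likelihood * self.impact

register = [
    AIRisk("Training data encodes hiring bias", 4, 5, "AI Governance Board",
           "Retrain on audited data; publish bias metrics"),
    AIRisk("Chatbot leaks customer PII", 2, 5, "Privacy Officer"),
]

# Manage: act on the highest-scoring risks first.
for risk in sorted(register, key=AIRisk.score, reverse=True):
    print(f"{risk.score():>2}  {risk.description} -> {risk.response_plan}")
```

Even a register this small forces the right questions: who owns the risk (Govern), where does it arise (Map), how big is it (Measure), and what is the plan (Manage)?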


Why It Matters

Traditional governance, risk, and compliance (GRC) and cybersecurity challenges are addressed through a wide array of frameworks and methodologies. While comprehensive, this abundance often leads to overlapping requirements and complex implementations that can overwhelm organizations.

In contrast, AI systems present a newer set of risks, and the lack of established, unified guidance makes effective governance, risk management, and compliance difficult to achieve. The NIST AI Risk Management Framework (AI RMF 1.0) provides a structured, adaptable methodology designed specifically to address these challenges, offering organizations a clear path toward the responsible and trustworthy deployment of AI. "Both AI and traditional software technologies and systems are subject to rapid innovation. Technology advances should be monitored and deployed to take advantage of those developments and work towards a future of AI that is both trustworthy and responsible" (Tabassi, 2023).



Looking Ahead!

NIST is committed to updating the framework as AI technologies evolve. The next formal review is scheduled for 2028; however, the GRC, cyber, and technology communities that advocate for AI innovation and operate in the AI realm can influence updates before then. In my opinion, organizations that adopt this framework now will be better prepared for the growth and change that artificial intelligence brings to the world. I believe AI implementation is for the better as long as we humans keep Governance, Risk, and Compliance at the forefront of our minds when implementing AI systems.


The full NIST AI 100-1 publication covers a broader scope and provides more detailed implementation guidance for the framework. I encourage everyone to read it and form their own view of the framework. Keeping a GRC and cyber mindset at the forefront is the key to keeping our data and our people safe.



References

Tabassi, E. (2023). Artificial Intelligence Risk Management Framework (AI RMF 1.0) (NIST AI 100-1). National Institute of Standards and Technology. https://doi.org/10.6028/NIST.AI.100-1

 
 
 


