NIST Artificial Intelligence Framework

NIST has published its first-ever risk management framework for artificial intelligence, just in time to help risk and compliance professionals as boards, senior management, and everyone else start to wonder whether ChatGPT and similar AI-driven systems will take over the human race. 

Released last week, the framework is 48 pages long and no, you don’t need an advanced computer science degree to understand it. The first half is more of an analysis of the risks that AI can pose, and how management teams should approach governance of those risks. The second half follows a more traditional framework design, breaking down the challenge into core functions, controls, and practices that an organization could implement.

Why release this framework now? Mostly because Congress told NIST in 2021 to develop an AI framework, and the agency spent the last 18 months consulting with various public and private interests to get that done. That said, we’d be foolish to ignore the good timing here. Within the last year, artificial intelligence tools such as ChatGPT, DALL-E, Midjourney, and Codex have all shown the corporate world just what AI can do — if we mere humans figure out how to use such tools wisely. 

Acting wisely is, of course, going to be the tricky part. Algorithmic bias, bad software code, or poor planning on the management team’s part could lead to all sorts of adverse consequences, both for companies and society at large. 

Figure 1, below, shows some of the potential harms from AI. 

Source: NIST AI Framework

Hence the AI framework, according to NIST director Laurie Locascio. The document “offers a new way to integrate responsible practices and actionable guidance to operationalize trustworthy and responsible AI,” she said. “We expect the AI framework to help drive development of best practices and standards.”

So what specific challenges and risk management issues does the AI framework encourage us to consider? Let’s take a look.

AI Risks Are Unlike Other Risks

The principal issue with AI is that companies rarely have full visibility into how it operates. For example, your company might develop an AI application that “learns” by studying data out there on the internet — data you didn’t create and may never even see. Or your company might license an AI application from a vendor to manage a business process for you, without any visibility into the software code now making decisions on your company’s behalf. Or consumers might be interacting with an AI program without knowing whether they’re talking to a person or an app. 

Simply put, an AI system’s behavior can evolve in unexpected ways; and companies need to anticipate that risk. As the framework itself describes things:

AI systems may be trained on data that can change over time, sometimes significantly and unexpectedly, affecting system functionality and trustworthiness in ways that are hard to understand… AI risk management can drive responsible uses and practices by prompting organizations and their internal teams who design, develop, and deploy AI to think more critically about context and potential or unexpected negative and positive impacts.

That means you’re going to need some pretty sophisticated risk management controls. You’ll need technical controls to govern how the AI software is developed, tested, and used in a live environment; and you’ll need monitoring controls to assure that the AI’s operations don’t trigger any compliance or litigation risks.

For example, say you develop an AI system to review and issue credit decisions to customers. Even if you’re careful to confirm that the AI only looks at credit scores, employment history, income levels, and prior bad debts (good technical controls), the AI might still end up denying credit more often to minority customers (compliance nightmare). 
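One way to implement the monitoring side of that example is a periodic disparate-impact check on the decision log. The sketch below is purely illustrative — the data, group labels, and the 80 percent ("four-fifths") threshold are hypothetical assumptions, not anything prescribed by the NIST framework:

```python
# Illustrative monitoring control: compare an AI credit model's approval
# rates across demographic groups and flag any group whose rate falls
# below a chosen fraction of the best-performing group's rate.
# All data and thresholds here are hypothetical.

def approval_rates(decisions):
    """decisions: list of (group, approved) pairs -> approval rate per group."""
    totals, approved = {}, {}
    for group, ok in decisions:
        totals[group] = totals.get(group, 0) + 1
        approved[group] = approved.get(group, 0) + (1 if ok else 0)
    return {g: approved[g] / totals[g] for g in totals}

def disparate_impact_flags(decisions, threshold=0.8):
    """Flag groups whose approval rate is < threshold x the highest rate."""
    rates = approval_rates(decisions)
    best = max(rates.values())
    return {g: r / best for g, r in rates.items() if r / best < threshold}

# Hypothetical sample pulled from a decision log:
log = ([("A", True)] * 80 + [("A", False)] * 20
       + [("B", True)] * 55 + [("B", False)] * 45)
print(disparate_impact_flags(log))  # group B: 0.55 / 0.80 ~ 0.69 -> flagged
```

A real control would run against production decision data on a schedule and route any flag to compliance for review, since a statistical disparity is the start of an inquiry, not a legal conclusion.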

The big question for audit, risk and compliance leaders is whether your organization has the right people and oversight structures in place to address all these overlapping risks.

For example, who within your organization gets to decide how AI is developed and used? Should the board lay down oversight principles or forbid some strategic ideas? Can the IT department develop its own AI apps? Do first-line operating units have permission to use AI-driven services from third parties? If not, what’s the approval process? 

That’s just a small taste of the questions to come. 

Putting the AI Framework to Use

As mentioned earlier, the NIST AI framework also comes with a set of functions, controls, and practices that you can implement at your own organization to get a better grip on AI:

  • Govern
  • Map
  • Measure
  • Manage

Each core function is broken down into a set of “categories,” and each category is further divided into sub-categories. (If any of you are thinking, “Hmmm, isn’t that kinda like the COSO internal control framework with its components, principles, and points of focus?” — yes, you get it.)

For example, Figure 2, below, shows some of the categories and sub-categories for the “Govern” core function.

Source: NIST AI Framework

Any of these core functions, categories, and sub-categories would be worthy of a full post in its own right. For now, we’ll just say that risk managers can take this AI framework and map it to their current risk management processes, to see which steps are already covered and which ones aren’t. 

Then you can give a more thoughtful presentation to senior management and the board about where your organization is weak in the oversight of AI, and start to develop a plan to plug those oversight gaps.
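That mapping exercise can be as simple as a spreadsheet, but here is a minimal sketch of the same idea in code. The category names are paraphrased examples and the control inventory is entirely hypothetical — consult the framework itself for the actual categories and sub-categories:

```python
# Illustrative gap analysis: map the NIST AI RMF's four core functions
# (with paraphrased example categories) against a hypothetical inventory
# of existing internal controls, and report uncovered categories.

framework = {
    "Govern": ["policies and accountability", "risk culture", "workforce roles"],
    "Map": ["context is established", "AI risks are identified"],
    "Measure": ["appropriate metrics", "risks are tracked over time"],
    "Manage": ["risks are prioritized", "third-party risks are managed"],
}

# Hypothetical inventory: category -> existing control that covers it.
existing_controls = {
    "policies and accountability": "Board-approved AI use policy",
    "AI risks are identified": "Annual model-risk inventory",
    "risks are prioritized": "ERM heat map",
}

def gap_report(framework, existing):
    """Return, per core function, the categories with no mapped control."""
    return {fn: [c for c in cats if c not in existing]
            for fn, cats in framework.items()}

for fn, gaps in gap_report(framework, existing_controls).items():
    print(f"{fn}: {len(gaps)} gap(s) -> {gaps}")
```

The output of a pass like this is exactly the raw material for the board presentation: a list of oversight gaps, organized by the framework’s own structure.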

That’s all for today. Disclosure: this entire post was written by a person, not ChatGPT.
