New Help for AI Risk Analysis

We can never get enough help assessing the risks of artificial intelligence these days, so today let’s call out a new effort from the National Institute of Standards and Technology that internal auditors, IT auditors, and cybersecurity risk professionals might find useful. 

NIST launched a pilot program last week called ARIA (Assessing Risks and Impacts of AI), which invites organizations to submit AI tools they develop to NIST for rigorous testing and risk analysis. The goal is to help organizations developing AI systems determine whether those systems will be valid, reliable, safe, secure, private, and fair — ya know, all the stuff that regulators want AI to be — once they are deployed in the wild.

For now the pilot program will focus only on large language models (LLMs) that fuel the generative AI applications cropping up all over the place lately. Participants can submit their generative AI app to ARIA for review, and trained NIST experts will then test the app against various technical metrics (to see whether the app can be hacked) and study its performance in large-scale field tests (to see “how the public consumes and makes sense of AI-generated information in their regular interactions with technology,” according to a description of the program). 

NIST launched the ARIA program to fulfill an executive order on artificial intelligence that the Biden Administration issued last year. That order directed NIST and its parent agency, the Commerce Department, to devise new ways to assess the risks of AI. ARIA builds on the AI Risk Management Framework that NIST published in 2023.

Why should companies care about any of this? Because lots of you are developing generative AI applications (or integrating AI applications from other vendors into your own operations) without fully understanding what the risks of those AI systems are. That’s not necessarily your fault; nobody fully understands what the risks of AI are yet. But the more help a company can get to assess its generative AI, the better. That’s especially true of the large-scale field-testing ARIA plans to offer, with potentially thousands of users, where you can see how people use the app and what they do after interacting with it.

This challenge has been on my mind since earlier this year, when the New York City Bar Association published a thoughtful analysis of how AI might help with anti-money laundering compliance.

That paper stressed that AI applications are really just sophisticated software models, so as AI becomes more mainstream, “model management” will become a higher priority. Companies will need tools and techniques to test how well those AI-driven software models perform, including how they handle some quite elusive “downstream risks” such as algorithmic discrimination and incorrect answers. ARIA is an attempt to get better at those tasks.
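To make that a little more concrete, here is a minimal sketch of the kind of check a “model management” routine might run. Everything in it is hypothetical: the sample data, the group labels, and the evaluate() helper are illustrative assumptions, not part of ARIA or the NIST framework. It simply compares a model’s answers against known correct answers and looks for accuracy gaps between groups.

```python
# Hypothetical sketch of a basic model-management check: overall accuracy,
# accuracy by demographic group, and a crude disparity ratio.
# Not the ARIA methodology -- purely illustrative.
from collections import defaultdict

def evaluate(records):
    """Each record is (model_answer, correct_answer, group_label)."""
    total, correct = 0, 0
    group_totals = defaultdict(int)
    group_correct = defaultdict(int)

    for answer, truth, group in records:
        total += 1
        group_totals[group] += 1
        if answer == truth:
            correct += 1
            group_correct[group] += 1

    overall = correct / total if total else 0.0
    by_group = {g: group_correct[g] / group_totals[g] for g in group_totals}
    # Ratio of worst-performing group to best-performing group.
    disparity = min(by_group.values()) / max(by_group.values()) if by_group else 1.0
    return overall, by_group, disparity

if __name__ == "__main__":
    # Made-up test results for a generative AI app.
    sample = [
        ("approve", "approve", "group_a"),
        ("deny",    "approve", "group_b"),
        ("approve", "approve", "group_a"),
        ("deny",    "deny",    "group_b"),
    ]
    overall, by_group, disparity = evaluate(sample)
    print(f"Overall accuracy: {overall:.2f}")
    print(f"Accuracy by group: {by_group}")
    print(f"Worst/best group ratio: {disparity:.2f}")  # a low ratio would warrant a closer look
```

A real program would test far more than exact-match accuracy, of course, but even a simple check like this shows why structured field data matters: you cannot measure a disparity you never recorded.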

As we mentioned earlier, ARIA is still in its preliminary stages. Interested parties can sign up for more information through the ARIA website or by emailing the program team at NIST directly.

Either way, the more help society can get for assessing AI risks, the better. 
