Another Way of Looking at AI Risk

Today we return to artificial intelligence, since these days compliance officers need all the good advice they can get on the subject. The New York City Bar Association recently published a paper on how AI might help with anti-money laundering compliance, and along the way raised several issues about AI that every compliance officer should contemplate.

For compliance officers in financial services, the paper is a useful read because it examines the capabilities your AML compliance program needs and how technology can help deliver them. Even for compliance officers in other sectors, however, the paper is valuable because it helps you understand how AI really works — and therefore, what safeguards need to be in place so that you can use AI wisely.

The most important concept in the paper is that AI is a model. The AI takes data as an input, runs it through predetermined algorithms, and gives you a result. 

In that respect, AI isn’t much different from the spreadsheets people use to perform data analysis. You spend lots of time configuring the spreadsheet — that is, configuring the model — and then you input data and get a result. Different data will generate different results, but the model processing all that data remains unchanged.
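
To make that analogy concrete, here is a minimal, hypothetical sketch in Python. The "model" is a fixed scoring function whose weights and thresholds are invented purely for illustration; only the input data changes from run to run.

```python
# A toy "model": fixed logic, configured once, applied to whatever data comes in.
# All weights and thresholds here are invented for illustration only.

def transaction_risk_score(amount: float, country_risk: float, past_alerts: int) -> float:
    """Return a 0-1 risk score from a fixed, predetermined formula."""
    score = 0.5 * min(amount / 10_000, 1.0)      # larger transactions score higher
    score += 0.3 * country_risk                  # jurisdiction risk, expressed as 0-1
    score += 0.2 * min(past_alerts / 5, 1.0)     # customer's prior alert history
    return round(score, 2)

# Different inputs produce different outputs, but the model itself never changes.
print(transaction_risk_score(2_500, 0.1, 0))    # low-risk example
print(transaction_risk_score(45_000, 0.9, 3))   # high-risk example
```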

In the financial services world, people worry all the time about the accuracy of the model. It’s known as “model risk,” and large firms will routinely have a “vice president of model risk” or some similar title. Their job is to make sure that the data going into the model is complete and correct, and that the model itself has been configured correctly to give accurate answers. 

Then financial analysts go forth with their models and forecast earnings, or devise hedging strategies, or do whatever else it is that financial analysts do. The models are tools they use to perform their work routines.

So how would all that work when the model is artificial intelligence rather than a spreadsheet? What new opportunities arise, and what new risks need to be addressed? 

That brings us back to the NYC Bar Association paper.

The Perils of AI Models

First, the paper explains why something like AML compliance is such a compelling use-case for artificial intelligence. AI has the potential to be a fantastic tool for something like transaction monitoring, where the algorithm can analyze more data, more quickly and more accurately, than a human ever could. As the paper itself says, “One defining feature of AI systems is their ability to process large volumes of both structured and unstructured data.”
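
To make the transaction-monitoring idea concrete, here is a minimal, hypothetical sketch, not anything the paper itself prescribes. It uses an off-the-shelf unsupervised anomaly detector (scikit-learn’s IsolationForest, with numpy for synthetic data) to flag transactions that look unusual relative to the rest; the features and numbers are invented for illustration.

```python
# Minimal sketch of AI-style transaction monitoring: an unsupervised model flags
# transactions that look unusual. Features and numbers are invented for illustration.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Synthetic transactions: [amount, hour of day, customer's transactions in past 24h]
normal = np.column_stack([
    rng.normal(200, 75, 1_000),     # everyday amounts
    rng.integers(8, 20, 1_000),     # business hours
    rng.integers(1, 5, 1_000),      # modest daily activity
])
suspicious = np.array([[9_500, 3, 40], [12_000, 2, 55]])  # large, late-night, high-velocity
transactions = np.vstack([normal, suspicious])

# Fit the model on all observed transactions and flag the outliers.
model = IsolationForest(contamination=0.01, random_state=0)
labels = model.fit_predict(transactions)   # -1 = flagged as anomalous, 1 = normal

flagged = np.where(labels == -1)[0]
print(f"{len(flagged)} transactions flagged for analyst review")
```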

Sounds great! Now let’s consider all the compliance challenges hidden within that nifty idea.

  • How do you control the AI system’s consumption of non-public data, including data that might be subject to various data privacy regulations?
  • How do you ensure that the data you feed into the system is accurate, especially if you’re relying on data purchased from a vendor or data sourced from social media platforms?
  • How do you test the results for accuracy and fairness, especially if you cannot easily inspect the AI source code itself?
  • How can you test the results for reliability, if the AI model keeps evolving and may give you different answers over time? (A rough sketch of that kind of testing follows this list.)
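
One common way to get at those last two questions is to hold back a fixed benchmark set of cases with known, human-reviewed outcomes, re-score it every time the model changes, and escalate when agreement drifts too far. Here is a minimal, hypothetical sketch of that idea; the benchmark cases, stand-in model versions, and tolerance are all invented for illustration.

```python
# Minimal sketch of output-reliability testing for an evolving model:
# re-score a fixed, human-reviewed benchmark after every model update and
# escalate if agreement with the expected outcomes drops too far.
# Benchmark cases, model versions, and the tolerance are invented for illustration.

BENCHMARK = [
    # (case_id, features, expected_label) -- expected labels set by human review
    ("case-001", {"amount": 250,    "country_risk": 0.1}, "clear"),
    ("case-002", {"amount": 48_000, "country_risk": 0.9}, "suspicious"),
    ("case-003", {"amount": 9_900,  "country_risk": 0.7}, "suspicious"),
    ("case-004", {"amount": 120,    "country_risk": 0.2}, "clear"),
]

def agreement_rate(model_fn) -> float:
    """Share of benchmark cases where the model matches the human-reviewed label."""
    hits = sum(1 for _, feats, expected in BENCHMARK if model_fn(feats) == expected)
    return hits / len(BENCHMARK)

def model_v1(feats):  # stand-in for last quarter's model
    return "suspicious" if feats["amount"] > 10_000 else "clear"

def model_v2(feats):  # stand-in for the retrained model now being deployed
    return "suspicious" if feats["amount"] * (1 + feats["country_risk"]) > 12_000 else "clear"

DRIFT_TOLERANCE = 0.25  # how far agreement may drop before someone must review the model

baseline, candidate = agreement_rate(model_v1), agreement_rate(model_v2)
print(f"v1 agreement: {baseline:.0%}, v2 agreement: {candidate:.0%}")
if baseline - candidate > DRIFT_TOLERANCE:
    print("Escalate: the new model version disagrees with reviewed outcomes too often.")
```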

Typically financial firms try to manage these model risks with — wait for it — model risk management. Moreover, the firms are expected to use model risk management frameworks for consistency across the whole enterprise. GRC vendors and consulting firms sell such frameworks, financial regulators examine the frameworks during regulatory examinations, and life goes on.

Except, as the NYC Bar Association paper states, that typical approach to model risk may no longer work once artificial intelligence enters the picture:

The complexity of AI models creates challenges for typical MRM functions. For example, the increased complexity of model inputs and the ways in which models evolve may make traditional MRM processes less effective. Monitoring outputs and performance may also make sense for AI MRM rather than the more traditional MRM that focuses on assessment of inputs. 

Now we’re getting to the heart of the matter. How are companies going to refashion their model risk management for this new, more complex world of artificial intelligence? 

Model Risk for Everyone!

This, I think, is the principal challenge that AI poses for compliance officers. If AI is going to start handling business processes that humans historically would perform, then you’re replacing the human with a software model. So how are you going to manage the model risks of that new working environment? 

Go back to the AML example. Rather than training employees to gather data and study it carefully so they can make good decisions about whether a transaction is suspicious, you’ll need to spend more time on data validation controls at the beginning and on testing the results at the end. (New York state regulators have proposed some useful rules along these lines, by the way.)
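
As an entirely hypothetical illustration of what "data validation controls at the beginning" might look like, here is a minimal sketch that checks incoming transaction records before they ever reach the monitoring model. The field names and rules are invented for illustration, not taken from any regulation or from the paper.

```python
# Minimal sketch of front-end data validation: quarantine records that fail
# basic checks before they reach the monitoring model.
# Field names and rules are invented for illustration only.

REQUIRED_FIELDS = {"transaction_id", "customer_id", "amount", "currency", "timestamp"}
ALLOWED_CURRENCIES = {"USD", "EUR", "GBP"}

def validate_record(record: dict) -> list[str]:
    """Return a list of validation problems; an empty list means the record passes."""
    problems = []
    missing = REQUIRED_FIELDS - record.keys()
    if missing:
        problems.append(f"missing fields: {sorted(missing)}")
    amount = record.get("amount")
    if not isinstance(amount, (int, float)) or amount <= 0:
        problems.append("amount must be a positive number")
    if record.get("currency") not in ALLOWED_CURRENCIES:
        problems.append("unrecognized currency code")
    return problems

records = [
    {"transaction_id": "t-1", "customer_id": "c-9", "amount": 300.0,
     "currency": "USD", "timestamp": "2024-01-05T10:15:00Z"},
    {"transaction_id": "t-2", "customer_id": "c-9", "amount": -50,
     "currency": "XXX", "timestamp": "2024-01-05T10:16:00Z"},
]

for rec in records:
    issues = validate_record(rec)
    status = "pass" if not issues else f"quarantine ({'; '.join(issues)})"
    print(rec["transaction_id"], status)
```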

Yes, compliance officers had to consider those two issues even when humans were in the middle of the AML process, but now those two issues become the whole job, because there is no human in the middle. And we could say the same about many other processes, both inside the compliance department (say, third-party due diligence) and outside it (setting product prices or churning out targeted advertising, for example).

In short, as AI use-cases crop up across your whole enterprise, model risk will crop up across the whole enterprise too — and it will be a new, more complicated sort of model risk, where traditional model risk management might no longer work. So what new frameworks will come along to address that? What new roles and responsibilities will emerge among compliance, internal audit, IT, and business operations teams?

We’ll be answering those questions for a long time. The NYC Bar Association’s paper simply brings them into sharp relief.
