Talking AI and ‘Model Risk’

Today we have a heads-up for all you compliance officers in the financial services sector: Radical Compliance and Forensic Risk Alliance will be hosting a webinar on Jan. 29 exploring the risks of artificial intelligence, and the new governance and risk management methods you'll need to develop to keep those AI risks in check.

The event will be on Wednesday, Jan. 29, at 8 am ET/2 pm CET — a bit early for U.S. audiences, but we wanted a time that works for European audiences as well since compliance with the EU AI Act will be one significant part of the discussion. You can register through the link above and we encourage everyone thinking about artificial intelligence (which should be everyone) to attend.

Our goal with the webinar is to explore four significant questions for AI compliance these days:

  • How is regulation evolving in relation to AI?
  • How will old ideas of “model risk” apply to artificial intelligence?
  • What new roles and responsibilities might businesses need to address privacy, security, and data validation concerns?
  • How can you monitor and adjust AI performance over time to stay within regulatory guidelines?

Perhaps the most important point above is that second one, about model risk: the chance that the algorithms you've built to process data are somehow flawed and give you bad results.

Model risk is nothing new in the financial services world; analysts use models all the time to monitor risk or figure out trading strategies, and large firms will routinely have a “vice president of model risk” or some similar role to assure that the firm’s models have been configured correctly and are behaving properly.

The issue today is that artificial intelligence poses new types of model risk that firms haven’t encountered before. So how do you identify those risks? How do you figure out the right set of policies and procedures to govern those risks? How do you test and validate AI behavior? 

That’s the challenge ahead for compliance officers, which we want to unpack in our webinar.

The Perils of AI Models

If you want an example of this challenge in practice, consider how AI could be used to help with anti-money laundering compliance. AI has the potential to be a fantastic tool for transaction monitoring: the algorithm can analyze more data, more quickly and more accurately, than any human ever could, helping you flag more suspicious transactions in less time.
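
To make that concrete, here's a minimal sketch of what AI-driven transaction monitoring might look like under the hood. Everything here is hypothetical: the features, the toy data, and the choice of an off-the-shelf anomaly detector (scikit-learn's IsolationForest) are stand-ins for whatever a real vendor system would actually use.

```python
# A minimal, hypothetical sketch of AI-assisted transaction monitoring.
# The features, thresholds, and toy data are illustrative only; a real
# deployment would use a vendor model and far richer inputs.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(seed=42)

# Toy historical transactions: [amount_usd, hour_of_day, jurisdictions_involved]
history = np.column_stack([
    rng.lognormal(mean=6, sigma=1, size=1000),  # typical amounts
    rng.integers(8, 18, size=1000),             # business hours
    rng.integers(1, 3, size=1000),              # one or two jurisdictions
])

# Train an anomaly detector on past behavior.
model = IsolationForest(contamination=0.01, random_state=0).fit(history)

# Score new transactions; a prediction of -1 means "anomalous,"
# so the transaction gets flagged for human review.
new_txns = np.array([
    [450.0, 14, 1],    # ordinary mid-day payment
    [95000.0, 3, 4],   # large amount, 3 am, four jurisdictions
])
for txn, label in zip(new_txns, model.predict(new_txns)):
    status = "FLAG for review" if label == -1 else "ok"
    print(f"amount=${txn[0]:>10,.2f}  ->  {status}")
```

The point isn't this particular algorithm; it's that a statistical model, not a human analyst, now decides which transactions get a second look.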

Sounds great, right? Now let’s consider all the compliance challenges hidden within that nifty idea.

  • How do you control the model's consumption of non-public data, including data that might be subject to various data privacy regulations?
  • How do you assure that the data you feed into the system is accurate, especially if you're relying on data purchased from a vendor or data sourced from social media platforms? (See the sketch after this list.)
  • How do you test the results for accuracy and fairness, especially if you cannot easily inspect the AI source code itself?
  • How can you test the results for reliability, if the AI model keeps evolving and may give you different answers over time? 
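
As for that second bullet, here's a minimal sketch of what validating vendor-supplied data before it ever reaches the model might look like. The schema, field names, and rules are illustrative assumptions only:

```python
# A hypothetical sketch of validating vendor-supplied data before it
# reaches an AI model. The schema and rules below are illustrative only.
from datetime import datetime

REQUIRED_FIELDS = {"txn_id", "amount_usd", "timestamp", "counterparty"}

def validate_record(record: dict) -> list[str]:
    """Return a list of validation problems (an empty list means clean)."""
    problems = []
    missing = REQUIRED_FIELDS - record.keys()
    if missing:
        problems.append(f"missing fields: {sorted(missing)}")
    amount = record.get("amount_usd")
    if amount is not None and (not isinstance(amount, (int, float)) or amount <= 0):
        problems.append(f"implausible amount: {amount!r}")
    ts = record.get("timestamp")
    if ts is not None:
        try:
            datetime.fromisoformat(ts)
        except (TypeError, ValueError):
            problems.append(f"unparseable timestamp: {ts!r}")
    return problems

# Quarantine bad records instead of feeding them to the model.
batch = [
    {"txn_id": "T1", "amount_usd": 250.0, "timestamp": "2025-01-15T09:30:00", "counterparty": "Acme"},
    {"txn_id": "T2", "amount_usd": -50.0, "timestamp": "not a date"},
]
clean = [r for r in batch if not validate_record(r)]
print(f"{len(clean)} of {len(batch)} records passed validation")
```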

Typically financial firms try to manage these model risks with — wait for it — model risk management. They use a framework to govern the risks, financial regulators examine those frameworks during regulatory examinations, and so forth.

Except that typical approach to managing model risk may no longer work once artificial intelligence enters the picture. The New York City Bar Association published a paper last year on this very subject, and here's the key passage:

The complexity of AI models creates challenges for typical [model risk management] functions. For example, the increased complexity of model inputs and the ways in which models evolve may make traditional MRM processes less effective. Monitoring outputs and performance may also make sense for AI MRM rather than the more traditional MRM that focuses on assessment of inputs. 
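
What would output-focused monitoring look like in practice? Here's a minimal sketch, with made-up numbers throughout: it tracks how often the model flags transactions each week and raises an alert when that rate drifts too far from the baseline observed during validation. The baseline, the weekly tallies, and the 50 percent tolerance are all assumptions, not a recommended standard.

```python
# A minimal, hypothetical sketch of output monitoring: track how often the
# model flags transactions each week, and alert when the rate drifts too
# far from the baseline. All numbers here are illustrative assumptions.
BASELINE_FLAG_RATE = 0.012   # flags per transaction observed during validation

def check_drift(flags: int, total: int, tolerance: float = 0.5) -> str:
    """Alert if the observed flag rate moves more than `tolerance`
    (as a fraction of the baseline) away from the validated baseline."""
    rate = flags / total
    drift = abs(rate - BASELINE_FLAG_RATE) / BASELINE_FLAG_RATE
    if drift > tolerance:
        return f"ALERT: flag rate {rate:.3%} vs baseline {BASELINE_FLAG_RATE:.3%}"
    return f"ok: flag rate {rate:.3%}"

# Weekly production tallies (made up): (transactions scored, transactions flagged)
weekly = [(120_000, 1_450), (118_000, 1_380), (121_000, 2_900)]
for total, flags in weekly:
    print(check_drift(flags, total))
```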

Now we’re getting to the heart of the matter. How are companies going to refashion their model risk management for this new, more complex world of artificial intelligence? 

Model Risk for Everyone!

This, I think, is the principal challenge that AI poses for compliance officers. If AI is going to start handling business processes that humans historically would perform, then you’re replacing the human with a software model. So how are you going to manage the model risks of that new working environment? 

Moreover, these new model risks aren't confined to the financial services world. As AI use cases proliferate across whole business sectors, so will model risk. If financial services firms are still racing to understand the challenge here, when they're the most savvy and sophisticated risk management folks out there, what does that portend for compliance and audit professionals in other sectors?

All that and more is what we want to discuss on our webinar. I do hope you’ll join us!
