Fresh Thoughts on AI and Compliance 

A few weeks ago I had the privilege of moderating (yet another) webinar on artificial intelligence and its implications for corporate compliance functions. The discussion was excellent, and as usual I took lots of notes. For all you AI aficionados out there who missed it, I’ve recapped some of the best insights below.

First, one of the speakers introduced me to a concept in artificial intelligence that I hadn’t heard before, but that could be an important one in corporate compliance circles: overtrust, where humans put too much faith in what an AI system tells them.

We can all envision a few examples of how overtrust might creep into our daily lives. You rely on your car’s navigation system so much, you stop second-guessing how it directs you through your hometown. You rely on Excel so much, you never confirm that its arithmetic is correct. You use spell-check so often, you’ve stopped proofreading your memos to the board.

Now imagine how overtrust might look in the ethics and compliance world. If we use generative AI to develop chatbots that answer employees’ ethics questions, will those employees stop reading the Code of Conduct? Will they simply rely on the chatbot’s judgment, rather than their own? Is that even such a bad idea? 

Honestly, I don’t know. I don’t love the prospect of reducing employees to mindless automatons, doing whatever the AI chatbot tells them to do. Then again, understanding how a potentially large set of regulations, company policies, and ethical directives fits into your work routine can be complicated. If an AI chatbot removes the subjective judgment from that process — subjective judgment which might be flawed, and get the company into hot water — isn’t that desirable? 

Or imagine how the compliance team itself might use AI. You could have an AI-driven third-party risk management system (vendors everywhere are racing to deliver exactly that), where you simply ask, “AI, which of my third parties pose the greatest compliance risk to me right now?” and the AI gives you a list. What evidence would you need to feel comfortable that its list is accurate? 

One answer would be for the AI to provide a risk score for each party on the list. It might even break down that score by various risk factors (corruption, allegations against the party, cybersecurity audit results, NGO reports on human trafficking, and so forth), so you can see where that risk score came from. Vendors are well aware that trust is critical to the success of these systems, and that transparency into where the answers come from will be key to earning it; they’re working on exactly that.
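
To make that concrete, here is a minimal sketch of what a transparent, factor-based score might look like. The factor names, weights, and numbers are purely hypothetical; no particular vendor’s methodology is implied.

```python
# Hypothetical illustration: a composite third-party risk score built from weighted
# factor scores, so the overall number can always be traced back to its parts.

RISK_FACTOR_WEIGHTS = {
    "corruption_perception": 0.35,   # e.g., country/industry corruption indices
    "adverse_media": 0.25,           # allegations or enforcement actions against the party
    "cybersecurity_audit": 0.25,     # results of the party's latest security audit
    "human_rights_reports": 0.15,    # NGO reporting on trafficking, forced labor, etc.
}

def composite_risk_score(factor_scores: dict[str, float]) -> dict:
    """Combine per-factor scores (0-100, higher = riskier) into one weighted score,
    returning each factor's contribution so the result stays auditable."""
    contributions = {
        factor: factor_scores.get(factor, 0.0) * weight
        for factor, weight in RISK_FACTOR_WEIGHTS.items()
    }
    return {
        "overall": round(sum(contributions.values()), 1),
        "breakdown": {factor: round(c, 1) for factor, c in contributions.items()},
    }

# One fictional third party's factor scores:
print(composite_risk_score({
    "corruption_perception": 80,
    "adverse_media": 60,
    "cybersecurity_audit": 40,
    "human_rights_reports": 20,
}))
# -> overall 56.0, with corruption contributing 28.0, adverse media 15.0,
#    the cybersecurity audit 10.0, and human-rights reports 3.0
```

The point is not the arithmetic; it is that a compliance officer can see why a party landed where it did, instead of taking a single opaque number on faith.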

Through all of this, however, we need to remember that the actual intelligence in AI — the trillions of parameters that generative AI uses to answer the questions we ask — is beyond our ability to audit. And as AI answers become increasingly persuasive, overtrust will become an increasingly dangerous narcotic.

That’s going to drive up the importance of configuring the AI correctly on the front end, so to speak. It will need a carefully calibrated diet of datasets and rules so that the answers it gives are in alignment with (1) the law; (2) company policies; and (3) ethical values.
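
As a purely illustrative sketch (no particular product or API implied), that “diet” might amount to a configuration that pins the system to vetted sources plus a few non-negotiable rules. Every name below is hypothetical.

```python
# Hypothetical configuration for a compliance chatbot: the vetted documents it may
# draw on, and the rules its answers must always follow. All names are illustrative.

compliance_assistant_config = {
    "knowledge_sources": [  # the "diet" of datasets
        "code_of_conduct_2024.pdf",
        "anti_bribery_policy.pdf",
        "gifts_and_hospitality_policy.pdf",
        "doj_fcpa_resource_guide.pdf",
    ],
    "rules": [  # alignment with the law, company policy, and ethical values
        "Answer only from the listed knowledge sources, and cite the source document.",
        "If a question involves potential legal exposure, escalate to the compliance team.",
        "If sources conflict or confidence is low, say so rather than guess.",
        "Never advise an employee to act contrary to the Code of Conduct.",
    ],
    "escalation_contact": "ethics-helpline@example.com",
}
```

However it is implemented, that calibration work happens on the front end, before anyone ever asks the chatbot a question.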

Those are all important issues for compliance professionals to keep in mind as AI keeps pushing into the field. Something tells me we’ll be grappling with them for a long time.

Fraud Risk, Compliance, and AI

Our webinar also picked up an interesting thread on AI-driven fraud risks. Consider this.

One well-established fraud risk is the business email compromise. Hackers spoof the CFO’s email address, then send a fake email telling someone in accounting to wire $10 million to an overseas account to close a deal; time is of the essence, don’t delay, blah blah blah. Someone wires the money, and then it’s gone.

Business email compromises (BECs) are a pervasive scourge, but they’re not new. Companies know about them, and employ various anti-fraud procedures (say, confirming the wire transfer with another senior executive) to fight them. 

Now imagine BECs in an AI-enhanced world. Rather than a spoofed email, that employee in accounts payable gets a phone call from an AI impersonating your CFO’s voice: “This is an emergency, we need to grant this approval right now.” The AI might even add stress tones to the CFO’s voice, complete with a baby crying in the background or some other pressure tactic.

The employee’s natural instinct (most people’s natural instinct, I suspect) will be to want to help. So how will your company strengthen its approval processes and employee training to resist these very sophisticated, compelling schemes? 

That’s going to be hard. We’ll be training employees to follow procedure no matter how compelling and persuasive the supposed CFO’s request is. We’ll be training them to put their blind faith in procedure — which brings us right back to the overtrust and Code of Conduct conundrum I mentioned earlier, where blind obedience to procedure felt wrong. Here it feels right. 

Again, I don’t know what the correct answers are to these AI scenarios. But they’re coming, and we’ll need to figure them out eventually.
