AI and Policy Chatbots, Part II

Today I want to return to the idea of using an AI-driven chatbot as a compliance policy adviser for employees. On one hand, the potential gains for your compliance program are clear; on the other, are we underestimating some of the risks that AI chatbots might bring to your program too?

This particular bee crept into my bonnet as I was reading the latest compliance benchmarking survey from law firm White & Case, released last week. Like all compliance benchmarking reports these days, this one (which polled 265 senior compliance and legal officers) included a discussion of how compliance officers are using artificial intelligence. Sixty-four percent said their teams were using AI to at least some extent, and 51 percent of that group said they use AI for “monitoring of employees accessing policies and procedures.” 

Hold up. Are we sure that tracking employees’ policy inquiries couldn’t end up causing a violation of whistleblower protection laws? 

Hear me out. Last week we had a fascinating case study of a large IT services firm that rolled out an AI policy chatbot to its 23,000 employees worldwide. Once the chatbot went live, employees started asking it far more questions about the firm’s compliance policies than they had ever asked the compliance department before.

That’s the good news: employees engaging with your compliance policies more often. 

But even as I listened to that presentation about how the AI policy chatbot worked, I wondered: Could a company have its AI chatbot track which individual employees were asking what questions about compliance policies? 

From a pure software development perspective, the answer is yes. You could design your IT systems so that employees log onto their workstations, and your friendly AI chatbot waits in the corner until a question arises. The employee asks the chatbot, and the bot can log exactly what question that employee asked.
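For the software-minded, here is a minimal sketch of that logging step, in Python with hypothetical names throughout (this is not any vendor’s actual product): a chatbot back end that ties every policy question to the authenticated employee who asked it.

```python
from datetime import datetime, timezone

# In a real deployment this would be a database table; a list keeps the sketch simple.
inquiry_log = []

def lookup_policy_answer(question: str) -> str:
    # Placeholder for whatever retrieval or LLM call the chatbot vendor actually supplies.
    return "Here is the relevant section of the anti-bribery policy..."

def handle_policy_question(employee_id: str, question: str) -> str:
    """Answer a policy question and record exactly who asked it."""
    inquiry_log.append({
        "employee_id": employee_id,   # the identifying detail at the heart of the concern
        "question": question,
        "asked_at": datetime.now(timezone.utc).isoformat(),
    })
    return lookup_policy_answer(question)
```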

Now suppose that employee subsequently calls the whistleblower hotline to submit an anonymous report. What’s to stop AI from comparing the details of that report against previous policy inquiries, and deducing the probable identity of the whistleblower? 

From a pure software development perspective, the answer is nothing. 
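To make that concrete, here is an equally hypothetical sketch of the deduction, using a crude text-similarity measure from Python’s standard library and the inquiry log from the sketch above. Real AI tooling would use far more capable techniques, such as text embeddings; that is exactly the worry.

```python
from difflib import SequenceMatcher

def rank_likely_reporters(hotline_report: str, inquiry_log: list) -> list:
    """Score each employee's logged policy questions against an 'anonymous' hotline report."""
    scores = {}
    for entry in inquiry_log:
        similarity = SequenceMatcher(None, hotline_report.lower(),
                                     entry["question"].lower()).ratio()
        scores[entry["employee_id"]] = max(scores.get(entry["employee_id"], 0.0), similarity)
    # The top of the sorted list is the "probable whistleblower": precisely the deduction
    # that anonymous-reporting requirements are meant to prevent.
    return sorted(scores.items(), key=lambda item: item[1], reverse=True)
```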

AI and Blurred Lines Risk

This is an important point for compliance officers to ponder. As we embrace AI, the lines between one system and another will quickly become blurry. If you can use AI both as a policy management guru and a whistleblower hotline tool, the separation between those two systems will vanish. Both the policy guru and the hotline are just big troves of data, and AI will be able to suss out relationships between the two troves that humans previously couldn’t.

Except, sussing out relationships like that is illegal. Whistleblower protection laws around the world say that corporations must maintain hotlines that allow anonymous reporting. 

Until now that duty was fairly easy to fulfill, because corporate compliance programs couldn’t perform the analysis necessary to identify probable whistleblowers (at least, not without considerable expertise and data analysis capability). Now, with properly configured AI, those obstacles fall away. So we’ll need to design compliance programs and technology that won’t do this, and that’s going to require a lot more forethought.

Let’s imagine how all this might look in practice. Say you have a policy chatbot humming along in the background, waiting for an inquiry. An employee asks a few questions about the company’s anti-bribery policies and whether certain contract provisions might violate the policy. After answering, the chatbot asks, “Would you like to discuss this scenario with the compliance officer? Or remember, you can also use our anonymous hotline!”

What would the back-end of all that look like? Your policy chatbot probably does want to track employee inquiries to at least some extent, but your whistleblower system needs to offer anonymity. So could you configure your IT systems to track policy inquiries, but then switch tracking off if the employee jumps over to your internal hotline?
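One way to picture that configuration, again as a hypothetical sketch rather than any real product, and reusing the handle_policy_question routine from the first sketch: a single front end routes to two back ends, and the hotline path never receives the employee identifier at all, rather than receiving it and promising not to keep it.

```python
# Hotline cases land in a separate store that holds no user identifier and no session linkage.
anonymous_case_queue = []

def handle_hotline_report(text: str) -> str:
    anonymous_case_queue.append({"report": text})
    return "Thank you. Your report has been submitted anonymously."

def route_message(employee_id: str, channel: str, text: str) -> str:
    if channel == "policy":
        # Tracked per employee, as in the first sketch.
        return handle_policy_question(employee_id, text)
    if channel == "hotline":
        # The identifier is dropped here; the hotline system never sees it.
        return handle_hotline_report(text)
    raise ValueError(f"unknown channel: {channel}")
```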

I’m not a product implementation specialist, but I suspect those folks would tell us that yes, you can wall off one system from another — but we’re placing an awful lot of faith in the strength of those walls. How do you guarantee that one trove of data can’t be commingled with another for AI analysis? What about some clever IT person who subverts all that by manipulating your IT general controls to make himself a super-administrator?

The Coming Compliance Program Singularity

My point is that AI will allow all the elements of a compliance program to become more consolidated and more integrated — but we won’t always want that. As our chatbot-versus-hotline example shows, in some circumstances that AI-powered consolidation could end up violating the law or undermining how we want compliance programs to work.

This will confront compliance officers with a few issues.

First, we need to remember how employees and others might view an AI-enhanced compliance program. To them, a chatbot that offers advice about policies, serves up relevant training, and asks whether they’d like to report something to the compliance officer is all one interaction. Compliance officers and regulators might perceive policies, training, and the hotline as separate elements of a compliance program, but employees won’t. To them, those distinct program elements have all collapsed into a single thing.

Second, then, you’ll need to get better at planning the IT systems behind such a program. You’ll need to think more not just about systems integration, but also systems segregation: keeping certain systems or piles of data apart from each other, so AI won’t be able to do things such as help a miscreant manager identify an anonymous whistleblower. You’ll need to think more about IT general controls and data management. You’ll need to think more about working with the GRC vendors supplying all this technology.
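Systems segregation could even become an explicit, testable control. Here is a toy illustration, assuming a hypothetical in-house data catalog, in which the chatbot’s analytics service simply holds no grant on the hotline data store:

```python
# Which service may read which data store; hypothetical names throughout.
DATA_GRANTS = {
    "policy_chatbot_analytics": {"policy_inquiry_log"},
    "hotline_case_management":  {"anonymous_case_queue"},
}

def authorize_read(service: str, dataset: str) -> None:
    """Raise if a service tries to read a data store it holds no grant for."""
    if dataset not in DATA_GRANTS.get(service, set()):
        raise PermissionError(f"{service} may not read {dataset}")

authorize_read("policy_chatbot_analytics", "policy_inquiry_log")      # allowed
# authorize_read("policy_chatbot_analytics", "anonymous_case_queue")  # would raise PermissionError
```

The point is less the code than the design stance: the commingling you want to prevent should be impossible by construction, not merely discouraged by policy.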

Third, we’ll need to consider whether in some situations it will be wiser to voluntarily disarm — that is, to limit what AI might do for us, because those powers will be too tempting or too vulnerable to manipulation, or might even lead us into possible legal violations. 

I don’t profess to have all the answers to these questions. But clearly there are a lot of them, and they’ll need answers eventually.