AI Chatbots and Policy Management
Compliance officers talk all the time about how artificial intelligence has the potential to transform the programs you run. Today I want to unpack an example of how that might work, courtesy of a case study I saw last week involving AI and policy management.
The company in question is a global IT services firm (23,000 employees in offices around the world), which developed an AI chatbot to answer employee questions about gifts and entertainment, harassment, antitrust issues, and all the other compliance policies a large business typically has. Two of its compliance leaders presented their work last week at a packed session of the Society of Corporate Compliance & Ethics’ annual conference.
I won’t name the company here since the presenters didn’t know I was in the room (you can find them easily enough if you study the conference lineup) — but I did take lots of notes on both how the team built their AI chatbot, and how an AI project like this can transform “traditional” compliance program management.
First, how they built the chatbot. The team used an AI software tool called Bryter, developed specifically to help corporate legal teams bring AI into their workflows, but otherwise handled the work in-house (as one would expect from an IT services firm). The company’s head of compliance training, policy, and communications was the day-to-day lead on the project, with help and oversight from the company’s vice president of legal transformation.
Coding the basic chatbot didn’t take long. The bot then churned through the company’s compliance policies to learn what those policies said, and the compliance team refined the front-end interface to make it user-friendly for employees. Essentially, the company took its compliance policy library and converted it into an interactive chatbot, akin to employees asking ChatGPT for policy advice.
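The presenters didn’t walk through the technical internals, so take the following as my own rough illustration of the general pattern rather than their actual build: load the policy library, retrieve the policy relevant to a question, and hand both to a language model. Everything below (the folder layout, the ask_llm stub) is hypothetical.

```python
# A minimal sketch of the general pattern behind a policy chatbot, assuming a
# retrieval setup. This is my illustration, not the company's Bryter build;
# the folder layout and the ask_llm stub are hypothetical.

from pathlib import Path

def load_policy_library(folder: str) -> dict[str, str]:
    """Read every policy document into memory, keyed by file name."""
    return {p.stem: p.read_text() for p in Path(folder).glob("*.txt")}

def retrieve(question: str, library: dict[str, str]) -> str:
    """Naive keyword retrieval: return the policy that shares the most words
    with the question. A production system would use embeddings instead."""
    q_words = set(question.lower().split())
    def overlap(name: str) -> int:
        return len(q_words & set(library[name].lower().split()))
    return library[max(library, key=overlap)]

def ask_llm(prompt: str) -> str:
    """Stub for whatever model the platform calls; swap in a real API here."""
    return "[model answer would appear here]"

def answer(question: str, library: dict[str, str]) -> str:
    """Ground the model's answer in the retrieved policy text."""
    policy_text = retrieve(question, library)
    return ask_llm(
        "Answer the employee's question using ONLY this policy text.\n"
        f"Policy:\n{policy_text}\n\nQuestion: {question}"
    )
```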
Next came extensive testing. First the compliance training head tested the chatbot himself, one compliance policy at a time. As the bot improved, the testing team expanded to include more compliance and legal executives in more regions around the world, testing the bot on more challenging questions across more subjects.
After several months, they rolled out the bot worldwide at the start of 2024. They also gave it a name, Ethos, and a friendly robot avatar.
That’s the mechanics of it, at least. Other companies building AI chatbots for policy management will take their own approaches, depending on the talent, budget, and policy library they have.
Lessons Learned Along the Way
If you want to try a similar project at home, the presenters at SCCE offered plenty of tips they learned along the way.
First, keep the scope of the AI’s knowledge and behavior focused. You want all employees to receive answers that are clear, consistent, and at low risk of misinterpretation. So constrain the AI’s ability to be creative, lest it use metaphors that don’t make sense or crack jokes that miss their mark. Categorize the questions the chatbot might encounter, so you can exercise more control over the answers it gives.
One detail I found striking: the company uses the chatbot for questions about ethics and compliance policies only, not for questions about HR. Why? Because HR policies can differ from one country to the next depending on local law, and that complexity would be too much for this chatbot to manage. Compliance policies, in contrast, tend to be consistent across the whole enterprise.
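To make those two ideas concrete, here’s a minimal sketch of how a question router might enforce that scope. The categories and keywords are my invention, not the company’s; a real router would be more sophisticated, but the principle is the same: every question lands in a category the compliance team controls, and HR topics get turned away.

```python
# A minimal sketch, assuming a keyword-based router (my invention, not the
# presenters' design): classify each question into a compliance category the
# team controls, and decline anything that looks like an HR question.

COMPLIANCE_CATEGORIES = {
    "gifts": ["gift", "entertainment", "hospitality"],
    "antitrust": ["competitor", "pricing", "antitrust"],
    "harassment": ["harassment", "bullying"],
}
HR_KEYWORDS = ["vacation", "salary", "benefits", "parental leave"]

def route(question: str) -> str:
    q = question.lower()
    if any(word in q for word in HR_KEYWORDS):
        return "out_of_scope"  # HR policies differ by local law; don't answer
    for category, keywords in COMPLIANCE_CATEGORIES.items():
        if any(word in q for word in keywords):
            return category
    return "general"

# Each category can then get its own tightly worded prompt and a temperature
# of 0, so answers stay consistent and literal rather than creative.
print(route("Can I give a client a gift?"))            # -> gifts
print(route("How much vacation do I get in France?"))  # -> out_of_scope
```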
Second, plan your testing phase carefully. An AI chatbot won’t give flawless advice from the start. That’s fine, but it means you need to create a mechanism for users to flag bad guidance — and a mechanism to digest that feedback so the AI will improve over time. Have a rough plan for pilot programs and expanded testing groups.
Your first goal should be to master a process to help the chatbot learn; then you can focus on helping it learn how to give better answers on specific compliance subjects.
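The presenters didn’t describe their feedback plumbing in detail, so here’s a hypothetical sketch of that mechanism: log every exchange, let the user flag a bad answer, and give the compliance team a queue of flagged items to review and fold back into the bot.

```python
# A hypothetical feedback loop (my sketch, not the presenters' design):
# append every Q&A exchange to a log, capture the user's flag, and surface
# the flagged items as a review queue for the compliance team.

import datetime
import json
from pathlib import Path

FEEDBACK_LOG = Path("chatbot_feedback.jsonl")

def record_exchange(question: str, answer: str, flagged: bool, note: str = "") -> None:
    """Append one exchange, with the user's flag, to the review log."""
    entry = {
        "timestamp": datetime.datetime.now().isoformat(),
        "question": question,
        "answer": answer,
        "flagged": flagged,
        "note": note,
    }
    with FEEDBACK_LOG.open("a") as f:
        f.write(json.dumps(entry) + "\n")

def review_queue() -> list[dict]:
    """Everything a user flagged, for the compliance team to work through."""
    if not FEEDBACK_LOG.exists():
        return []
    entries = [json.loads(line) for line in FEEDBACK_LOG.read_text().splitlines()]
    return [e for e in entries if e["flagged"]]
```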
Third, think about how to nudge the chatbot to focus on the ethics issues that matter, rather than the precise compliance policies. For example, one tester asked: “I want to fly a foreign government official to Hawaii, and the trip will cost $400. Is that allowed?” The bot replied that since the entertainment limit was $500, yes, the trip was fine — missing the more fundamental issue of why the employee wants to lavish gifts on a foreign official at all.
You can steer an AI to focus on those more important and fundamental ethical issues; but success takes time, testing, and practice. Plan accordingly.
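One way to do that steering (my construction, not necessarily theirs) is a guardrail that checks for red-flag facts before applying any mechanical dollar limit. Rerunning the Hawaii example through such a check:

```python
# A minimal sketch of a pre-answer guardrail (my construction, not the
# presenters'): test for red-flag facts, such as a government official being
# involved, before checking dollar limits, and escalate rather than approve.

ENTERTAINMENT_LIMIT = 500  # dollars; the limit from the Hawaii example

RED_FLAGS = ["government official", "foreign official", "regulator", "minister"]

def review_gift_question(question: str, amount: float) -> str:
    q = question.lower()
    # Ethics check first: a cheap trip for a foreign official is still bribery risk.
    if any(flag in q for flag in RED_FLAGS):
        return ("This involves a government official. Regardless of the amount, "
                "anti-bribery rules apply. Please contact the compliance team.")
    # Only then apply the mechanical policy limit.
    if amount <= ENTERTAINMENT_LIMIT:
        return f"Amounts up to ${ENTERTAINMENT_LIMIT} are within the entertainment limit."
    return f"${amount:.0f} exceeds the ${ENTERTAINMENT_LIMIT} entertainment limit."

print(review_gift_question(
    "I want to fly a foreign government official to Hawaii.", 400))
# -> escalates to compliance, even though $400 is under the $500 limit
```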
How a Chatbot Changes Your Program
What struck me most about this project, however, was that the introduction of artificial intelligence changed the compliance team’s whole approach to policy management.
For example, for many years now compliance officers have been told that if you want employees to understand policies, those policies should be short, clear, and simple. But when you use AI to explain policies to employees, the policies need to be longer and more precise, so that the AI can understand them more fully and give better answers.
In other words, when you use an AI chatbot for policy management, employees are no longer interacting with your policies directly. They’re interacting with the AI. The AI is the one interacting with your policies, and what it needs to understand your policies is quite different from what humans need.
Another striking outcome was that once the company rolled out the AI chatbot to employees, the employees started asking more questions about policy — 3,300 in the first 18 months, a far higher rate than before the chatbot arrived.
In one sense, I think that’s good. It means that you’ve made your policies more accessible to employees, and they’re engaging with the policies more often.
Except, I also wonder whether employees engage with the chatbot simply so they can get a definitive answer about what to do. That is, perhaps they’re using the chatbot to cover their rear ends. They ask it a question, it gives them an answer, and now they have documentation — so that if something goes wrong in the future anyway, they can print out the original answer and say, “See? This is what the chatbot told me! You can’t blame me, I was just doing what the AI said!”
So by introducing a chatbot, the compliance team might end up interacting with employees less on corporate policies; but they’ll be servicing the AI chatbot more to be sure it gives correct policy answers. Rather than simplifying your policy management workflows, you might just end up rearranging the steps so you spend more time on the IT back-end than the human front-end.
That’s enough for this post, and kudos to the SCCE presenters for such a thought-provoking session. I left it more convinced than ever that while AI will transform how compliance programs work, the amount of work compliance teams have to do won’t be declining any time soon.