Chatbots, Compliance, and Risk
Today I want to return to everyone’s favorite technology of the moment, artificial intelligence. To what extent could compliance officers incorporate AI chatbots into their Code of Conduct or internal reporting hotlines?
This has been on my mind lately because Air Canada recently gave us a fascinating example of the consequences of AI gone wrong. The airline had built an AI chatbot to answer customers’ questions, and that chatbot gave the wrong answer to a customer asking about refund policies. The customer took Air Canada to court, and a tribunal quickly gave the obvious answer: Air Canada had to honor the erroneous refund policy that its chatbot had invented.
So how would that common-sense principle apply to chatbots giving advice to employees? What precautions should compliance officers develop today to ensure that your company doesn’t wander into an AI-invented legal trap tomorrow?
Let’s start with the details of that Air Canada case. In November 2022 a Vancouver man, Jake Moffatt, wanted to book an emergency flight to Toronto after the death of his grandmother. He couldn’t decipher Air Canada’s policy for bereavement discounts, so Moffatt asked Air Canada’s AI chatbot for advice.
The chatbot recommended that Moffatt purchase his flight immediately and then request a refund within 90 days. That information was wrong; Air Canada’s actual policy said the airline would not provide bereavement refunds once a flight had already been booked. Moffatt, however, followed the chatbot’s advice (which he neatly documented in a screenshot).
When Moffatt went looking for his AI-promised refund, Air Canada refused. As described in media reports, the airline argued that since the chatbot had also provided a link to the official bereavement policy, Moffatt should have read that link and understood the correct answer. Air Canada offered him nothing more than a $200 coupon for future flights.
Moffatt took Air Canada to court. On Feb. 14 a civil tribunal ruled in his favor, skewering Air Canada in this painful paragraph:
Air Canada argues it cannot be held liable for information provided by one of its agents, servants, or representatives — including a chatbot. It does not explain why it believes that is the case. In effect, Air Canada suggests the chatbot is a separate legal entity that is responsible for its own actions. This is a remarkable submission. While a chatbot has an interactive component, it is still just a part of Air Canada’s website. It should be obvious to Air Canada that it is responsible for all the information on its website. It makes no difference whether the information comes from a static page or a chatbot.
Moffatt got his refund, and good for him — but what does this case tell compliance officers about the potential for AI in your ethics and compliance programs?
‘A Static Page or a Chatbot’
I keep coming back to that last phrase from the tribunal, “It makes no difference whether the information comes from a static page or a chatbot.” In the world of internal ethics hotlines and Codes of Conduct, I’m not so sure that’s true.
After all, for many years Codes of Conduct were static pages; at many companies, they still are. The code is simply a PDF document that lists corporate policies, ethical priorities, mission statements, and the like. Such a code isn’t terribly user-friendly for employees, and it’s little more than wallpaper for the rest of your compliance program, but at least you know it’s accurate.
In recent years we’ve seen more companies move to an interactive Code of Conduct, such as a website or an app. Employees can visit the main page of the Code and then root around more easily, researching the exact issue that might be on their mind. You, the compliance officer, can provide multimedia examples of the policies they’re studying. You can also track user metrics for your Code, and study that information to drive improvements.
I am a big fan of interactive codes. They are a significant step forward from the static pages of a PDF-based code, yet you still don’t need to worry too much about inaccurate information, because someone (presumably you) is writing all that website copy. Maybe some sections fall out of date, but that’s not the same as an AI chatbot conjuring up a whole new policy out of thin air.
Except that AI chatbots are the next logical step in human interaction with technology. So how can compliance officers manage the risk of AI chatbots giving bad advice?
The more interactive and intelligent you make your Code of Conduct, the more it goes from a self-guided tour of the corporate policy manual to a voice that gives employees advice. Compliance officers need to consider the full implications of that, especially since generative AI still seems to have the bad habit of giving wrong advice.
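To make that risk concrete, here is a minimal sketch of one possible guardrail, written in Python with hypothetical policy excerpts and function names (nothing here reflects any real hotline or code-of-conduct product). The idea is simply that the bot may only repeat verbatim policy text, and it refuses to answer, routing the question to a human, whenever a question does not clearly match a single policy section.

```python
# A minimal sketch (not a real product) of one guardrail for a Code of Conduct
# chatbot: it may only repeat verbatim policy text, and it refuses to answer
# whenever a question doesn't clearly match a single policy section.
# The policy excerpts below are hypothetical placeholders.

POLICY_SECTIONS = {
    "gift": "Employees may not accept gifts worth more than $100 from vendors.",
    "bereavement": "Employees receive up to five paid days of bereavement leave.",
}

def answer_code_question(question: str) -> str:
    """Return verbatim policy text for a clear match; otherwise route to a human."""
    q = question.lower()
    matches = [(kw, text) for kw, text in POLICY_SECTIONS.items() if kw in q]
    if len(matches) == 1:
        keyword, text = matches[0]
        return f'The Code of Conduct says, on the subject of "{keyword}": "{text}"'
    # Zero or ambiguous matches: never let the bot improvise a policy.
    return ("I can't answer that reliably. Your question has been forwarded "
            "to the compliance team, and a person will reply to you.")

print(answer_code_question("Can I accept a gift from a vendor?"))
print(answer_code_question("What is our policy on remote work?"))
```

A design like that trades helpfulness for accuracy, which is exactly the trade-off the Air Canada case suggests compliance officers should think through before turning a chatbot loose on employees.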
Along similar lines, I worry about AI chatbots creeping into the internal reporting hotline. I appreciate the desire to take internal reporting digital (fewer calls on a phone line, more submissions by website, text, or app), but from there it’s a short leap to the internal hotline evolving into a two-way communication tool. At that point, someone is bound to say, “Couldn’t we automate some of that communication with a chatbot?”
Sure you could, technically, but the legal risks still seem daunting. When Air Canada’s chatbot screwed up bereavement fares, that mistake cost the company perhaps a few thousand dollars in Moffatt’s refund and legal fees. If an AI-driven Code of Conduct or interactive ethics hotline gives bad advice, that could cost your company millions.
The Human Point in Compliance
What compliance officers really need to do here is identify the “human point” — that point in a business process where AI technology ends and human oversight begins.
For example, right now the human point for internal reporting is very near the start of the process: someone files a report on the hotline, and a compliance officer sees it right away. You decide on a response, and perhaps even write a reply, immediately. Technology makes no decisions for you about what advice the employee hears, other than perhaps an automated “We’ll get back to you soon, sit tight.” But what if AI allows you to push the human point later into the process? Are you comfortable with that?
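By way of illustration, here is a minimal sketch, again in Python and again using hypothetical names and helpers rather than any real hotline system’s API, of what keeping the human point early might look like: an AI may draft a reply to a hotline report, but nothing reaches the reporter until a compliance officer signs off.

```python
# A minimal sketch (hypothetical names and helpers, not a real hotline API) of
# keeping the "human point" early: an AI may draft a reply to a hotline report,
# but nothing reaches the reporter until a compliance officer approves it.

from dataclasses import dataclass

@dataclass
class HotlineReport:
    report_id: str
    text: str
    draft_reply: str | None = None
    approved_by: str | None = None

def propose_reply(report: HotlineReport) -> None:
    # An AI model could suggest wording here; the draft is advisory only.
    report.draft_reply = "Thank you for your report. We are reviewing it and will follow up."

def send_reply(report: HotlineReport) -> str:
    # The human point: no reply leaves the system without a named approver.
    if report.approved_by is None:
        raise PermissionError("Reply blocked: awaiting compliance officer approval.")
    return f"Sent to reporter {report.report_id}: {report.draft_reply}"

report = HotlineReport("2024-017", "Possible expense fraud in the sales team.")
propose_reply(report)
report.approved_by = "J. Doe, Compliance"   # the human decision point
print(send_reply(report))
```

Moving the human point later would mean deleting that approval check, which is precisely the decision I’d want a compliance officer to make consciously rather than drift into.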
The human point in a Code of Conduct is farther off, because the process is usually static: the employee wants to read a file and find an answer, and he or she doesn’t necessarily need human counsel to do that. But if AI changes the nature of that process, so that the Code of Conduct becomes an interactive and intelligent thing, where should the human point be? I’m not sure.
I do believe AI can help companies, compliance officers, and mankind as we march into the future. The cautionary tale from Air Canada, however, is a reminder that we’re better off going slowly but surely rather than rushing ahead.