My ChatGPT Compliance Misadventure

Today I want to relay a dispatch from the front lines of artificial intelligence and corporate compliance. Namely, I’ve been trying to use ChatGPT for a compliance project, and ChatGPT has pretty much made a mess of things.

The project itself is straightforward. A client has asked me to find the texts of whistleblower laws in European Union countries, and then write short English-language summaries of each one. We knew that finding the texts would be easy, although not all of them would have companion English translations. Then inspiration struck — this work should be tailor-made for ChatGPT! 

I could copy the text in the original language, drop it into ChatGPT, and have ChatGPT translate the text into English. Then ChatGPT could write an 800-word summary of that English translation, and I could refine the copy for final production. Easy, right? 
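
(A side note for the technically inclined: if you wanted to script that workflow rather than paste text into the chat window by hand, the translate-then-summarize steps could be chained through the OpenAI API. The sketch below is purely illustrative; I did everything manually in the chat interface, and the model name, prompts, and helper functions here are my own assumptions, not anything the client project actually used.)

# Illustrative sketch only: the model name, prompts, and helper names below
# are assumptions for demonstration, not part of the actual project.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

def translate_to_english(law_text: str) -> str:
    # Ask the model for an English translation of a statute, keeping section numbers.
    response = client.chat.completions.create(
        model="gpt-4o",  # hypothetical model choice
        messages=[
            {"role": "system",
             "content": "Translate the following legal text into English, preserving its section numbering."},
            {"role": "user", "content": law_text},
        ],
    )
    return response.choices[0].message.content

def summarize_law(english_text: str) -> str:
    # Ask the model for a roughly 800-word summary of the translated law.
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system",
             "content": "Summarize this whistleblower law in about 800 words, citing its sections."},
            {"role": "user", "content": english_text},
        ],
    )
    return response.choices[0].message.content

original_law_text = "..."  # the statute text you copied, in its original language
summary = summarize_law(translate_to_english(original_law_text))

Even with automation like that, of course, every factual claim in the output still needs to be checked against the statute itself, which is exactly where the rest of this story goes.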

Alas, that clever plan derailed immediately, even using a text I found in English. What went wrong speaks volumes about the perils of ChatGPT generally, and its perils for corporate compliance officers especially. Here’s what happened.

I started with Latvia, specifically because I knew nothing about Latvia’s whistleblower law and wanted to see how ChatGPT could help me navigate new material. I found the text of Latvia’s whistleblower protection law almost immediately using Google, including an official English translation of the text. Read the document, and you can see that this law was adopted in early 2022 to conform with the EU Whistleblower Directive, and that it supersedes Latvia’s previous whistleblower law from 2018.

Then I opened ChatGPT and asked: “Can you give me an 800-word summary of Latvia’s whistleblower protection law?”

In less than a minute ChatGPT did exactly that — except, the very first sentence read, “Latvia’s whistleblower protection law, known as the Law on the Protection of Whistleblowers, was introduced in 2018.” 

Well, hold up. That’s not what the law is called, and that’s not when the law was adopted. I had the actual text open on my screen. Figure 1, below, shows the top of that page.

[Figure 1: Screenshot of the top of the official English translation of Latvia’s Whistleblowing Law. Source: Latvia]

Anyone can see that the law is simply called “Whistleblowing Law.” In the box on the right you can see that it was adopted in 2022. (I highlighted the relevant line in blue.) 

Oh boy. If ChatGPT was wrong on that, what else was it getting wrong? 

ChatGPT Becomes ChatBS

I then started asking ChatGPT specific questions about the Latvian whistleblower law. For example, does the law allow for anonymous reporting? ChatGPT told me this:

Yes, Latvia’s whistleblower protection law allows for anonymous reporting of violations of the law or ethical norms. Article 8 of the law states that whistleblowers may report violations either directly or through a representative, including anonymously. The law requires employers to establish safe reporting channels that allow for confidential and anonymous reporting of violations. However, the law also requires that whistleblowers provide enough information to enable an investigation of the reported violation.

None of that is in the actual text. First, the law is structured as a series of sections, not articles. Second, whistleblower protection is discussed in Sections 10 and 11, not Section 8. Third, those two sections only talk about protecting a whistleblower’s identity (you’re supposed to pseudonymize the whistleblower’s personal data immediately, and keep it confidential), not whether the report can be submitted anonymously. The word “anonymous” appears nowhere in the text. 

As far as I can tell, Latvia’s law talks extensively about protecting whistleblowers, but never actually says, “Thou shalt allow anonymous reports.” It certainly doesn’t say that whistleblowers must provide enough information to launch an investigation, which is an idea that’s downright nutty.

Then I asked ChatGPT about the potential fines for retaliating against a whistleblower. Its answer:

Employers who retaliate against whistleblowers can be subject to fines and other legal penalties. Specifically, Article 9 of the law sets out the consequences for retaliation against whistleblowers. Employers who retaliate against whistleblowers can face fines ranging from €290 to €4,000.

Again with the articles, and the wrong one at that! Section 9 of Latvia’s law talks about how unions and trade associations can be vehicles for anonymous reporting. Section 17 talks about retaliation penalties, and for a company those penalties max out at €2,800 rather than €4,000. 

Some people will note that ChatGPT has only indexed material through late 2021, so it wouldn’t know about Latvia’s 2022 law. That’s a fair point, but it doesn’t help ChatGPT. When I asked it to provide the URL of Latvia’s whistleblower law, it offered a URL that actually points to the country’s data privacy law. The word “whistleblower” is part of the URL, which apparently confused ChatGPT, but the title of the page clearly refers to personal data processing. And when I did find an English version of the 2018 law, ChatGPT’s mistakes were still the same: it referred to the wrong sections and gave me incorrect information.

You get the drift here. ChatGPT gave me a swift, well-written answer that was total baloney. It made stuff up. It gave me numerous material factual errors. This wasn’t specific to me, either; when I told my client about the challenges here, they replied, “Yeah, we tried the same thing with Italy’s law and had the same problems.” 

How is a compliance officer supposed to use something like that? 

A Tool Ready for Pre-Time, Not Prime Time

We can draw a few conclusions here. 

First, ChatGPT is perfectly fine for tasks where attention to detail doesn’t matter. For example, ChatGPT is great at drafting template policies for anti-discrimination, anti-retaliation, bring-your-own-device (BYOD) use, and so forth.

I’ve used it to draft such template policies myself, and I know other compliance officers who’ve done the same. We’ve all found the resulting product to be reasonably good. Of course a compliance professional still needs to give those policies a final review and perhaps fine-tune some of the language, but ChatGPT is still a lot faster and cheaper than asking a law firm to do that task for you. 

On the other hand, the more important the details are, the more closely you need to review what ChatGPT gives you — because, as evidenced by our Latvian misadventure above, ChatGPT just makes stuff up. 

This is going to raise some tricky cost-benefit analysis questions for compliance officers. For example, if it’s critical that the document you’re drafting has every precise detail correct, you can:

  1. Have a junior compliance or legal staffer draft it themselves, which will take more time up front but is more likely to be correct, saving you time when you review it; or
  2. Have ChatGPT draft the document, which will take less time, but then you spend more of your (expensive) time reviewing every detail. 

We’ve touched on this concept in previous posts about ChatGPT: you need to identify “the human point” in the business process, where artificial intelligence ends and human intelligence begins.

Right now, ChatGPT’s material isn’t trustworthy. So if the human point is at the end of the business process, that might actually cost you more time and money than if you moved the human point much closer to the beginning. 

And these questions all arose while I was handling one text in my native language. Now imagine handling multiple texts, in languages that you don’t necessarily understand. Imagine juggling texts that might apply in certain jurisdictions but not others. (I was only researching how EU member states are implementing the EU Whistleblower Directive. Try doing this exercise for, say, privacy laws around the world. Shudder.) 

The plain truth is that ChatGPT is enormously helpful for tasks where bland and boring will suffice — and as much as people might like to stereotype compliance programs with just those two adjectives, the reality is anything but. Tread carefully. 
