AI Risks Keep Racing Ahead

Today I want to talk about the advantages artificial intelligence can offer corporate compliance programs, and the perils it can pose to them. After all, everyone else seems to be talking about the potential of AI these days, including the AI programs themselves. 

You may have seen that the latest AI darling is ChatGPT, a natural language chatbot released by the company OpenAI last week. ChatGPT was trained on vast amounts of written text already on the web, learning from that material so that it can answer questions from humans — including requests that ChatGPT compose poems, essays, song lyrics, or other written materials. 

Like, say, a defense of corporate compliance programs. 

This came to my attention thanks to Kirsten Liston, CEO of the ReThink Compliance consulting firm in Colorado. Liston asked ChatGPT to write a LinkedIn post about why building a culture of compliance matters at a large global company. Within seconds, ChatGPT gave her the following response, which Liston promptly posted on LinkedIn.

[Image: ChatGPT’s answer, as Liston shared it on LinkedIn]

Great answer, right? It’s clear, concise, grammatically flawless, and logically sound. More to the point, this is an answer compliance professionals would actually give — say, during a board presentation, employee training, or a webinar. I’ve heard people make pretty much the exact same argument, using the same language, countless times. Heck, I’ve probably given that answer myself.

Compliance officers shouldn’t look at what ChatGPT can do and fear that it will put them out of a job. Without human direction to get it started, ChatGPT can’t do anything. It is a tool to help humans pursue their objectives more efficiently, such as when you need to draft a new policy or want advice on how to structure a new procedure. 

That’s the true power of ChatGPT: it can churn out such high-quality material, so quickly, from such brief and simple instructions. I marveled at that power for a few moments. Then a question occurred to me.

What would ChatGPT say if I asked it to oppose corporate compliance programs?

‘Companies Should Not Disclose’

I went to the OpenAI website, which allows anyone to create a free account and start talking with ChatGPT. The process took less than two minutes, and then I asked ChatGPT to do me a favor: “Write a short essay on why companies should not disclose violations of the Foreign Corrupt Practices Act.” 
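
(An aside for the technically inclined: you don’t need the website at all. Below is a minimal sketch of the same request made through OpenAI’s Python SDK. Everything in it, from the package to the model name, is an assumption about your setup, not a description of how I ran the experiment.)

```python
# A sketch of the same experiment via OpenAI's Python SDK.
# Assumes `pip install openai` and an OPENAI_API_KEY environment variable;
# the model name below is illustrative, since available models change.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{
        "role": "user",
        "content": (
            "Write a short essay on why companies should not disclose "
            "violations of the Foreign Corrupt Practices Act."
        ),
    }],
)
print(response.choices[0].message.content)
```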

In less than 10 seconds, the following answer flowed across my computer screen.

[Image: ChatGPT’s essay on why companies should not disclose FCPA violations]

Let’s not kid ourselves, compliance and ethics enthusiasts: that is a great answer too. Like the answer Liston received, it’s an argument I’ve heard humans make many times before. And if you put this anti-disclosure argument in front of the legions of people who don’t eat, breathe, and sleep corporate compliance like we do, the vast majority would probably read it, say, “OK, that sounds logical and persuasive to me,” and keep quiet.

This is the real peril that AI poses right now, both for compliance and audit teams specifically and for corporations generally. AI’s potential for misuse is running away from our collective discussion about what guardrails society should place around it.

Go back to ChatGPT’s essay on why companies shouldn’t disclose FCPA violations. Its answer is wrong simply because it started from a flawed premise. We need to assure that people use AI starting from correct premises, objectives, and moral assumptions. Otherwise the rest of us will be overwhelmed by a flood of pseudo-arguments — arguments that are persuasive, compelling, clearly understood, and also just plain wrong. 

Good luck guiding employee behavior in that world. 

Other Issues in AI

Compliance and audit professionals have other, more pedestrian issues to consider with artificial intelligence, too. Most of those issues revolve around how the AI your company uses might interact with humans. 

For example, say your company uses an AI tool to screen potential job applicants or to make decisions about credit extended to customers. How do you assure that the AI doesn’t botch that decision? After all, pulling incorrect data about someone isn’t a far-fetched scenario, and that could lead to painful consequences for the candidate: a job not given, credit not extended, or a higher interest rate imposed. 

In traditional human interactions, those mistakes are much more likely to be identified and resolved quickly. (“I’m sorry, Mr. Candidate, but that felony conviction from a few years back means we’re denying the loan.” “What? What felony? That’s a mistake.”) How can we build similar failsafe procedures into AI-driven processes, where the candidate might never know why an adverse outcome happens? How do you tell an automated decision that you want to speak to the algorithm’s boss?
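
What might such a failsafe look like in practice? Here is one hypothetical sketch, and only that: every automated decision carries the exact inputs it relied on and a plain-language reason, and any adverse or low-confidence outcome waits in a queue for a human reviewer. All names and thresholds below are illustrative.

```python
# Hypothetical failsafe for an automated decision pipeline: adverse or
# low-confidence outcomes are held for human review, and every decision
# keeps the inputs it relied on so an applicant can contest bad data.
from dataclasses import dataclass


@dataclass
class Decision:
    applicant_id: str
    outcome: str        # "approve" or "deny"
    confidence: float   # model confidence, 0.0 to 1.0
    inputs_used: dict   # the exact data the model saw, kept for appeals
    reason: str         # plain-language explanation owed to the applicant


human_review_queue: list[Decision] = []


def finalize(decision: Decision, confidence_floor: float = 0.90) -> Decision:
    """Queue adverse or shaky decisions for a person before they take effect."""
    if decision.outcome == "deny" or decision.confidence < confidence_floor:
        human_review_queue.append(decision)
    return decision


# The disputed-felony scenario above: the denial does not stand alone,
# because the record that drove it is preserved and a human sees it.
finalize(Decision(
    applicant_id="A-1001",
    outcome="deny",
    confidence=0.97,
    inputs_used={"felony_conviction": True},  # possibly erroneous vendor data
    reason="Criminal record reported by a third-party data vendor",
))
```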

We’re really talking about a few things here. One is process integrity: the assurance that your business processes that incorporate AI are using valid data and sound software code, so that they reach acceptable decisions. How is an audit team going to assess that? How is a compliance team going to assure that the outcomes aren’t somehow flawed anyway, such as credit decisions inadvertently discriminating against minority applicants? 
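
One concrete screen a compliance team could borrow comes from employment law: the “four-fifths rule,” which flags any group whose favorable-outcome rate falls below 80 percent of the best-performing group’s rate. A minimal sketch, assuming your decision logs fit in a pandas DataFrame and using made-up column names:

```python
# Outcome check modeled on the "four-fifths rule": flag any group whose
# approval rate is below 80% of the highest group's rate. The data and
# column names are illustrative.
import pandas as pd

decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B", "B"],
    "approved": [1,   1,   0,   1,   0,   0,   0],
})

rates = decisions.groupby("group")["approved"].mean()   # approval rate per group
ratios = rates / rates.max()                            # relative to best group

print(ratios[ratios < 0.8])  # groups whose outcomes warrant a closer audit
```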

A related issue will be data integrity. How do you know that the data feeding your AI algorithm is complete, accurate, and unbiased — especially if you obtain that data from a third party? Have you studied that vendor’s controls over its data? Have you evaluated the assumptions it makes about the data? Or if you let an AI algorithm crawl the internet for material, how do you assure it doesn’t ingest garbage from someone else? (See ChatGPT, above, helping people to produce high-quality garbage by the terabyte.)
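
Whatever assurances a vendor gives, at minimum you can run your own integrity checks on each batch before it reaches the model. A minimal sketch, with hypothetical field names and ranges:

```python
# Illustrative data-integrity gate for a third-party feed: hold any batch
# with missing keys, duplicates, or out-of-range values before it reaches
# the model. Field names and valid ranges are hypothetical.
import pandas as pd


def validate_feed(df: pd.DataFrame) -> list[str]:
    problems = []
    if df["applicant_id"].isna().any():
        problems.append("missing applicant IDs")
    if df["applicant_id"].duplicated().any():
        problems.append("duplicate records")
    if not df["credit_score"].between(300, 850).all():
        problems.append("credit scores outside the valid range")
    return problems  # an empty list means the batch may proceed


batch = pd.DataFrame({
    "applicant_id": ["A-1", "A-2", "A-2"],
    "credit_score": [712, 1200, 655],
})
for issue in validate_feed(batch):
    print("Hold the batch:", issue)
```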

Quite simply, there are a huge number of considerations we need to think through before rushing off to use AI for hiring decisions, credit decisions, legal advice, answering employee hotlines, and plenty of other tasks. The Biden Administration is at least aware of such issues; last fall it released a Blueprint for an AI Bill of Rights, but that blueprint is still a collection of good ideas more than the practical guidance corporations need. 

Meanwhile, the AI chatbots keep racing ahead.  
