Some Compliance Principles for ChatGPT

The other week I moderated a webinar on ChatGPT’s implications for corporate compliance programs. ChatGPT itself was not invited to participate, but no worries: we humans on the webinar had plenty to say about its compliance risks all by ourselves. 

Those worries are well-founded. ChatGPT and its ilk have taken the technology world by storm and fascinated the executive mind — but they’re still prone to error, can’t work at scale, and operate in ways we don’t fully comprehend. They promise to transform business operations, but that transformation is going to be fraught with questions about what’s moral, what’s legal, and what either of those things has to do with employees working in the real world. 

Those are exactly the questions corporate compliance officers face every day. So of course you’re going to get pulled into this ChatGPT maelstrom.

Anyway, back to the webinar. Several major themes emerged that compliance officers would do well to contemplate.

‘First, Do No Harm’

This suggestion came from panelist Bruce Weinstein — aka “The Ethics Guy,” and someone who has been talking a lot about artificial intelligence lately. A technology as powerful as ChatGPT, he said, can do tremendous things, both good and bad. Obviously companies want to reap only the good, but reckless use of ChatGPT might unleash the bad. 

So as companies try to harness ChatGPT’s power, Weinstein said, they need to start from the principle of doing no harm. That has to be the guardrail as the company develops use cases for ChatGPT and puts them into practice.

For example, say your company wants to use ChatGPT as a customer service chatbot; how would you ensure it doesn’t dispense bad information? Maybe employees want to use ChatGPT to write software code or Excel formulas; how would you test its output for security before deploying it? Perhaps managers want to use ChatGPT to draft template policies; how would you ensure those policies conform to the law and to your ethical priorities? 

The Do No Harm principle acts as a brake on such ideas until executives can answer those questions. It forces people to think about the risks of introducing ChatGPT into a business process and devise ways to keep those risks in check. Only then would you proceed with whatever ChatGPT ambitions you have.

The question for compliance officers, of course, is whether senior management sees things from that Do No Harm perspective. Some clearly do; JPMorgan and other Wall Street banks, for example, have restricted employees’ use of ChatGPT on the job until the banks fully understand the security risks around it. 

Other companies might not be so restrained — especially if they’re smaller, less heavily regulated, or led by executives who tend to put profit and dazzle above prudence and resilience. They might need some persuasion from the CCO, the board, or other sensible voices in favor of prudence. On the other hand, the more committed your company is to good ethical principles, the closer you’ll be to the prudence-and-resilience end of things from the start.

ChatGPT and ‘The Human Point’

Another insight came from panelist Nick Gallo, co-CEO of Ethico and sponsor of our ChatGPT webinar. He was talking about how people have used computer technology for decades to do things the human brain can’t do alone. For example, when accountants want to determine the price of stock options, they use Excel to run a complex calculation known as a Monte Carlo analysis. 
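(For the technically curious: a Monte Carlo analysis here just means simulating thousands of random price paths and averaging the resulting payoffs. A toy Python sketch of that idea, with made-up inputs rather than anything from the webinar, looks something like this.)

```python
# Toy sketch only: estimating a European call option's price with a Monte Carlo
# simulation, the kind of calculation often run in a spreadsheet. All inputs
# below are invented examples, not figures from the webinar.
import numpy as np

def monte_carlo_call_price(spot, strike, rate, vol, years, n_paths=100_000, seed=42):
    """Simulate terminal stock prices under geometric Brownian motion,
    then discount the average call payoff back to today."""
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(n_paths)
    terminal = spot * np.exp((rate - 0.5 * vol**2) * years + vol * np.sqrt(years) * z)
    payoffs = np.maximum(terminal - strike, 0.0)
    return np.exp(-rate * years) * payoffs.mean()

if __name__ == "__main__":
    # Hypothetical inputs: $100 stock, $105 strike, 3% rate, 25% volatility, 1 year
    print(round(monte_carlo_call_price(100, 105, 0.03, 0.25, 1.0), 2))
```

No human could run those hundred thousand simulated paths by hand; the computer does the grinding, and a person interprets the result.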

“Well,” Gallo asked, “where is the human in that process?” 

That is an excellent way to think about artificial intelligence. If companies want to use AI widely and freely in what they do, they’ll need to identify where AI’s involvement in a business process begins and ends, and where the human takes over. I call this “the human point.” 

For example, if you want ChatGPT to draft a template anti-retaliation policy, identifying the human point in that process is easy: ChatGPT drafts the document, and then you the compliance officer review it. If you ask ChatGPT to translate that policy into a foreign language, it can; but you’ll still need a human who speaks that language to review the translation for accuracy.

But say the company uses AI to review job applicants, or to issue credit ratings to consumers applying for a loan. Where should the human point be in those processes? Do you want AI to weed out all applicants except two finalists, or to contact customers directly with their credit ratings? Because that’s what no human point would look like — and I bet both scenarios are well outside your compliance officer comfort zone. So maybe you move the human point closer to the beginning of the process, where AI will do relatively little screening.
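To picture what that looks like in practice, here is a hypothetical sketch of a screening workflow with the human point set early: the AI only scores and ranks applications, and a named human reviewer records the decision on every single applicant. The names and the scoring function are invented for illustration; this is not any vendor’s product or API.

```python
# Purely illustrative: AI ranks the applicant pool, but a human decides on
# every applicant. ai_score is a stand-in for whatever model a vendor provides.
from dataclasses import dataclass

@dataclass
class Application:
    applicant_id: str
    resume_text: str

def ai_score(app: Application) -> float:
    """Stand-in for a model call returning a 0-to-1 relevance score."""
    return min(1.0, len(app.resume_text) / 1000)  # dummy heuristic, not a real model

def screen(applications, human_decision):
    """The AI's involvement ends at ranking; the 'human point' is human_decision,
    which is called for every applicant and whose answer is what gets recorded."""
    ranked = sorted(applications, key=ai_score, reverse=True)
    return [(app.applicant_id, human_decision(app)) for app in ranked]
```

Moving the human point later would mean letting the code reject applicants on its own score; moving it earlier, as sketched here, keeps every consequential decision in human hands.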

A related question is whether a company should disclose its human point. For example, should you have a disclaimer on your website that says, “We may use AI as part of our recruiting process”? Or what about a medical practice saying it uses AI to review imaging scans? Or a law firm using AI to offer legal advice? (ChatGPT has already passed the bar exam, after all.)

My point is simply that companies will need to think about roles and responsibilities for AI just as much as we think about them for humans.

So that’s yet another message that compliance officers will need to bring to management as our ChatGPT dreams come into view. Who will be among the group that defines those roles and responsibilities? How will that group keep the Do No Harm principle fixed in their minds? How will the company revisit those roles and responsibilities over time, as both AI technology and your business evolve? 

It’s going to be a process — and from what I heard on the webinar, probably a mighty long one. 
