Grappling With Artificial Intelligence

Later this week I’ll have the privilege to moderate a panel discussion on artificial intelligence at the Society of Corporate Compliance & Ethics’ 2021 conference — and as fate would have it, COSO published guidance last week on the risk management challenges around AI. So let’s dig into the subject, since clearly the universe is sending a signal that AI needs risk and compliance professionals’ attention.

We can begin with the COSO guidance. It runs 32 pages and follows the usual format we’ve seen in previous COSO guidance: introduce the subject, explain the basics of the COSO enterprise risk management framework, and then walk through how one can tailor those ERM principles to the specific issue in question. 

So if you want to see how COSO’s 20 principles for enterprise risk management can be applied to artificial intelligence projects or risks, this guidance is certainly worth a read. It offers good examples of how AI can go wrong, as well as the planning and oversight you should develop to assure that AI goes right.

I was most interested, however, in the general discussion of artificial intelligence at the beginning. COSO flagged five major risks to corporations using AI:

  • Bias and reliability breakdowns because of inappropriate or non-representative data; 
  • Inability to understand or explain AI model outputs; 
  • Cyber attackers trying to obtain data or otherwise manipulate the AI model; 
  • Inappropriate use of data; 
  • Social resistance to rapid application and transformation of AI technologies.

COSO also included an excellent chart that maps out how confident businesses are in their ability to manage various AI risks. See below. 

[Chart: how confident businesses are in their ability to manage various AI risks. Source: COSO]

Why is this chart so informative? Because it shows that the most pressing AI risks right now are primarily an internal audit challenge (cybersecurity and operational risk), while the ethics and compliance challenges (privacy, regulatory compliance, ethics issues) are not far behind.

The Tricky Stuff Is in Making Decisions

Tucked away on Page 5 of the guidance are a few lines about what AI is trying to achieve. They’re worth excerpting in full:

Many AI use cases implemented today are doing things humans can do, but doing them much faster and more efficiently. Over the next ten years, the emphasis will likely evolve to implementing AI to do things humans can’t do, because humans are unable to see the subtlety and nuances that AI can detect.

When you think about it, those two sentences explain the risk management concerns expressed in the chart above. 

That is, right now we’re adopting AI to make existing business processes and routine decisions happen faster. So the challenges are around the validity of the data we use and the models we build, and the cybersecurity of both. Internal audit teams can play a valuable role here, working closely with AI development teams to assure that the data and algorithms are sound.
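To make that internal audit role a bit more concrete, here is a minimal sketch, in Python, of the kind of check an audit team might ask model developers to run against a holdout data set. The function name, column names, and thresholds are all hypothetical, chosen purely for illustration; it assumes a scikit-learn-style model with a predict() method and a pandas DataFrame of labeled test data.

```python
# Illustrative only: a hypothetical check an internal audit team might request,
# not anything prescribed by the COSO guidance.
import pandas as pd
from sklearn.metrics import accuracy_score


def audit_model(model, holdout: pd.DataFrame, features, label_col, group_col,
                min_group_share=0.05, max_accuracy_gap=0.10):
    """Probe two of the risks COSO flags -- non-representative data and
    bias/reliability breakdowns -- using a holdout set the model never
    saw during training."""
    findings = []
    preds = model.predict(holdout[features])
    overall = accuracy_score(holdout[label_col], preds)

    group_accuracy = {}
    shares = holdout[group_col].value_counts(normalize=True)
    for group, share in shares.items():
        mask = (holdout[group_col] == group).to_numpy()
        group_accuracy[group] = accuracy_score(
            holdout[label_col].to_numpy()[mask], preds[mask])
        if share < min_group_share:
            findings.append(
                f"Group '{group}' is only {share:.1%} of the holdout data")

    gap = max(group_accuracy.values()) - min(group_accuracy.values())
    if gap > max_accuracy_gap:
        findings.append(f"Accuracy gap across groups is {gap:.1%}")

    return {"overall_accuracy": overall,
            "group_accuracy": group_accuracy,
            "findings": findings}
```

The specific thresholds don’t matter; the point is that questions like “is the data representative?” and “does the model perform equally well for everyone?” can be turned into repeatable tests that internal audit can review.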

The next 10 years, on the other hand — they’re going to be about AI making judgments that humans can’t make. That’s much more of an ethics and compliance challenge. Do we want to entrust AI with making decisions on behalf of the company or the public at large? Is that the right thing to do? Those are the questions businesses will need to answer.

For example, the COSO guidance explores how AI can analyze microscopic images more precisely, to decide which chemical compounds are the best candidates for drug development. So will a pharma company entrust its product development strategy to AI? Would that mean the data scientists assuring the soundness of the AI code are more important than pharma scientists? Would you need to demonstrate the safety of your AI algorithm during an FDA inspection? What do you tell investors in the Management Discussion & Analysis section of the 10-K? 

Or consider the ethical examples. My current favorite: Amazon won’t sell its facial recognition technology to police departments. It’s AI-driven technology that theoretically could be used for law enforcement, except society hasn’t yet fully defined how law enforcement should use a technology so powerful. Then again, that’s a very American view of law enforcement, AI, and privacy. China, with fundamentally different values, is going like gangbusters on facial recognition. How does a tech company’s board, beholden to multiple stakeholder groups, decide what to do?

Over the long run, those questions will be much more difficult to answer than concerns about data security or model validity. At least those technical concerns can be addressed in an artificial test environment, where you run the data and see what happens. 

I’m not sure you can do the same with ethical and strategic questions that arise from artificial intelligence. Like, what’s the test environment you use to let autonomous drones decide to kill a suspected terrorist; or to let banking AI cancel a line of credit? At some point those potential uses have to enter the real world — or they don’t, but I’m rather cynical about humanity’s ability to restrain itself.
