Thoughts on AI From the Audit Perspective

The other week I published a post about the risk management challenges corporations will face as they integrate artificial intelligence into business operations. Several days later, my friend the Cybersecurity Auditor called me. “Dude,” he said, “I have many issues with AI and I think we’re missing another important point here.”

OK, I replied, and pulled out a notebook and pen. What else should the compliance and risk world be considering as we grapple with AI?

My friend the Cybersecurity Auditor then delivered a 10-minute soliloquy about the perils of artificial intelligence from an internal control perspective — the nitty-gritty stuff about access control, change management, and audits of AI applications delivered by Software-as-a-Service providers. Essentially, he said, how can the human brains trust that the silicon brains are doing what they claim?

My friend is not wrong. On the contrary, he’s raising very valid questions about how businesses should put guardrails around AI before the technology races ahead of their ability to govern it and causes all sorts of problems.

For example, consider IT general controls that might apply to artificial intelligence. “ITGCs” are the controls that govern how IT is acquired, developed, managed, and maintained in your organization. The ability to create new user accounts, the ability to change what a user is permitted to do, the processes to implement security patches and software upgrades: those are all ITGCs.
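To make that concrete, here is a minimal sketch of how an auditor might test one such control, assuming hypothetical CSV exports of a provisioning log and an approval queue. The file and column names are mine, purely for illustration:

```python
import pandas as pd

# Hypothetical exports: a provisioning log of new accounts and a list of
# approved access-request tickets. File and column names are assumptions.
accounts = pd.read_csv("new_user_accounts.csv")   # columns: user_id, created_on, ticket_id
approvals = pd.read_csv("approved_tickets.csv")   # columns: ticket_id, approver, approved_on

# The ITGC test: every new account should trace back to an approved ticket.
merged = accounts.merge(approvals, on="ticket_id", how="left", indicator=True)
exceptions = merged[merged["_merge"] == "left_only"]

print(f"{len(exceptions)} account(s) created without an approved ticket")
print(exceptions[["user_id", "created_on", "ticket_id"]])
```

Simple enough for human-managed systems. The question my friend is raising is what the equivalent test looks like when an AI application, not a person, is provisioning the access.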

So how, my friend the Cybersecurity Auditor asked, would ITGCs work in an artificial intelligence world? He accepted that business units in the First Line of Defense, and even risk oversight functions in the Second Line of Defense, might both use AI-driven applications to do their jobs. But how would the internal audit function inspect those applications, and their effect on business processes? 

The Slippery Challenge of Auditing AI

I know, I know; some of you will say that an auditor can test the access controls for these applications; or look at change management logs from the software development team; or conduct other tests to see whether the results from an AI-driven application match historical norms. That’s the whole purpose of an IT audit function, you’ll say.
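That “match historical norms” test, for instance, might look something like the sketch below. The figures and the three-standard-deviation threshold are assumptions I’ve chosen for illustration, not a prescribed method:

```python
import pandas as pd

# Hypothetical data: this month's total from the AI-driven application,
# alongside twelve months of historical values from the legacy process.
history = pd.Series([102.4, 98.7, 101.1, 99.5, 100.8, 97.9,
                     103.2, 100.1, 99.0, 101.6, 98.8, 100.3])
current = 112.9  # this month's figure from the AI application

# Flag the result if it falls more than three standard deviations from
# the historical mean; a crude but common reasonableness test.
mean, std = history.mean(), history.std()
z = (current - mean) / std
if abs(z) > 3:
    print(f"Investigate: result {current} is {z:.1f} std devs from the norm")
else:
    print(f"Within historical norms (z = {z:.1f})")
```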

That might be true in theory, but I can see lots of obstacles arising in practice. First, your business might not have an IT audit function. Second, the IT auditors you do use might not have the expertise to assess AI issues. (“There is a skills gap here, for sure,” my friend said.) 

Third, even if you do have skilled IT auditors on staff, lots of artificial intelligence today comes from SaaS providers outside your organization. So how are you going to audit their ITGCs for the AI they’re developing and then unleashing on your business? 

My friend gave the example of data analytics delivered by SaaS providers. “There’s a set of out-of-the-box functionality that exists in a lot of these tools today for ERP systems,” he said. “I’ve never known an internal auditor to stop for a second and think: How do we know that code is good? How do we know that code is doing only what it’s supposed to be doing?”

I hate to be the one who sets off a fart in church, but my friend is correct. Right now, far too often, businesses don’t know that the code they’re using will only do what it purports to do — and the assurance needs involved here will only get more challenging as the code, and the artificial intelligence, grow ever more complex.
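Even the most basic assurance here, confirming that the code you received is the code the vendor published, is worth spelling out. Below is a minimal checksum sketch; the file name and expected hash are placeholders. Note that this proves integrity only, not behavior, which is exactly my friend’s point about how thin our current assurance really is:

```python
import hashlib

# Compare the vendor package on disk against the checksum the vendor
# published. File name and hash are placeholders for illustration.
EXPECTED_SHA256 = "0000000000000000000000000000000000000000000000000000000000000000"

with open("vendor_analytics_tool.whl", "rb") as f:
    actual = hashlib.sha256(f.read()).hexdigest()

if actual != EXPECTED_SHA256:
    raise SystemExit("Checksum mismatch: do not deploy this package")
print("Checksum verified")
```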

For example, say your finance team uses an AI-powered application to perform transaction monitoring and analysis, and an external audit firm wants to audit that process. 

  • What would the audit firm audit, exactly? The code itself? The ITGCs around development of the code? Both? 
  • Is the code proprietary? Do you own it, or is it rented from a SaaS provider?
  • Does the audit firm even know how to audit AI code? (“Nope,” my friend says with full confidence. I believe him.)

That same basic challenge exists across all the ways a large enterprise might use AI, and holds true whether the audit comes from an external provider or your internal audit team. 

Now let’s get to the really unsettling stuff.

Bad Code, Bad Data, Bad Judgment

My friend the Cybersecurity Auditor has worked both as an internal auditor and a Big 4 external auditor — so as one might expect, he had several beefs with how the audit firms perform their audits these days.

First, he said, as audit firms move ever closer to embracing data analytics and artificial intelligence themselves, people should understand what those firms are really doing. They are using tools that inject code into your ERP system, to extract the data the audit firm needs. (My friend named specific audit firms and the internally developed tools they use, but I won’t name names here.)

Well, my friend said, is anyone inspecting those tools? Because if a tool can extract data from your ERP system, it only takes a few modifications to that code for the tool to change data, or to extract more data than it should. And yet, “I’ve never heard that question asked in my entire career,” he said.

Moreover, how do we know that those tools truly are extracting complete and accurate data for analysis? My friend the auditor is deeply skeptical that the audit tools are doing that. He has seen different firms’ tools crunch the same data from the same ERP system, and get substantially different results. That shouldn’t happen. 
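One way to pressure-test that claim would be a simple reconciliation between the ERP’s own export and the tool’s extract, tying out record counts and control totals. A hedged sketch, with table and column names invented for illustration:

```python
import pandas as pd

# Hypothetical reconciliation between the ERP's own ledger export and the
# data set the audit firm's tool extracted. Names are illustrative; the
# idea is to tie out record counts and control totals between the two.
source = pd.read_csv("erp_gl_detail.csv")        # pulled directly from the ERP
extract = pd.read_csv("audit_tool_extract.csv")  # produced by the vendor tool

checks = {
    "row count":    (len(source), len(extract)),
    "debit total":  (source["debit"].sum(), extract["debit"].sum()),
    "credit total": (source["credit"].sum(), extract["credit"].sum()),
}

for name, (src, ext) in checks.items():
    status = "OK" if abs(src - ext) < 1e-6 else "MISMATCH"
    print(f"{name}: source={src} extract={ext} [{status}]")
```

If two firms’ tools can run against the same system and fail this kind of tie-out against each other, that alone should prompt questions.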

Or say the audit firm changes the code in its tool, because it wants to hit a new table in your ERP system. That might be a good idea, but consider the implications. “How do you justify that the result is complete and accurate, if they’re magically finding new [stuff] to hit with their tool?” my friend asked. Again, he’s not wrong.

I understand that ethics and compliance officers might wonder what all this has to do with your world of third-party due diligence, communications surveillance, travel & entertainment expenses, billing code compliance, and the like. It matters because my friend’s fundamental point is that we don’t yet have sufficient audit skills and procedures to govern sophisticated technology, even as corporations rush headlong into using SaaS-based tools, data analytics, and artificial intelligence for every business process we can find. (Hence COSO published guidance for risk management related to artificial intelligence last month.)

My friend talks in examples of financial and security audits because that’s what he knows, but his point applies across the whole enterprise. It’s an issue every business and board needs to ponder as we move deeper into the AI world.

Perhaps Siri and Alexa have some suggestions.
