Is AI Leaving Your Internal Controls Behind?
So everyone is freaking out about artificial intelligence and its rapid deployment throughout the corporate enterprise. That brings up an important question: are companies updating their internal controls fast enough to keep pace with that AI adoption?
This is on my mind because the other week Deloitte released the results of a survey that suggests no, we’re not.
The findings come from a webinar Deloitte hosted over the summer, attended by more than 2,400 corporate controllers and other mid-level to senior corporate finance types. Deloitte asked those attendees how often they review and adjust their internal controls, and the responses were… inconclusive, to say the least.
The good news is that 36 percent of attendees revisit their internal controls at least quarterly; 8.3 percent of the group review and adjust their controls every week. On the other hand, 22 percent said they review internal controls annually, and 32 percent — the largest single group of respondents — said they only review internal controls “as needed.” That could be anything from a long-planned upgrade of financial systems to the sudden realization that oh crap, employees are using some new tech we weren’t prepared to handle, so let’s confirm that our internal controls still work.
Artificial intelligence is definitely in the Oh Crap camp. Indeed, survey respondents listed AI as both the biggest risk to their internal control environment in the next 12 months (cited by 44.7 percent) and the biggest opportunity (cited by 35.9 percent).
I don’t doubt those numbers are accurate, but remember that they do not measure the same thing.
When people say AI is a risk to the internal control environment, that depends on how others are using AI — and whether those new uses circumvent your internal controls. When people say AI is an opportunity for internal control, however, that depends on how you use AI to strengthen internal control.
So, really, there’s an arms race here. On one side are the business units in your enterprise, trying to use AI in new ways. On the other side is you, the internal control or compliance team, trying to assure that those new AI uses don’t trigger new instances of compliance, fraud, or operational risk.
A Governance and Controls Gap
Back to that Deloitte survey. First it found that a majority of companies revise and update their internal controls only sporadically, which is alarming enough. Then Deloitte asked survey participants whether their organizations have some sort of chief controls officer: a person focused on internal controls management, transformation, and governance.
Fifty-two percent of respondents said they did not.
[Chart: survey responses. Source: Deloitte]
This is all sorts of not good. Artificial intelligence is a transformational technology that soon enough will sweep through the entire corporate enterprise. Management teams can't allow that transformation to happen without oversight — and yet, at least according to this Deloitte survey, a majority of companies haven't filled a role critical to keeping that AI-driven transformation headed in at least a thoughtful direction, even if not a perfect one.
Indeed, that 52 percent figure surprised me so much that at first I wondered whether people correctly understood the question. Like, how are these organizations approaching Sarbanes-Oxley compliance right now, if nobody is managing internal controls? Shouldn’t someone be doing that already?
But even as that thought raced through my brain, I could hear the internal audit crowd’s response. Hold on, they’d all say, our job is to assess the state of internal control! That means we test controls, identify deficient ones, and recommend improvements to assure compliance with risk tolerance standards. Re-engineering internal controls to keep pace with how the company embraces AI is way outside our comfort zone, and frankly above our pay grade.
Those internal audit people living in my brain have a point. For the last 20 years, far too many of them have been burdened with the tedium of SOX compliance. Now AI barges onto the scene, sure to transform the corporate enterprise in all sorts of ways. It requires a different type of oversight and assurance than what internal audit normally provides.
So upon further reflection, maybe I shouldn’t be surprised that 52 percent of those Deloitte respondents don’t have someone in that role. But their absence is still all sorts of not good.
Building an AI Oversight Structure
The question facing companies today is how to build the right oversight structure as different parts of their enterprise find use cases for AI. Who sets down policy and procedure for how those AI-driven experiments happen? And as those use cases come into focus, who assures that your internal control system responds accordingly?
Maybe we could look to the finance team for leadership on this journey, since the finance team has undergone plenty of IT transformations over the years. But this transformation isn’t like most that came before. It will be far more complex than, say, migrating from SAP to Oracle, or embracing automated invoice acceptance and payment processing.
The real challenge here will be understanding how AI changes your workflow procedures and your legal, compliance, and security risks. That’s beyond the expertise of most finance teams. Honestly, it’s probably beyond the expertise of most technology and compliance teams right now, too. Companies will need to bring together a group of people from across the enterprise to figure this out.
The chief controls officer cited by Deloitte will be one indispensable person within that group, but still only one.
The good news is that, on a conceptual level, much of the work in front of us has already been mapped out fairly clearly. For example, earlier this summer I heard a great discussion of how to apply GRC frameworks to artificial intelligence. The speakers identified three questions the framework should help you answer:
- What is legally required when using the technology;
- What meets the requirements of your Code of Conduct;
- What addresses the risks of new technology as the tech is introduced into your business environment.
Those points are entirely correct. And the issue of revisiting your internal controls to assure they’re keeping pace with AI (you know, the question that started this post in the first place) is captured in that final point, about addressing the risks of new technology as it enters your business environment.
As usual, it’s all about having the right people, processes, and technology in place. I just hope management, compliance, and internal audit teams can assemble that lineup before the rest of the enterprise does.