Internal Audit in Tumultuous Times
Good news for internal auditors who feel overwhelmed, under-resourced, and frustrated that not enough people at your organization value your opinion: you’re not alone. GRC software vendor AuditBoard published a report last week, based on a survey of more than 200 internal audit leaders around the world, and apparently those feelings of existential dislocation are par for the course.
Technically, AuditBoard’s report was meant to capture the state of the internal audit profession as we head into 2026. That picture is good to know, since most internal audit teams right now are setting their budgets and audit priorities for the coming year. The report touches on issues such as how audit teams are using artificial intelligence, how those teams are perceived by the rest of the business, and how much the internal audit profession will or won’t be transformed (mostly by AI) by the end of the decade.
The three big findings are as follows:
- Most internal auditors still feel trapped in their roles. Fifty-four percent of respondents said they aspire to be recognized by the rest of the business as trusted advisers on risk, but a majority also feel “confined by a compliance-driven identity.” Translation: you’re so pinned down in the drudgery of SOX compliance and financial reporting audits, you never have time for the fun stuff, like digging into IT or procurement risks.
- Lots of teams aren’t yet sure how to handle AI. Only 28 percent of respondents were confident that they could audit AI risks effectively. Sixty-three percent said their organizations haven’t adopted a formal risk appetite or governance framework for AI, so internal auditors don’t have any structure for audit planning or risk monitoring. And most telling to me: only 39 percent expect AI to have a “transformative impact” on how they work by 2030.
- Budgets and staffing levels are largely staying flat, but with so many more risks that internal audit needs to address somehow (tariffs, workforce shifts, AI, sanctions, cybersecurity), even a flat budget still leaves you with too few resources and too much work.
So, yeah. Existential angst, AI risks running rings around you, and budgets tight as a drum. Good times.
Internal Audit’s Many Crises
AuditBoard describes the environment facing internal audit teams as one of “hypervolatility,” where “disruptions are not only constant; they are accelerating, interdependent, and self-reinforcing.” For example, your company might be creating new workforce risks as it lays off people under the premise that it can replace them with AI. Except that, thanks to tight budgets, you can’t easily assess the new risks that AI introduces to the company. Including the strategic risk that maybe swapping well-known human risks for unclear AI risks won’t be worth the cost.

[Chart from the AuditBoard report. Source: AuditBoard]
I don’t dispute AuditBoard’s diagnosis of the problem. Internal audit teams do have too many overlapping risks they’re supposed to address, each one affecting the others and all of them rapidly evolving thanks to (a) the breakneck evolution of technology; and (b) an erratic regulatory environment pulling organizations in multiple directions at once. I even applaud AuditBoard for coining the phrase “hypervolatility,” which is pithy, clever, and accurate all at once.
So what are internal audit teams supposed to do to address this predicament? How do you find a modus audiendi in such a chaotic world?
AuditBoard says internal auditors will need to “harness AI as a capacity multiplier,” which isn’t wrong, certainly; but it’s also exactly what you’d expect to hear from a vendor that sells AI-powered software to internal audit teams.
I’m more curious about the question that precedes AuditBoard’s recommendation: the capacity to do what, exactly?
For example, I’d say that what worries management teams most is disruption, above all disruption of the business processes they use to maintain operations and make money. So should you strengthen your capacity to prevent disruption, or your capacity to keep providing services even amid disruption?
If you say it’s the former, that sends you down one path of risk monitoring and access controls and zero-trust cybersecurity and stuff like that. If you say it’s the latter, you go down a different path of data backups and redundant systems and tests of business continuity plans.
AI will be able to help you go down either path (and in truth, I understand that internal audit teams need to travel both paths to at least some extent). But you still need to tie your AI adoption, your capacity building, to the big strategic and operational risks that senior management and the board understand.
That’s how you move from the “compliance-driven identity” we all hate to the “trusted adviser” role we all want. That’s how you demonstrate that you know how to add value; and that, in turn, gets you out of the budget doldrums.
About AI Confusion
So, back to the finding that only 28 percent of respondents were confident they could audit AI risks effectively, and that only 39 percent believe AI will have a transformative effect on internal audit in the next five years. I’m not surprised by either of those numbers. Consider that…
- The macro-economics of AI are unproven.
- The regulatory framework for AI is undeveloped.
- The scalability of AI from nifty pilot project to enterprise-level efficiency remains unclear.
Those are some pretty significant un’s. Why would you rush into a technology transformation with so much uncertainty? Better to try pilot projects, move incrementally, watch the larger macro-economic fundamentals of AI (which stink to high heaven, in my observation), and focus on governance frameworks for the rest of the enterprise.
Indeed, the AuditBoard report says, “Technology itself has become a source of volatility, as rapid advances in AI continuously reshape business models, controls, and risk exposures faster than governance frameworks can adapt.” That’s wholly right.
The flaw in our thinking, however, is to view AI as some sort of arms race between internal audit monitoring risk and the rest of the business causing it: the idea that you need to adopt AI now to keep up, because the rest of the business is adopting AI first and pulling away from the risk controls you already have.
Does it really have to be that way? Lord help us. Strong senior management teams, guided by a thoughtful board, could build AI governance structures for the whole enterprise, so that no part of the organization embraces AI so recklessly that it slips loose of risk management controls. That’s the wiser way forward.
