AI vs. the Three Lines Model
Anyone who enjoys pondering the future of the internal audit and compliance professions may want to take note of a debate that erupted online last week about the Three Lines of Defense. It’s a fascinating discussion about how artificial intelligence might affect the Three Lines model, including whether AI might render the whole idea obsolete.
The instigator of this debate is a man named Tom McLeod, a long-time auditor in Australia whom I’ve had the good fortune to interview several times over the years. McLeod posted a witty mock obituary of the Three Lines model, announcing that it passed away in a Manhattan boardroom last week due to “algorithmic obsolescence” — meaning, AI.

His obituary went on to say that a Fortune 100 conglomerate “quietly removed the framework from its risk manual, substituting the phrase ‘continuous autonomous assurance.’ Within hours, several peers followed suit.” (I don’t know whether that claim is true, and I’m not sure I want to know; but it’s entirely believable.)
Then came McLeod’s kicker: “No constituency feels the loss more than Internal Audit, custodian of the now-vanished third line. Independence, once defined by distance, must be re-imagined when ownership, oversight, and assurance reside in the same chatbot.”
His post promptly racked up hundreds of responses, so clearly it touched a nerve. Some folks say he’s right and that the Three Lines model is circling the drain, which would be terrible; others say he’s right and that the demise of the Three Lines model is long overdue anyway; still others say he’s wrong and the Three Lines will endure.
So is McLeod right that AI will render the Three Lines model obsolete? If so, exactly how will AI undo the Three Lines model? And if that happens (to be clear, I’m not convinced it will), how should internal audit and compliance professionals think about the implications for their future career paths?
How AI Might Hit the Three Lines Model
To understand McLeod’s thesis, let’s first remember how the Three Lines model is supposed to work:
- Operating unit leaders in the First Line are responsible for assuring that risk controls are implemented and followed properly;
- Risk assurance leaders in the Second Line (compliance, IT security, legal, HR, accounting) are responsible for deciding what those controls should be, in consultation with senior management and the First Line;
- Internal audit in the Third Line is responsible for testing the controls, assessing whether they’re designed and functioning properly, and watching for any new risks that might need attention from the First and Second lines.
In theory, AI could fulfill all of those tasks. So what would compliance and audit leaders then do in that world?
For example, there are GRC vendors today who promise that their AI tools can prowl the internet for new regulations, assess how those regulations affect your business, find the specific policies or controls you have that are no longer in alignment with the new regulation, and then recommend new language or control changes to bring your business back into compliance.
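To make that vendor pitch concrete, the workflow can be sketched as a toy pipeline. Everything below is hypothetical: the function names, the `Regulation` and `ControlGap` types, and the crude keyword "relevance check" are stand-ins for what a real GRC tool would do with language models and a live regulatory feed, not any actual vendor's API.

```python
# Hypothetical sketch of the AI-driven regulatory-change pipeline described
# above. All names are illustrative; no real vendor product is implied.
from dataclasses import dataclass

@dataclass
class Regulation:
    citation: str
    summary: str

@dataclass
class ControlGap:
    control_id: str
    regulation: Regulation
    suggested_change: str

def scan_for_new_regulations() -> list[Regulation]:
    """Stand-in for an AI crawler monitoring regulators' websites."""
    return [Regulation("Telemarketing rule update", "New consent requirements")]

def map_to_controls(reg: Regulation, controls: dict[str, str]) -> list[ControlGap]:
    """Stand-in for an AI step that flags controls out of alignment.

    A real tool would use semantic analysis; here a keyword match
    marks a control as affected by the new regulation.
    """
    gaps = []
    for control_id, text in controls.items():
        if "consent" in text.lower():  # toy relevance check
            gaps.append(ControlGap(control_id, reg,
                                   "Require documented opt-in consent"))
    return gaps

def run_pipeline(controls: dict[str, str]) -> list[ControlGap]:
    """Chain the scan and mapping steps into one automated loop."""
    gaps = []
    for reg in scan_for_new_regulations():
        gaps.extend(map_to_controls(reg, controls))
    return gaps

controls = {"MKT-07": "Obtain customer consent before email campaigns"}
for gap in run_pipeline(controls):
    print(f"{gap.control_id}: {gap.suggested_change} (per {gap.regulation.citation})")
```

The point of the sketch is how few humans appear in it: the scan, the impact assessment, and the recommended control change all happen inside one loop.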
If a company does embrace that AI-driven approach to regulatory change, policy management, and controls design, exactly what is it automating? The answer seems to be that we’ve automated some of the compliance duties in the Second Line (regulatory change and policy management) and some of the duties in the Third Line (controls assessment and design).
So did AI erase two of the three lines? Did it blur them together? Did it do both at once? I’m not sure, but clearly using AI in this way (and it’s not an unreasonable way to use AI) does expose cracks in the traditional Three Lines model. We’d be foolish to pretend otherwise.
Now add all the other AI use cases into this picture. Suppose you have AI running First Line tasks in marketing, such as generating and sending customer email campaigns that must follow strict telemarketing rules. If one AI system is looking for regulatory changes to telemarketing, another system is updating policies to reflect those changes, and the original First Line AI systems now follow those updated rules — where are the Three Lines at all in this scenario?
They might be three separate AI systems, but they’re still working together seamlessly and automatically. I’m not sure that translates into “three lines” like we’d associate with humans.
Back to Human Implications
Before we get too carried away, we should keep this discussion grounded in the real world. Regulators, auditors, business partners, and investors all have an interest in how a corporation handles risk management, so let’s consider the question from their perspective.
For example, one could easily imagine that regulators (especially banking regulators or privacy regulators) would have lots of questions for a company that quietly tiptoes away from the Three Lines model toward some AI-driven model where the three lines are, for all practical purposes, woven into a single, automated Super Line.
Think of all the regulatory settlements we’ve seen in the past where the company in question (typically a bank) would need to improve its risk management framework. Even if artificial intelligence strengthens your risk management processes, how do you document your decision to make that shift? How do you square it with industry regulations that might require an independent internal audit function? What is that independent human in the Third Line supposed to oversee, if the AI is monitoring, assessing, and intercepting risk all the time?
I posed all these questions (and McLeod’s obituary) to a good friend who’s a veteran cybersecurity auditor. He wasn’t buying any of what McLeod was selling, noting that even if AI runs huge parts of your risk and compliance apparatus, humans still need to assess whether the AI system itself is running properly.
So the internal auditor in this scenario would need to devote more attention to the IT general controls governing the AI model, the data validation processes that guide how the AI is trained and learns, and cybersecurity precautions to make sure that hackers don’t somehow poison the AI by feeding it bad data. In other words, the internal auditor would still have plenty to do.
The question is whether internal audit would still do that work from its independent perch in the Third Line, or as part of some “general managerial loop of assurance,” as McLeod calls it, that no longer fits within the Three Lines model.
Nobody knows the answer to that question right now. But we’ll need to figure out that answer soon, or else we risk learning the answer the hard way.
