Every regulator and their uncle is climbing aboard the cybersecurity bandwagon these days. Before that bandwagon starts rolling away with itself, however, we might want to ask whether corporate audit and compliance teams, and even the regulators themselves, are going about all this in the wisest way possible.
Two recent posts on Radical Compliance capture the issue here. First was a post about whether companies might need to start making attestations to the effectiveness of their cybersecurity controls, to comply with the SEC’s proposed new rules for expanded disclosure of cybersecurity risk. The short answer is no, the proposals don’t expressly require attestations; but I’m hard-pressed to see how a large organization will manage such compliance without a system of attestation.
Second was a post about the Consumer Financial Protection Bureau warning technology companies serving the financial sector to do better at protecting customer data. One of the practices the CFPB recommended was an inventory of which IT systems have “dependencies” on certain software, including third-party software you use, so you can implement patches and software updates in a timely manner.
Well, that implies an ability to map out all the third-party software applications that touch your own IT systems, and how those interactions happen. You’d also need to monitor how those vendors patch and upgrade their own software, to understand whether the updated software will somehow change the interaction with your IT system.
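To make the idea concrete, here is a minimal sketch of what such a dependency inventory might look like in practice. All system names, package names, and version numbers below are hypothetical, and the version comparison is deliberately naive; the point is only to show how an inventory lets you trace one vendor advisory to every affected internal system.

```python
# Toy dependency inventory of the kind the CFPB describes: which internal
# systems depend on which third-party software, and at what version.
# All names and versions are made up for illustration.
inventory = {
    "payroll": {"openssl": "1.1.1k", "log4j": "2.14.0"},
    "crm": {"openssl": "3.0.2"},
    "billing": {"log4j": "2.17.1"},
}

def affected_systems(package: str, patched_version: str) -> list[str]:
    """Systems running an older version of `package` than the patch.

    Note: plain string comparison works for these toy version numbers;
    a real tool would use proper semantic-version parsing.
    """
    return [
        system
        for system, deps in inventory.items()
        if package in deps and deps[package] < patched_version
    ]

# An advisory telling you to patch log4j to 2.17.1 flags payroll,
# which is still on 2.14.0, but not billing, which is already current.
flagged = affected_systems("log4j", "2.17.1")
```

With an inventory like this, "implement patches in a timely manner" stops being an abstract exhortation and becomes a query you can run the moment an advisory lands.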
Attestations and dependency mapping are both feasible ways to address cybersecurity risk — but over the last week or so, I’ve talked to several cybersecurity auditors who question whether those ideas are the best ways to manage cybersecurity risk. These folks raise good points that warrant further discussion.
An Alternate Theory of Cyber Assurance
Essentially, my IT auditor friends argued for a much more cybersecurity-centric approach to managing IT systems. Such an approach would support the paramount objectives of the board and management team: keeping confidential data secure, and business continuity intact. The drawback, they conceded, is that those efforts won’t necessarily provide the assurance that external auditors and regulators can fit into their existing structures for compliance.
For example, say you want to minimize the risk of hackers invading your IT systems via third-party software that you use. Maybe you use Workday or ADP for payroll management; or Zoho or Salesforce for customer relationship management; or some other cloud-based tech provider that runs business processes for you. You want to be sure that those vendors don’t introduce a threat to your ERP data because they’ve been hacked themselves. (This is exactly how the SolarWinds attack swept through the U.S. government and Corporate America in 2020.)
Those cloud-based vendors work as follows. Any time they need to access data in your ERP system, essentially they inject a small bit of code into your ERP system that says, “Please give me the following data files so I can execute a transaction.” But attackers could target your vendor and implant malicious code there, so now the message injected into your ERP system would say, “I’m going to leave this ransomware attack behind, to be activated next Tuesday.”
A corporate organization could guard against that threat by inserting a layer of cybersecurity software between those cloud-based vendors and the ERP systems that house all your confidential data. Then, whenever Workday or Salesforce or whatever tries to access that ERP data, the cybersecurity software inspects the request. If it’s bogus, the transaction is halted and your security team gets an alert.
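A rough sketch of that inspection layer might look like the following. Everything here is hypothetical — the vendor names, the allow-list, and the toy "suspicious token" signatures; a real product would sit in the network path, parse actual API traffic, and use far more sophisticated detection. The sketch only shows the shape of the control: every request is checked, regardless of who sent it.

```python
# Minimal sketch of an inspection layer between cloud vendors and an ERP
# system. All names and rules below are hypothetical illustrations.
from dataclasses import dataclass

# Hypothetical allow-list: which vendor may perform which ERP operation.
ALLOWED_OPERATIONS = {
    ("workday", "read_payroll_data"),
    ("salesforce", "read_customer_records"),
}

# Toy signatures of payloads that should never appear in a vendor request.
SUSPICIOUS_TOKENS = ("DROP TABLE", "exec(", "encrypt_files")

@dataclass
class VendorRequest:
    vendor: str     # who is asking
    operation: str  # what they want to do
    payload: str    # the code or query they submitted

def inspect(req: VendorRequest) -> bool:
    """Return True if the request may proceed; False halts it and alerts."""
    if (req.vendor, req.operation) not in ALLOWED_OPERATIONS:
        return False  # unknown vendor/operation pair: halt and alert
    if any(tok in req.payload for tok in SUSPICIOUS_TOKENS):
        return False  # payload carries something it shouldn't
    return True

# A legitimate payroll read passes; a tampered payload from the same
# (compromised) vendor is halted before it ever touches the ERP data.
ok = inspect(VendorRequest("workday", "read_payroll_data",
                           "SELECT hours FROM timesheets"))
blocked = inspect(VendorRequest("workday", "read_payroll_data",
                                "exec(encrypt_files)"))
```

The design point worth noticing: the check keys on the request itself, not on any prior assurance about the vendor. A compromised Workday account fails inspection exactly the same way an unknown vendor does.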
Really, the above idea is just change management. Software changes arrive as code. So if you deploy a tool that scrutinizes every line of code that interacts with your ERP system, you have a more complete, accurate, and automated level of assurance over change management — assurance that also doubles as protection from third-party security risks.
The alternative is what many companies do today: the laborious process of identifying all vendors that access your ERP system (you won’t find them all), and then gaining some level of assurance from them through something like a SOC 2 audit (which might be incomplete or have the wrong scope).
Guarding your ERP data by inspecting every single transaction assumes that at least some vendors will be hacked or won’t have the necessary patches in place — but that’s OK, because your entire ERP system has been shrink-wrapped in an extra layer of cybersecurity protection.
So what do auditors and regulators do with an approach like that?
Who Are We Assuring Here?
One problem with the above approach to cybersecurity is that it doesn’t give auditors much to audit. That is, typically they might audit your IT general controls by, say, testing your processes for software patch management or studying how you test the security controls of your tech vendors. But when you instead rely on a layer of cybersecurity to inspect every interaction regardless of the vendor’s security, those other IT general controls become less important.
Except, if a company dared to speak such thoughts aloud — “We don’t dwell on the vendors’ IT controls, and instead just focus on every individual transaction” — that’s auditing heresy. The audit-compliance industrial complex has evolved to assume that we must have controls to test, by collecting a sample of transactions and testing how well the controls work.
The “layer and inspection approach” to cybersecurity is more akin to exception-based auditing, where the company just monitors all transactions and flags all non-standard ones for attention. Audit firms don’t quite know what to do with that approach, and frankly, neither do regulators like the Public Company Accounting Oversight Board or the Securities and Exchange Commission. PCAOB standards are designed for a world of sample-and-test auditing, not data analytics. (In fact, prior PCAOB leadership said in 2021 that no new standards were needed for data analytics. That always struck me as nutty and I hope the agency’s new leaders will revisit the matter.)
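The methodological gap is easy to see in miniature. The sketch below contrasts the two approaches on a hypothetical transaction log: sample-and-test pulls a small random sample and hopes it is representative, while exception-based monitoring examines every transaction and surfaces only the outliers. The field names and the $10,000 threshold are invented for illustration.

```python
# Toy contrast: sample-and-test auditing vs. exception-based monitoring.
# Transaction fields and the flagging threshold are hypothetical.
import random

# A thousand routine transactions, plus one anomaly buried among them.
transactions = [{"id": i, "amount": round(random.uniform(10, 500), 2)}
                for i in range(1000)]
transactions.append({"id": 1000, "amount": 250_000.00})

# Sample-and-test: inspect 25 transactions at random. The anomaly is
# overwhelmingly likely to be missed (roughly a 2.5% chance of drawing it).
sample = random.sample(transactions, 25)

# Exception-based: examine the full population, flag only the non-standard.
exceptions = [t for t in transactions if t["amount"] > 10_000]
```

The exception-based pass always finds the one anomalous transaction, because it looks at everything; the sample usually will not. That is the approach PCAOB-style standards struggle to accommodate — there is no "control" being tested, just a population being continuously watched.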
Another thought: among the SEC’s proposals for cybersecurity disclosures is a requirement that companies disclose their policies and procedures for governing cyber risk among third parties. The layer-and-inspection approach is a great example of what you could disclose on that point, if your company dared — but again, would the SEC staff and even executives themselves know what to do with a disclosure like that? Would it be sufficient?
Or would companies err on the side of caution, and revert back to what we know: attestations from the CISO, subordinates, First Line operating teams, third-party vendors, and others about the effectiveness of cybersecurity controls?
Because nobody ever gets fired for proposing yet another attestation to cover your corporate behind. But does that really mean it’s still the best way to do things?