Today we have another dispatch from this week’s ISACA-Institute of Internal Auditors GRC Conference, on a subject that gives compliance and audit professionals plenty of heartburn: emerging technologies. How can you apply GRC frameworks to assure that as those technologies spread through your enterprise, they don’t cause unnecessary risk?
That was the question for a conference session on Tuesday afternoon, with plenty of good advice.
The speakers (Stephanie Losi of 219 Labs and Annu Warikoo, executive director of technology governance at JP Morgan Chase) described the challenge as follows. A new technology comes along — artificial intelligence, the blockchain, advanced robotics, and so forth — and the business units flock to it, because new technology drives innovation. Too often, however, innovation outpaces your GRC controls.
That means your risks are under-controlled, and you need to bring your oversight mechanisms (policies, procedures, controls) back into alignment with the risk. That is what a well-designed GRC framework should be able to do.
What can go wrong absent that strong framework? Losi and Warikoo gave a few examples:
- Microsoft launching a beta-test version of its AI-powered Bing chatbot last year. The chatbot soon started spouting belligerent and confused answers to its beta-test users, including journalists.
- Capital One suffering a huge data breach in 2019, because the bank had been storing its data on the Amazon Web Services cloud platform. A former AWS employee hacked into the system and stole the data. Banking regulators ultimately fined Capital One $80 million.
- A British bank (which one, I’m not sure) that attempted a major IT system migration in 2018. The migration hadn’t been properly tested, and locked millions of customers out of their accounts. British regulators fined the bank £50 million for inadequate testing.
All of those situations involved new technologies that had not been properly controlled before being unleashed upon regular business operations. GRC frameworks are supposed to guide that preparatory work, so the real-world disasters don’t come to pass.
What a GRC Framework Should Accomplish
A GRC framework should define several important principles for the adoption of a new technology, Losi and Warikoo said. Specifically, the framework must define:
- What is legally required when using the technology;
- What meets the requirements of your Code of Conduct;
- What addresses the risks of new technology as the tech is introduced into your business environment.
The first two bullet points are fairly self-evident. For example, if you want to use ChatGPT to help write reports, your framework should identify the legal requirements your company has for data privacy and confidentiality, so you can sit back and say, “OK, how can we use ChatGPT in a manner that’s compliant with HIPAA, PCI DSS, and these other privacy compliance obligations listed on my screen here?”
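To make that concrete, a minimal sketch of what such a control could look like in practice: a filter that screens prompts for regulated data before they ever reach an external LLM. The pattern names, regexes, and policy below are all invented for illustration; a real implementation would be far more robust than a few regular expressions.

```python
import re

# Hypothetical pre-submission control: screen a prompt for data that privacy
# obligations (HIPAA, PCI DSS, etc.) say must not leave the organization.
# These patterns are illustrative only, not a real compliance solution.
PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),           # U.S. Social Security number
    "card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),         # likely payment card number
    "mrn": re.compile(r"\bMRN[:\s]*\d{6,10}\b", re.I),     # hypothetical medical record number
}

def screen_prompt(prompt: str) -> tuple[bool, list[str]]:
    """Return (allowed, names of the patterns that matched)."""
    hits = [name for name, pat in PATTERNS.items() if pat.search(prompt)]
    return (not hits, hits)

# A prompt containing an apparent medical record number would be blocked;
# an ordinary drafting request would pass through.
allowed, hits = screen_prompt("Summarize the visit for patient MRN: 84421937")
```

The point is not the regexes; it is that the framework names the legal obligation, and the control sits in front of the technology rather than trusting users to remember the policy.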
Defining what meets the requirements of your Code of Conduct is more like defining fair and acceptable uses of the new technology. For example, if your Code talks about fair treatment of all stakeholders, then your framework might specify that new technologies cannot have a corrosive effect on certain populations — say, a social media platform not encouraging teen girls to develop eating disorders, or not encouraging white supremacists to storm the U.S. Capitol.
Those first two bullet points are more the domain of chief ethics and compliance officers. The third bullet point, on how to address risks of the new technology as you push it into your business environment, is more the domain of internal auditors or risk management teams. This is where things can get really interesting.
Take application development as an example. Back in ye olde days when humans developed software code, you could define and implement controls at the asset level (that is, the app). Today, if you use ChatGPT or some other generative AI to write software code, you’ll need to define and implement controls at the process level, to assure that the AI doesn’t take your application development over a cliff before anyone can stop it.
For example, if AI finds a bug in human-written software, what happens next? Does the AI fix the error automatically, or submit a ticket to the IT desk? If it submits a ticket, who approves that? Or if we hare-brained humans have pushed flawed software into a live environment, should your AI have the authority to roll that software back to an earlier, more reliable version?
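The questions above are exactly what a process-level control has to answer. Here is a minimal sketch of one way to encode them: every AI-proposed fix passes through a policy gate that decides how much autonomy it gets. The risk tiers, action names, and thresholds are invented for illustration, not a prescribed design.

```python
from enum import Enum

# A sketch of a process-level control for AI-driven remediation: the AI never
# acts on code directly; each proposed action is routed by an explicit policy.
class Action(Enum):
    AUTO_FIX = "apply automatically, log for later review"
    OPEN_TICKET = "open a ticket for the IT desk"
    HUMAN_APPROVAL = "block until a named human approves"

def route_ai_fix(severity: str, in_production: bool) -> Action:
    """Decide how an AI-proposed code fix is handled."""
    if in_production:
        # Changes to live systems always cross the "human point".
        return Action.HUMAN_APPROVAL
    if severity == "low":
        # Trivial fixes in non-production can flow, but leave an audit trail.
        return Action.AUTO_FIX
    # Everything else goes through the normal ticket queue.
    return Action.OPEN_TICKET
```

However the thresholds are set, the gate itself is the control: the answer to “who approves that?” is written down in one place, auditable, and changeable without retraining anyone.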
“How much responsibility are we going to give AI?” Losi asked the crowd. “There should always be some level of oversight over AI, but what is that level?”
Losi was making a point I’ve raised before about artificial intelligence. Companies embracing AI need to define the human point: that point in a business process where AI ends and human judgment begins. The more you entrust to AI, the stronger your process- and even entity-level controls need to be. Wise audit and compliance professionals would use a GRC framework to help them appreciate those nuances in internal control and implement accordingly.
Where Do GRC Frameworks Come From?
GRC frameworks come from all sorts of places. You just need to find one that makes sense for your business.
For new technologies, the go-to resource for GRC frameworks is NIST, the National Institute of Standards and Technology. NIST has long had frameworks for both privacy and cybersecurity, and more recently published a framework for artificial intelligence. Some businesses (mostly government contractors) are required to use certain NIST frameworks, but all the frameworks are also available to any organization that wants to use them on a voluntary basis.
Nor is NIST the only source for GRC frameworks. COSO, HITRUST, CMMC, ISO, and numerous other organizations (all acronyms, apparently) publish frameworks you can use for technology adoption.
All that said, when you’re trying to integrate an emerging technology into your operations, a standard GRC framework might not be quite the right fit. In theory you could develop your own framework from scratch, although I don’t see much sense in re-inventing the framework wheel. Better to take one of the existing frameworks and tailor it to your needs.
One point that Losi and Warikoo stressed was that if you do develop your own GRC technology framework, be sure to integrate it into whatever enterprise risk management (ERM) framework you use for managing risk generally. (That may sound complicated, but a battalion of GRC software vendors are eager to help you with tools to do this.)
If you have a technology framework and ERM framework operating independently from each other, you’re more likely to have duplicative controls, unnecessary bottlenecks, and irritated employees — who will then be more likely to use the new technology without telling you, and now the whole point of your GRC effort just went up in flames.
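A toy illustration of why that integration matters: if controls from both frameworks are mapped to a shared risk taxonomy, the duplicates surface immediately. The control IDs and risk names below are invented; the point is the mapping exercise, not these particular entries.

```python
# Hypothetical control inventories from a technology framework and an ERM
# framework, each keyed by control ID and mapped to the risk it addresses.
tech_controls = {
    "TECH-01": "third-party data storage",
    "TECH-02": "model output review",
    "TECH-03": "change management",
}
erm_controls = {
    "ERM-11": "third-party data storage",
    "ERM-12": "fraud monitoring",
    "ERM-13": "change management",
}

def find_overlaps(a: dict[str, str], b: dict[str, str]) -> dict[str, list[str]]:
    """Group control IDs from both frameworks by the risk they address."""
    by_risk: dict[str, list[str]] = {}
    for ctrl_id, risk in {**a, **b}.items():
        by_risk.setdefault(risk, []).append(ctrl_id)
    # Risks covered by more than one control are candidates for consolidation.
    return {risk: ids for risk, ids in by_risk.items() if len(ids) > 1}
```

Run against the sample inventories, this flags “third-party data storage” and “change management” as risks with duplicate controls; those are the spots where independent frameworks would make employees do the same compliance work twice.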
I would only add one final thought. With so many new technologies racing into adoption, and so many risks in tow, there are tremendous opportunities here for the right audit or compliance leader; you can participate in establishing the rules for how your company adopts new technology wisely, legally, and profitably. You simply need to know how to raise the relevant questions about risk, and then have your recommended answers waiting in the wings.
That’s what a GRC framework helps you to do: sharpen your thinking, so you can make fewer mistakes.