Repository of AI Risks Available
Some red meat today for everyone panicking about the risks (compliance or otherwise) of artificial intelligence: the brains at MIT have published a catalog of more than 700 potential risks posed by AI, which you can use to feed your AI risk management program.
A team of MIT researchers known as the FutureTech Group published the catalog, formally known as the AI Risk Repository, earlier this month. It’s free to all and designed to help a wide range of audiences, from academic researchers to policymakers to, yes, corporate risk managers trying to develop risk assessments for the AI systems running at their companies. (Credit to compliance consultant Mark Rowe for noting the repository on LinkedIn earlier this week.)
The 700+ risks are organized into seven primary domains, such as discrimination, privacy, and system safety. Those seven domains are split into 23 more specific sub-domains, which in turn break down into even more granular risk categories.
The actual repository exists as a Google spreadsheet you can download, with various columns classifying each risk, describing its potential severity, identifying the potential cause (human versus AI itself; accidental versus deliberate action), and otherwise giving you a wealth of context. See Figure 1, below.
The repository also links each risk back to research papers that explain and document the issue so compliance officers, risk managers, and auditors can ponder the risk at length.
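If you’d rather slice that spreadsheet programmatically than scroll through it, a minimal sketch of the idea (in Python, using pandas on a CSV export) might look like the code below. The column names are my assumptions, not the repository’s guaranteed headers, so check them against the actual download.

```python
# A minimal sketch, not an official MIT tool: slice a local export of the
# AI Risk Repository with pandas. The column names ("Domain", "Subdomain",
# "Entity", "Intent") are assumptions; verify them against the headers in
# the spreadsheet you actually download.
import pandas as pd

# Assume the risk database tab has been exported from Google Sheets to CSV.
risks = pd.read_csv("ai_risk_repository.csv")

# How many cataloged risks fall under each of the seven primary domains?
print(risks["Domain"].value_counts())

# Drill into one domain's sub-domains, e.g. anything privacy-related.
privacy = risks[risks["Domain"].str.contains("privacy", case=False, na=False)]
print(privacy["Subdomain"].value_counts())

# Causal context: which entity causes each risk, and was it intentional?
print(risks.groupby(["Entity", "Intent"]).size())
```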
How to Put the AI Repository to Work
Organizations can put the repository to use in a few ways, such as:
- To conduct AI risk assessments (a rough sketch follows this list);
- To identify new (and therefore previously undocumented) risks;
- To evaluate risk exposure and then develop mitigation strategies;
- To develop research and training related to AI use in your business.
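For the first item on that list, here’s a rough sketch of how a team might seed a first-pass risk assessment worksheet from the repository. The domain labels and assessment columns are illustrative assumptions, not anything the repository prescribes.

```python
# Hedged sketch: seed a draft AI risk assessment worksheet from the repository.
# The domain names and assessment columns are assumptions; adjust them to match
# the real spreadsheet and your own assessment methodology.
import pandas as pd

risks = pd.read_csv("ai_risk_repository.csv")

# Suppose the first planned deployment is a customer-facing chatbot; start
# with the domains most obviously relevant to that use case.
relevant_domains = ["Discrimination & toxicity", "Privacy & security", "Misinformation"]
subset = risks[risks["Domain"].isin(relevant_domains)].copy()

# Add the fields your assessors will fill in for each candidate risk.
for col in ["Likelihood", "Impact", "Existing controls", "Owner"]:
    subset[col] = ""

subset.to_csv("ai_risk_assessment_draft.csv", index=False)
print(f"Drafted worksheet with {len(subset)} candidate risks")
```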
A better question to ask is who gets to use the repository “first,” so to speak. That is, you can’t conduct a thorough AI risk assessment, which is something the compliance team would typically do, until you inventory all your AI risks, which is something internal audit would typically do. There’s a certain chicken-and-egg dynamic here that your company should think through before it can make much useful progress.
Ideally, your company should first establish some sort of in-house AI steering committee — a cross-enterprise group that will decide how AI gets adopted across the whole organization. Exactly who sits on that committee will vary from one company to the next, but one reasonable lineup would be the chief technology officer as chair, with representatives from legal, compliance, finance, operations, and internal audit.
That whole steering committee could then look at this repository and say, “OK, this useful new tool exists to help us organize our AI risk management approach. How can our various risk assurance functions put this tool to best use?” That’s the conversation you want to have.
For example, one plausible path would be for the internal audit team (or an IT audit team, if you have one) to consult with the rest of the enterprise about potential use-cases for AI within the business, and then see which of the 700+ identified risks line up with those use-cases.
From there, internal audit could work with other business functions as necessary to recommend policies, procedures, and controls. For example, audit could work with the CISO on security issues, with compliance on regulatory issues, and with operations teams on practical operational issues. (Say, how your AI chatbot will interact with customers.)
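Purely as a thought experiment, that division of labor could even be expressed as a simple routing table over the repository’s domains. Both the domain labels and the ownership calls below are assumptions that a steering committee would have to settle for itself.

```python
# Illustrative only: route each repository domain to the assurance function
# most likely to own the follow-up work. Both the domain labels and the
# routing choices are assumptions, not recommendations from MIT.
import pandas as pd

risks = pd.read_csv("ai_risk_repository.csv")

domain_owner = {
    "Privacy & security": "CISO",
    "Malicious actors & misuse": "CISO",
    "Discrimination & toxicity": "Compliance",
    "Misinformation": "Compliance",
    "Human-computer interaction": "Operations",
    "Socioeconomic & environmental harms": "Legal",
    "AI system safety, failures & limitations": "Internal audit / IT audit",
}

risks["Proposed owner"] = risks["Domain"].map(domain_owner).fillna("Steering committee")
print(risks.groupby("Proposed owner").size().sort_values(ascending=False))
```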
A Word on Compliance and Frameworks
Some compliance officers might be wondering, “What about the EU AI Act? That just went into effect at the beginning of August. Doesn’t it require companies to use frameworks to manage their AI risks?”
Yes it does, and that raises an important point about the MIT risk repository. The 700+ risks in the repository were pulled together from 43 separate risk management frameworks. So are the primary frameworks we’re using to assess AI risk today — the NIST AI Risk Management Framework and the ISO 42001 standard, for example — comprehensive enough to capture all the AI risks you actually have?
Maybe they’re not. Or more precisely, maybe those frameworks are good enough to help you define a process to assess and manage AI risk, but they don’t enumerate enough of the actual risks you need to assess and manage. So you’ll still need other resources to help you identify those actual risks — like, say, the MIT repository.
To that point, the news website TechCrunch had an interview with the lead author of the MIT repository, a FutureTech researcher named Peter Slattery. He had this to say:
“People may assume there is a consensus on AI risks, but our findings suggest otherwise. We found that the average frameworks mentioned just 34% of the 23 risk subdomains we identified, and nearly a quarter covered less than 20 percent. No document or overview mentioned all 23 risk subdomains, and the most comprehensive covered only 70 percent. When the literature is this fragmented, we shouldn’t assume that we are all on the same page about these risks.”
So when it comes time to comply with the EU AI Act (or other AI regulations that require companies to use frameworks), your company might need more than NIST or ISO frameworks to get the job done. You might need to cobble together material from a wide range of sources.
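To put some back-of-the-envelope arithmetic on that: 34 percent of 23 sub-domains is roughly eight, and 70 percent is roughly sixteen, so even the most comprehensive single framework leaves around seven sub-domains unaddressed. A toy sketch of checking your own frameworks’ combined coverage might look like this; the per-framework sub-domain sets are placeholders, not real mappings.

```python
# Back-of-the-envelope sketch: what Slattery's percentages mean in sub-domain
# counts, and how to check the combined coverage of the frameworks you rely on.
# The sub-domain ID sets below are placeholders, not real mappings.
TOTAL_SUBDOMAINS = 23

print(round(0.34 * TOTAL_SUBDOMAINS))  # "average" framework: roughly 8 sub-domains
print(round(0.70 * TOTAL_SUBDOMAINS))  # most comprehensive: roughly 16 sub-domains

# Hypothetical coverage for the frameworks your company actually uses.
framework_coverage = {
    "NIST AI RMF": {1, 2, 3, 5, 7, 8, 11, 14, 19},
    "ISO 42001": {1, 2, 4, 5, 9, 11, 15, 20},
}
combined = set().union(*framework_coverage.values())
print(f"Combined coverage: {len(combined)} of {TOTAL_SUBDOMAINS} sub-domains")
```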
The good news is that companies still do have time to build their AI risk management and compliance programs. The bad news is that this is going to be a long, difficult, whole-of-enterprise endeavor. So pass along news of this MIT risk repository to whoever at your company might benefit from seeing it, because companies are going to need all the help they can get.