Advice on AI Policies, Procedures
Today we return to artificial intelligence, with a look at the policies and procedures companies could use to govern the use of AI. State financial regulators in New York recently proposed guidance for AI in the insurance sector that goes into detail about policies and procedures, so let’s see what might be useful for companies in any industry.
The material comes from the New York Department of Financial Services (DFS), and so far it’s just proposed guidance. DFS is targeting the insurance industry because (1) insurers churn through vast troves of personal and financial data while calculating policy premiums; and (2) using artificial intelligence on all that data could lead to inadvertent discrimination against certain groups, which is what DFS wants to avoid.
The proposed guidance is open for public comment through March 17. DFS will then study those comments and publish final guidance sometime in the future. Exactly when that final guidance will arrive and what it might look like is anyone’s guess.
We’ve talked about AI many times before in these pages, and I often stress that the first step is to treat AI as you would any other new technology. That means your company should have a process in place to govern the adoption of new technology. Which committee of the board reviews technology adoption? Which senior and operating executives evaluate new technologies?
Ideally, you should have an in-house committee of operating executives, technology personnel, and risk assurance types (legal, IT security, compliance, privacy). That committee should explore all the ways the new technology could help operations, all the ways it could bring new risk to the organization, and what controls you’ll implement to strike a balance between those two poles.
The DFS proposed guidance hits on all those same points, recommending “a cross-functional management committee with representatives from key function areas, including legal, compliance, risk management, product development, underwriting, actuarial, and data science, as appropriate.” Which is what I just said, but more boring.
OK, let’s say you have the cross-functional committee and the engaged board and all that high-level stuff. What policies and procedures — the mid-level stuff — should come next, to keep all those lofty AI ambitions on the right track?
Document Your Use of AI
That’s the big message from DFS: if your company is going to incorporate artificial intelligence into its operations, you should compile written documentation of how you expect to manage that AI usage. That documentation should include…
- A description of how you identify operational, financial, and compliance risks associated with AI, and the associated internal controls designed to mitigate those risks.
- An up-to-date inventory of all AI that is currently in use, under development, or recently retired (see the sketch after this list).
- A description of how each AI system operates, including any external data or other inputs and their sources, the products for which the AI is designed, any restrictions on use, and any potential risks and appropriate safeguards.
- A description of how you track changes to your AI usage over time, including documented explanations of any changes, the rationale for those changes, and who approved them.
- A description of how you monitor AI usage and performance, including a list of any previous exceptions to policy and how those exceptions were reported.
- A description of testing conducted periodically to assess the output of AI models, including drift that may result from the use of machine learning or other automated updates.
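To make the inventory and change-log items concrete, here is a minimal sketch, in Python, of what one inventory record might look like. The field names, status values, and the example entry are my own illustrative assumptions; DFS doesn’t prescribe any particular format.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ChangeRecord:
    """One entry in the change log: what changed, why, and who signed off."""
    changed_on: date
    description: str
    rationale: str
    approved_by: str

@dataclass
class AIInventoryEntry:
    """One AI system in the inventory, whether in use, under development, or retired."""
    name: str
    status: str                          # "in use", "in development", or "retired"
    purpose: str                         # products or decisions the system supports
    external_data_sources: list[str]     # third-party data feeds and their providers
    restrictions_on_use: list[str]
    known_risks: list[str]
    safeguards: list[str]
    change_log: list[ChangeRecord] = field(default_factory=list)

# Hypothetical example record.
underwriting_model = AIInventoryEntry(
    name="auto-underwriting-score-v3",
    status="in use",
    purpose="risk scoring for personal auto policies",
    external_data_sources=["credit bureau feed", "vehicle history vendor"],
    restrictions_on_use=["not the sole basis for declining coverage"],
    known_risks=["possible proxy discrimination via credit attributes"],
    safeguards=["quarterly fairness testing", "human review of adverse decisions"],
    change_log=[ChangeRecord(date(2024, 1, 15), "retrained on 2023 data",
                             "loss patterns shifted", "model risk committee")],
)
```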
None of the above documentation requirements should be surprising. They spell out how you approach risk assessments, change management, testing, monitoring, and so forth: control activities that every company should already be doing, in a disciplined and methodical manner. You could substitute “ERP” for “AI” in those six points above and they would read like practices for a Sarbanes-Oxley audit.
Of course, here in the real world plenty of businesses either don’t have those internal controls in place, or aren’t following them diligently. Hence we end up with enforcement actions like what we saw against Rite Aid last month, when the Federal Trade Commission banned it from using facial-recognition technology. One of the FTC’s chief complaints: inadequate testing and monitoring of Rite Aid’s AI performance.
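What might adequate testing and monitoring actually look like? For the “drift” item above, one common approach is a distribution check on model scores: compare current production scores against a validation baseline and escalate for review if the shift is large. Here is a minimal sketch using a population stability index; the bin count and the 0.2 alert threshold are rules of thumb I am assuming for illustration, not anything DFS or the FTC mandates.

```python
import numpy as np

def population_stability_index(baseline: np.ndarray, current: np.ndarray, bins: int = 10) -> float:
    """Measure how far the current score distribution has drifted from the baseline."""
    # Bin edges come from the baseline distribution's quantiles (interior cut points only).
    cuts = np.quantile(baseline, np.linspace(0, 1, bins + 1))[1:-1]
    baseline_pct = np.bincount(np.digitize(baseline, cuts), minlength=bins) / len(baseline)
    current_pct = np.bincount(np.digitize(current, cuts), minlength=bins) / len(current)
    # Guard against empty bins before taking the log.
    baseline_pct = np.clip(baseline_pct, 1e-6, None)
    current_pct = np.clip(current_pct, 1e-6, None)
    return float(np.sum((current_pct - baseline_pct) * np.log(current_pct / baseline_pct)))

# Hypothetical usage: baseline scores from validation, current scores from production.
baseline_scores = np.random.default_rng(0).beta(2, 5, size=10_000)
current_scores = np.random.default_rng(1).beta(2, 4, size=10_000)   # a slightly shifted distribution

psi = population_stability_index(baseline_scores, current_scores)
if psi > 0.2:   # common rule of thumb; your own threshold is a risk decision
    print(f"PSI {psi:.3f} exceeds threshold -- escalate for model review")
```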
So now let’s say you’ve compiled all the documentation. What next?
From Documentation to Controls
After documentation, the DFS guidance says insurers should put their AI technology through its paces during development. That means rigorous standards for model development, implementation, and validation. It also means applying independent review and “effective challenge” (there’s a wonderful phrase) to risk analysis, validation, testing, and AI development.
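What could “rigorous validation” and “effective challenge” look like in practice? One small illustration: hold out data the development team never touched, score the candidate model against a benchmark, and require it to clear documented acceptance thresholds before anyone signs off. Everything in the sketch below (the data, the models, the thresholds) is an assumption I’m making for illustration, not a DFS requirement.

```python
from sklearn.datasets import make_classification
from sklearn.dummy import DummyClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Hypothetical data standing in for underwriting features and outcomes.
X, y = make_classification(n_samples=5_000, n_features=20, random_state=0)

# The holdout set is reserved for the independent reviewer, not the development team.
X_train, X_holdout, y_train, y_holdout = train_test_split(X, y, test_size=0.3, random_state=0)

candidate = LogisticRegression(max_iter=1_000).fit(X_train, y_train)
benchmark = DummyClassifier(strategy="prior").fit(X_train, y_train)  # a deliberately naive challenger

candidate_auc = roc_auc_score(y_holdout, candidate.predict_proba(X_holdout)[:, 1])
benchmark_auc = roc_auc_score(y_holdout, benchmark.predict_proba(X_holdout)[:, 1])

MIN_AUC = 0.70   # illustrative acceptance threshold, set by your own model risk policy
approved = candidate_auc >= MIN_AUC and candidate_auc > benchmark_auc
print(f"candidate AUC {candidate_auc:.3f}, benchmark AUC {benchmark_auc:.3f}, approved: {approved}")
```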
DFS doesn’t mention any specific framework to guide your AI efforts, but let me note that NIST unveiled its AI Risk Management Framework last year, and NIST is about as trustworthy as you can get in this line of work.
As always, third parties are an issue here too. DFS says insurers must have standards, policies, and procedures for any AI they use that was developed or deployed by a third-party vendor. That includes procedures for reporting any incorrect information back to your AI vendors for further investigation and update, as well as procedures to eliminate that incorrect information from your AI.
The last significant directive is about the role internal audit should play. Insurers operating in New York are required under DFS rules to have an internal audit function, and this proposed guidance says the internal audit team should assess “the overall effectiveness of the AI risk management framework.” That would include…
- Assessing the accuracy and completeness of AI documentation and adherence to documentation standards, including risk reporting.
- Evaluating the processes for establishing and monitoring internal controls, such as limits on AI usage.
- Assessing your AI’s supporting systems and evaluating the accuracy, reliability, and integrity of any external data used by the AI.
- Assessing potential biases in the data that could result in unfair or unlawful discrimination against insured groups (see the sketch after this list).
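On that last point, one widely used first screen for disparate outcomes is the adverse impact ratio: compare favorable-outcome rates across groups and flag anything below the familiar four-fifths threshold. A minimal sketch follows; the data is hypothetical, the 0.8 cutoff is a screening heuristic rather than a legal standard, and real fairness testing for insurance would go considerably deeper than this.

```python
import pandas as pd

# Hypothetical decision data: 1 = favorable outcome (e.g., policy approved at standard rates).
decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B", "B", "B"],
    "approved": [1,   1,   1,   0,   1,   0,   0,   1,   0,   0],
})

approval_rates = decisions.groupby("group")["approved"].mean()
impact_ratio = approval_rates.min() / approval_rates.max()

print(approval_rates)
print(f"adverse impact ratio: {impact_ratio:.2f}")
if impact_ratio < 0.8:   # four-fifths rule of thumb, used here only as a screen
    print("flag for review: outcome rates differ materially across groups")
```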
Again, this all makes sense for any business using AI — although for many companies, it raises the question of whether you have IT audit personnel who can handle such sophisticated audits. If you do, they’re probably looking to jump to a vendor at double their salary.
Regardless, this proposed guidance offers plenty of food for thought about how to manage AI in your enterprise. Right now, those of us on compliance and audit teams need to take useful guidance anywhere we can find it.