Thoughts on Compliance, AI, and AML
Accenture published a report Tuesday speculating on the future of financial markets for the next few years, with some thought-provoking points for compliance officers mired in anti-money laundering compliance.
First, Accenture estimates the overall cost of risk and compliance for the financial sector at roughly $96 billion annually, and the cost of AML compliance specifically is somewhere around $10 billion per year. But the vast majority of that effort seems to be wasted; more than 99 percent of suspicious transactions flagged for review are false positives that didn’t need attention, while only about 1 percent of actual fraudulent transactions are stopped.
That puts compliance officers in a tough spot. You can’t improve AML compliance efforts without more investment in better technology, but convincing the CFO and the board to spend more on something that’s already such a mess is a hard sell.
Moreover, exactly what new technology are you supposed to invest in? Technology infrastructure in the capital markets is notoriously complex and fractured; a large financial firm might have hundreds of separate IT systems cobbled together. Large companies in other sectors aren’t much better. Plus, AML compliance sits in a constantly evolving regulatory landscape, with new sanctions, suspicious persons, and fraud schemes emerging all the time. Figuring out where to place your compliance technology bets in that environment is tricky.
I can’t recommend specific software solutions, and the Accenture report doesn’t contain any either. It does, however, include the following call to action, which offers important clues about compliance technology capabilities: that is, what your technology solution will need to be able to do.
It is time to move all three lines of defense into the digital age. This will require that capital markets players adopt new technology to offset both machine risk (cyberattacks, algorithmic breakdown in trading or risk models) and human risk (conduct and compliance monitoring, anti-phishing security). It will also require a move from reactive rulemaking to forward-thinking change and safeguards, particularly around the emerging technology agenda.
So what does that paragraph tell us?
Getting to Normal
Start with artificial intelligence applied to AML and all those false positives. The goal for machine learning and AI is to reduce false positives, so that compliance staff performing intensive manual reviews can focus their energy on truly questionable transactions. The result might seem a bit counter-intuitive at first glance: compliance departments will get fewer suspicious transactions to review, but the ones that do arrive will be “more” suspicious. The AI will take care of the duds that waste your time.
This will work in two ways. First, the more data you feed into your AI program, the better it will become at identifying the true risk of a transaction. For example, plenty of false positives come from similar names that one person uses: John Public, John Q. Public, J.Q. Public, and so forth. Once we start getting into names transliterated from Arabic, Chinese, or other non-Roman scripts, the picture gets complicated quite fast.
The more data your AI program can use to learn which customers legitimately go by which names, and which scam artists are hiding behind similar ones, the fewer false positives your compliance staff will receive.
So that’s capability No. 1: your AI program will need to be able to consume vast quantities of data; and your compliance program overall will need access to all that data.
Some of that data might come from your organization’s own historical records. Other data will come from outside sources such as the Treasury Department’s list of Specially Designated Nationals, or vendors’ own resources for politically exposed persons and scammers found via adverse media checks. Regardless of the source, pulling in and learning from all that data is what your AI program must be able to do if an investment like that is going to pay off.
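To make capability No. 1 concrete, here is a minimal sketch of the kind of name-screening logic such a program performs. It uses Python’s standard difflib for fuzzy matching; the watchlist entries, customer aliases, and similarity threshold are all hypothetical, and a production AML engine would draw on far richer data and far more sophisticated models.

```python
from difflib import SequenceMatcher

# Hypothetical data. A real program would pull from sanctions lists (e.g., the
# OFAC SDN list), PEP databases, adverse media feeds, and the firm's own records.
WATCHLIST = ["Ivan Petrov", "Pyotr Ivanov"]
KNOWN_ALIASES = {"C-1001": ["John Public", "John Q. Public", "J.Q. Public"]}

def normalize(name: str) -> str:
    """Lowercase and strip punctuation so 'J.Q. Public' and 'JQ Public' compare cleanly."""
    return "".join(ch for ch in name.lower() if ch.isalnum() or ch.isspace()).strip()

def similarity(a: str, b: str) -> float:
    """Return a 0-to-1 similarity score between two normalized names."""
    return SequenceMatcher(None, normalize(a), normalize(b)).ratio()

def screen(counterparty: str, customer_id: str, threshold: float = 0.85) -> str:
    """Escalate possible watchlist matches; suppress known-alias false positives."""
    if any(similarity(counterparty, entry) >= threshold for entry in WATCHLIST):
        return "possible watchlist match - escalate for review"
    if any(similarity(counterparty, alias) >= threshold
           for alias in KNOWN_ALIASES.get(customer_id, [])):
        return "known alias of existing customer - likely false positive"
    return "no match"

print(screen("J Q Public", "C-1001"))     # known alias - likely false positive
print(screen("Ivan Petroff", "C-1001"))   # possible watchlist match - escalate
```

The point is not the matching algorithm itself; it is that every one of those lookups assumes the program can reach the customer records, sanctions lists, and alias histories in the first place.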
Second, however, is that your AI program will need to understand what a “normal” transaction looks like for each customer. That is, not only will it need to learn that John Public and J.Q. Public have been your customer for years; it will need to know that, say, John Q. has never owned any real estate partnership, and 90 percent of his business travel is to New York and Chicago. So a sudden wire transfer to a real estate partnership in London is abnormal.
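Here is an equally minimal sketch of that second capability: comparing a transaction against a customer’s learned baseline. The profile fields, thresholds, and John Q. Public’s history below are hypothetical; a real system would learn them from years of transaction data rather than hard-code them.

```python
from dataclasses import dataclass, field

@dataclass
class CustomerProfile:
    """What 'normal' looks like for one customer, derived from transaction history."""
    usual_destinations: set = field(default_factory=set)
    usual_counterparty_types: set = field(default_factory=set)
    typical_max_amount: float = 0.0

def anomaly_reasons(profile: CustomerProfile, destination: str,
                    counterparty_type: str, amount: float) -> list:
    """List the ways a transaction departs from this customer's baseline."""
    reasons = []
    if destination not in profile.usual_destinations:
        reasons.append(f"unfamiliar destination: {destination}")
    if counterparty_type not in profile.usual_counterparty_types:
        reasons.append(f"unfamiliar counterparty type: {counterparty_type}")
    if amount > 2 * profile.typical_max_amount:  # hypothetical threshold
        reasons.append(f"amount {amount:,.0f} far above the customer's typical maximum")
    return reasons

# John Q. Public's baseline: business travel to New York and Chicago, no real estate.
john = CustomerProfile(
    usual_destinations={"New York", "Chicago"},
    usual_counterparty_types={"airline", "hotel", "restaurant"},
    typical_max_amount=5_000,
)

# A sudden wire to a real estate partnership in London trips every check.
print(anomaly_reasons(john, "London", "real estate partnership", 250_000))
```

An alert backed by reasons like these is exactly the kind of “more suspicious” transaction a human reviewer should spend time on.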
No, Really; It’s All About Normal
That concept of identifying a normal transaction goes way beyond AML compliance. As we keep moving into the digital world, this will be one of the most important risk management capabilities an organization can have.
For example, I was intrigued when Accenture mentioned human risks, and specifically phishing attacks. Most compliance officers now know the cybersecurity risk of hackers posing as the CEO of the company and emailing someone in HR, “Please send me a spreadsheet with the W-2 information for all the employees we have.”
That is an abnormal request for most CEOs. The hackers hope that the employee receiving the email either doesn’t recognize the request as abnormal, or won’t challenge the request anyway. So the control against that cybersecurity risk is someone, somewhere in the company recognizing, “This is weird. I should confirm.”
I use that example deliberately. Clearly one way to implement that control is training: educating the employees in HR that the CEO won’t ask for W-2 information, so don’t send it before calling to confirm. But the hackers could also impersonate the head of HR, and suddenly a request for W-2 information might not seem unusual. So the proper control there might be rooted in technology: a database with tight access controls, where anyone in HR who needs W-2 data must pull it from the system themselves rather than have it emailed around.
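As a rough illustration of that technology-rooted control, here is a sketch of what a tightly governed W-2 lookup might enforce: only named payroll roles can read the data, only one record at a time, and every request is logged. The roles and policy are hypothetical, not a description of any particular HR system.

```python
# Hypothetical access policy for W-2 data: named payroll roles only,
# one employee record per request, every attempt logged.
AUTHORIZED_ROLES = {"payroll_specialist", "payroll_manager"}
access_log = []

def fetch_w2(requester_id: str, requester_role: str, employee_id: str) -> dict:
    """Return a single employee's W-2 record, or refuse and log the attempt."""
    allowed = requester_role in AUTHORIZED_ROLES
    access_log.append({"requester": requester_id, "employee": employee_id, "allowed": allowed})
    if not allowed:
        raise PermissionError(f"role '{requester_role}' may not read W-2 data")
    # A real implementation would query the HR database for this one record;
    # a bulk "all employees" export simply has no code path here.
    return {"employee_id": employee_id, "w2": "..."}

# A spoofed "CEO" asking for W-2 data fails by design, no human judgment required.
try:
    fetch_w2("ceo-impersonator", "chief_executive", "E-2041")
except PermissionError as err:
    print("Request refused:", err)
```

The training control depends on an employee noticing something is weird; the technology control makes the weird request impossible to fulfill.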
My point is that the same cybersecurity risk (say, sharing personal employee data) will need different controls for different groups, because they will have different perceptions of whether a transaction is normal. That, in turn, drives up the importance of risk management functions (like compliance and audit) being able to define “normal” for different groups.
AI might not fit all that well with our W-2 example. Then again, consider a large organization that uses contractors or consultants extensively. You might want to ban employees from surfing AirBnB on the corporate network, but a visiting consultant would normally want to surf AirBnB to find a short-term place to stay. Scale up that example to tens of thousands of people, and suddenly AI or other automated governance of network access makes much more sense.
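To sketch how that might look in practice, here is a toy version of role-aware network governance. The site categories and per-role policies are hypothetical; at the scale of tens of thousands of users, a real system would learn each group’s “normal” traffic from behavior rather than rely on a hand-written table.

```python
# Hypothetical policy: what counts as "normal" web traffic differs by role.
NORMAL_CATEGORIES = {
    "employee":   {"business", "news", "corporate_travel"},
    "consultant": {"business", "news", "corporate_travel", "short_term_lodging"},
}

# Assumed site categorizations, for illustration only.
SITE_CATEGORIES = {
    "airbnb.com": "short_term_lodging",
    "reuters.com": "news",
}

def review_request(role: str, site: str) -> str:
    """Allow traffic that is normal for the role; flag everything else for review."""
    category = SITE_CATEGORIES.get(site, "uncategorized")
    if category in NORMAL_CATEGORIES.get(role, set()):
        return "allow"
    return "flag for review"

print(review_request("consultant", "airbnb.com"))  # allow
print(review_request("employee", "airbnb.com"))    # flag for review
```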
All of it, however, depends on your technology systems tapping into large pools of data; plus a clear understanding of standard business transactions within your enterprise; and policies that address how those transactions can happen in a compliant manner.