The ‘Dual Crisis’ of AI-Driven Fraud Today
More glum news from the anti-fraud world: A new report says financial firms are getting hammered by rising levels of AI-enhanced fraud — but at the same time, consumers are embracing online privacy technologies that make anti-fraud efforts more difficult because firms can’t easily verify an online user’s identity.
So says Fingerprint, which on Tuesday released results from a survey of more than 300 anti-fraud managers in the United States. Granted, we should take this report’s findings with a grain of salt because Fingerprint sells software to address those identity verification challenges; but nonetheless, the findings echo what lots of internal audit and anti-fraud teams see every day, and are valuable food for thought as we try to develop anti-fraud processes that can keep pace with AI-driven threats.
Let’s start with what the Fingerprint report actually found. The key findings include:
- 99 percent of organizations report fraud losses from AI-enabled attacks in the past year, with an average of $414,000 per organization. One-third of respondents reported annual losses of up to $1 million.
- 44 percent said their teams now spend “significantly” more time on manual triage and investigation due to AI-driven attacks, and another 49 percent spend a moderate amount of time on manual effort. Only 7 percent said AI-driven attacks really haven’t affected their team workload.
- 27 percent said “privacy first technologies” (such as privacy-centric browsers or VPNs) severely undermine their fraud detection capabilities; and another 49 percent said privacy tools moderately undermine their efforts.
One can see the dilemma arising from this data.
First, artificial intelligence is allowing fraudsters to launch scams that are more convincing, and to launch them more easily; that means companies must do better at rooting out impostors through better identity verification processes. Consumers, however, are embracing more technologies that allow them to mask their identities, such as web browsers that block tracking cookies or VPNs that mask their true location when they visit a web page.
How are anti-fraud teams supposed to succeed in a world like that?
Tech Strategies for Effective Anti-Fraud
Another interesting finding was that traditional banks report a somewhat higher rate of AI-driven fraud attacks (54 percent) than fintechs (47 percent) do — but only 33 percent of banking respondents said they are considering AI-enhanced anti-fraud tools, compared to 52 percent of fintechs. That implies a technology gap between traditional banks and fintech firms: banks are suffering AI-driven frauds more often, but they’re investing in next-generation, AI-powered anti-fraud tools less.
Again, we should remember that the sponsor of this report sells such tools and has a commercial interest in leading us to this conclusion — but that doesn’t mean the conclusion itself is fundamentally wrong. On the contrary, it reminds us that businesses shackled to legacy IT systems will have a harder time fighting AI-enhanced fraud than “digitally native” businesses, which many fintechs are.
So clearly one line of inquiry for most internal audit or anti-fraud teams will be to assess just how much your legacy IT systems do or don’t curtail your efforts at identity verification. You’ll need to consult with IT managers about the limits of those legacy systems. For example, ask about their susceptibility to phishing attacks and where those systems can or can’t integrate multi-factor authentication.
The challenge here is to figure out the best anti-fraud processes and tools given your fraud risks (Are you a fintech heavily into crypto? Or a community bank heavily into savings accounts for local retirees?) and your technology infrastructure. That answer will vary from one company to the next, but the right answer will always depend on you having a close, productive relationship with your CISO and IT chief.
Guidance Galore
The good news is that regulators and purveyors of risk management frameworks have been thinking about fraud, and specifically AI-enhanced fraud, for several years now. Lots of guidance already exists to help you think about these issues more efficiently.
For example, COSO released a fraud risk management guide in 2016 that includes five fraud risk management principles every organization should follow. Sure, the guidance predates contemporary AI by several years, but the principles still work. For example, consider Principle 3:
The organization selects, develops, and deploys preventive and detective fraud control activities to mitigate the risk of fraud events occurring or not being detected in a timely manner.
That means you need to build preventive controls, such as use of multi-factor authentication or transaction analytics to detect when a customer starts behaving in unusual ways. It also means you need efficient investigation protocols to isolate a possible fraud and determine what’s going on. Both those points were true before AI amplified fraud risk, and both points are still true now.
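To make the transaction-analytics idea concrete, here is a minimal sketch of one common approach: flagging a transaction whose amount deviates sharply from that customer's own history. The function name, threshold, and minimum-history rule are all illustrative assumptions, not a prescription from COSO or the report.

```python
from statistics import mean, stdev

def is_unusual(amount: float, history: list[float], z_threshold: float = 3.0) -> bool:
    """Flag a transaction whose amount deviates sharply from the
    customer's own baseline (a simple z-score check).
    Threshold and minimum-history values are illustrative."""
    if len(history) < 10:
        # Too little history to judge; route to manual review.
        return True
    mu = mean(history)
    sigma = stdev(history)
    if sigma == 0:
        return amount != mu
    return abs(amount - mu) / sigma > z_threshold

# A customer who normally spends around $50 suddenly wires $5,000:
history = [48.0, 52.0, 47.5, 51.0, 49.0, 50.5, 46.0, 53.0, 50.0, 49.5]
print(is_unusual(5000.0, history))   # flags the outlier
print(is_unusual(51.0, history))     # normal activity passes
```

Real deployments would use richer features (payee, time of day, device fingerprint) and a tuned model rather than a single z-score, but the control principle — compare behavior to the customer's own baseline, then escalate — is the same.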
Or if you want more AI-specific guidance, FinCEN released guidance last year on how financial firms can fight deepfakes and other AI-enhanced frauds. That guidance included a series of red flags for possible fraud, including:
- Inconsistencies among multiple identity documents submitted by the customer;
- Inconsistencies between the identity document and other elements of the customer’s profile;
- Access to an account from an IP address (say, Bulgaria) inconsistent with the customer’s profile (such as a home address in Florida);
- Patterns of apparent coordinated activity among multiple similar accounts;
- High payment volumes to potentially higher-risk payees, such as gambling websites or digital asset exchanges; and
- Patterns of rapid transactions by a newly opened account or an account with little prior transaction history.
All good stuff; the question is how you design your anti-fraud controls to look for such red flags and then swing into investigation mode.
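As a thought experiment, two of the FinCEN red flags above — geographic mismatch and rapid activity on a new account — could be screened with rules as simple as the following. The field names, country codes, and thresholds are hypothetical assumptions for illustration only; an actual program would draw these values from your core banking and login-telemetry systems.

```python
def geo_mismatch(login_country: str, profile_country: str) -> bool:
    """Red flag: account accessed from a country inconsistent with
    the customer's profile (e.g., a Bulgarian IP on a Florida account)."""
    return login_country != profile_country

def rapid_new_account(account_age_days: int, tx_count: int,
                      max_tx: int = 5, window_days: int = 7) -> bool:
    """Red flag: a burst of transactions from a newly opened account.
    Thresholds here are illustrative, not regulatory values."""
    return account_age_days <= window_days and tx_count > max_tx

def screen(customer: dict) -> list[str]:
    """Return the list of red flags raised, to route into investigation."""
    flags = []
    if geo_mismatch(customer["login_country"], customer["profile_country"]):
        flags.append("geo_mismatch")
    if rapid_new_account(customer["account_age_days"], customer["tx_count"]):
        flags.append("rapid_new_account")
    return flags

suspect = {"login_country": "BG", "profile_country": "US",
           "account_age_days": 3, "tx_count": 12}
print(screen(suspect))   # → ['geo_mismatch', 'rapid_new_account']
```

The point isn't the code itself; it's that each red flag must be translated into a concrete, testable control fed by data you actually capture — which is exactly the conversation to have with the First Line and IT.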
That’s not a new challenge for audit and anti-fraud teams per se. You need to sit down with leaders in the First Line business units to understand what they want to do with customers, especially for tasks such as opening accounts or moving funds from one account to another. You need to tell them what your fraud concerns are and what regulatory obligations your firm needs to meet. You need to consult with IT about what controls (multi-factor authentication, single sign-on, analytics) are possible with the IT you have — especially as customers keep embracing privacy technologies that will make your identity verification efforts more challenging.
AI just lights a fire under your butt to do it all more urgently.