FinCEN Gives Advice on Deepfakes
FinCEN has published an alert warning financial firms about deepfakes and other AI-driven fraud schemes, along with several suggestions for how firms could improve their policies and procedures to spot fakes and stay on top of their suspicious activity reporting obligations.
FinCEN published its guidance on Wednesday. It has no particular force of law, but the six-page missive is clearly meant to help compliance officers keep their anti-money laundering (AML) compliance programs sharp. In other words, if you blissfully ignore advice like this and keep falling for AI-enhanced scams, you could end up with a nasty-gram from regulators next time they review your program.
And while FinCEN’s guidance is targeted toward financial firms, compliance officers and internal audit teams from any industry would benefit from giving it a read. Let’s remember that earlier this fall the Justice Department said it expects corporate compliance programs to stay cognizant of misconduct risks driven by artificial intelligence. The department expressly said prosecutors will consider whether a company is vulnerable to criminal schemes enabled by new technology, and whether it is improving its internal controls to keep pace with that risk.
The guidance begins with a quick review of the generative AI tools now widely available, and how those tools can be used to falsify the documents that banks have historically relied upon to verify a customer’s identity. For example, fraudsters have used gen AI to fabricate driver’s licenses, passport cards, and other forms of photo identification.
Sometimes criminals create the deepfakes by modifying an authentic image; sometimes they create an entirely new image from whole cloth. To make matters even more complicated, fraudsters have also combined those gen AI images with personally identifiable information (PII) that’s either stolen or entirely fake to create “synthetic identities” whose documentation seems all the more convincing.
Sniffing Out Deepfakes
So the fraudsters are using generative AI to make highly convincing fake documents. How can your customer due diligence program detect such documents?
Firms can start, FinCEN said, by conducting “re-reviews” of a customer’s account opening documents. For example, if you suspect a deepfake image, you could run reverse image searches or screen against other open-source research to see whether the image matches with known fakes. Firms can also use more sophisticated techniques such as examining an image’s metadata or using software designed to detect deepfakes or manipulated images.
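To make the metadata idea concrete, here’s a minimal Python sketch (using the Pillow imaging library, my choice rather than anything FinCEN endorses) that pulls EXIF tags from an uploaded photo. Genuine camera photos usually carry make, model, and timestamp tags; many generated images carry none, or a telltale “Software” tag. The heuristics here are illustrative assumptions, not a detection standard.

```python
# pip install Pillow
from PIL import Image
from PIL.ExifTags import TAGS

def inspect_image_metadata(path: str) -> list[str]:
    """Flag metadata anomalies in an uploaded ID photo.

    Heuristics only; absent EXIF data proves nothing by itself,
    and a determined fraudster can forge these tags.
    """
    flags = []
    with Image.open(path) as img:
        exif = img.getexif()
        if not exif:
            return ["no EXIF data at all (common in generated images)"]
        tags = {TAGS.get(tag_id, tag_id): value for tag_id, value in exif.items()}
        if "Make" not in tags and "Model" not in tags:
            flags.append("no camera make/model recorded")
        software = str(tags.get("Software", ""))
        if software:
            flags.append(f"image touched by software: {software!r}")
    return flags

# Any flags returned here would feed a re-review queue, not an auto-reject.
print(inspect_image_metadata("customer_id_photo.jpg"))
```

None of these checks is conclusive on its own; they simply tell an analyst which images deserve a closer look.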
OK, but those are all technology-driven solutions. Firms also need to train their due diligence analysts and other AML compliance staffers on certain red flags they should recognize. FinCEN offered a few examples:
- Inconsistencies among multiple identity documents submitted by the customer;
- The customer can’t satisfactorily authenticate their identity, source of income, or some other aspect of their profile; and
- Inconsistencies between the identity document and other elements of the customer’s profile (a sketch of that cross-check follows this list).
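To see what that third red flag looks like in practice, imagine comparing the fields your document-processing vendor extracts from the ID against what the customer typed into the application. The field names and exact-match logic below are hypothetical; a production system would use fuzzy matching and vetted reference data.

```python
from dataclasses import dataclass

@dataclass
class IdentityRecord:
    full_name: str
    date_of_birth: str   # ISO 8601, e.g. "1985-03-14"
    address: str

def normalize(value: str) -> str:
    # Crude normalization; real systems would use fuzzy matching.
    return " ".join(value.strip().lower().split())

def find_inconsistencies(document: IdentityRecord,
                         profile: IdentityRecord) -> list[str]:
    """Return the fields where the ID document and the customer's
    application profile disagree, one of FinCEN's stated red flags."""
    mismatches = []
    for field_name in ("full_name", "date_of_birth", "address"):
        if normalize(getattr(document, field_name)) != normalize(getattr(profile, field_name)):
            mismatches.append(field_name)
    return mismatches

doc = IdentityRecord("Jane Q. Public", "1985-03-14", "12 Ocean Dr, Miami FL")
app = IdentityRecord("Jane Public", "1985-03-14", "12 Ocean Dr, Miami FL")
print(find_inconsistencies(doc, app))  # ['full_name'] -> escalate for review
```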
All good advice, but let’s consider what else you need in place in your customer due diligence program for those ideas to work. For example, to identify inconsistencies between the identity document and other elements of the customer’s profile, you’ll need to collect those other elements from somewhere. Well, where? From what external providers? How are you validating the quality of that data?
The issue here is that you need to rely on more than the customer-provided information to build a customer risk profile. That’s nothing new in customer due diligence circles, but deepfakes are going to make the other parts of your customer risk profile more important, since that other data will help you confirm whether the document you’re staring at is legitimate.
FinCEN provided a few other red flags that due diligence teams should watch for, too (a rule-based monitoring sketch follows this list):
- Access to an account from an IP address (say, Bulgaria) inconsistent with the customer’s profile (such as a home address in Florida);
- Patterns of apparent coordinated activity among multiple similar accounts;
- High payment volumes to potentially higher-risk payees, such as gambling websites or digital asset exchanges;
- High volumes of chargebacks or rejected payments; and
- Patterns of rapid transactions by a newly opened account or an account with little prior transaction history.
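Here’s what a crude, rule-based version of that monitoring logic might look like. The thresholds and data structures are purely illustrative assumptions; real transaction monitoring systems are far more sophisticated, but the underlying questions are the same.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta

@dataclass
class Account:
    opened: datetime
    home_country: str
    # Each transaction: {"ts": datetime, "ip_country": str, "amount": float}
    transactions: list[dict] = field(default_factory=list)

# Hypothetical thresholds; every program tunes these to its own risk appetite.
NEW_ACCOUNT_WINDOW = timedelta(days=30)
RAPID_TX_LIMIT = 20  # max transactions in 24 hours before we flag

def red_flags(account: Account, now: datetime) -> list[str]:
    flags = []
    # Red flag: access from geography inconsistent with the customer profile.
    foreign = [t for t in account.transactions
               if t["ip_country"] != account.home_country]
    if foreign:
        flags.append(f"{len(foreign)} transactions from outside {account.home_country}")
    # Red flag: rapid activity on a newly opened account.
    if now - account.opened < NEW_ACCOUNT_WINDOW:
        recent = [t for t in account.transactions
                  if now - t["ts"] < timedelta(hours=24)]
        if len(recent) > RAPID_TX_LIMIT:
            flags.append(f"{len(recent)} transactions in 24 hours on a new account")
    return flags

acct = Account(opened=datetime(2024, 11, 1), home_country="US")
acct.transactions = [{"ts": datetime(2024, 11, 5, 12, m), "ip_country": "BG", "amount": 50.0}
                     for m in range(25)]
print(red_flags(acct, datetime(2024, 11, 5, 13, 0)))
```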
Again, all good advice; the question is whether your transaction monitoring systems can detect such activity and piece it all together quickly enough to intercept the fraudulent activity. Financial firms could use AI tools themselves to manage this challenge (there are plenty of AI vendors promising software that can do exactly that), but that means you need the executive will to make such an investment, followed by the rigmarole of implementing a solution that works for you.
Other Warning Signs
FinCEN also offered a few ideas for how to identify fraudsters in the account-opening process. For example, you could use multi-factor authentication, a wonderful failsafe that companies should use much more often. You could also use a “live verification check” where the customer must confirm his or her identity through a video call — because fraudsters will probably try to avoid such calls, which spawns another series of red flags you could watch for.
For example, the fraudster might claim to be experiencing repeated technical glitches, or ask to switch to some other form of communication. (That’s exactly how the cuckoos running Ozy Media tried to defraud bankers a few years back.) The fraudsters might also try to use third-party software that alters their appearance on a webcam; you could return fire with software that detects when such third-party plug-ins are in use.
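Back to the multi-factor authentication point for a moment: the basic plumbing is well within reach. Here’s a minimal sketch using the open-source pyotp library to provision and verify a time-based one-time password (TOTP); the secret storage and delivery channel are simplified assumptions, omitted for brevity.

```python
# pip install pyotp
import pyotp

# In practice the secret is generated once per customer at enrollment
# and stored server-side; shown inline here only for illustration.
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)

print("Provisioning URI for the customer's authenticator app:")
print(totp.provisioning_uri(name="jane@example.com", issuer_name="ExampleBank"))

# At login, verify the six-digit code the customer submits.
submitted_code = totp.now()  # simulating a correct code for this demo
print("MFA passed" if totp.verify(submitted_code) else "MFA failed; escalate")
```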
We should also remember that fraudsters can use voice-cloning technology to impersonate executives at your own business, pressuring an employee to make an overseas wire transfer or fall for some similar scam. You could fight that by having strict policies on approvals for wire transfers, right down to insisting on some sort of challenge question that the caller must answer correctly for the conversation to continue.
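One hypothetical way to implement that challenge question: store the answer as a salted hash and compare it in constant time, so nobody handling the approval ever sees the plaintext. The helper names and policy choices below are my assumptions, not anything FinCEN prescribes.

```python
import hashlib
import hmac
import os

def hash_answer(answer: str, salt: bytes) -> bytes:
    # Normalize so "Blue Heron" and "blue heron" both pass.
    normalized = " ".join(answer.lower().split()).encode()
    return hashlib.pbkdf2_hmac("sha256", normalized, salt, 100_000)

# Enrollment: the executive registers a challenge answer out of band.
salt = os.urandom(16)
stored = hash_answer("blue heron", salt)

def verify_wire_request(spoken_answer: str) -> bool:
    """Gate a wire-transfer approval on the pre-shared challenge answer.

    hmac.compare_digest resists timing attacks; a failed check should
    halt the transfer and trigger an out-of-band callback, not a retry.
    """
    return hmac.compare_digest(hash_answer(spoken_answer, salt), stored)

print(verify_wire_request("Blue Heron"))   # True: proceed with approvals
print(verify_wire_request("grey falcon"))  # False: stop and call back
```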
All in all, the FinCEN guidance is a good primer on the threat that deepfakes pose — foremost for AML compliance programs in the financial sector, but also for the corruption, sanctions, and fraud risks that all companies face. The real question is how quickly compliance and anti-fraud teams can invest in the right technologies, policies, and controls to fight AI-driven fraud, before the fraudsters get ahead of you.