The Consumer Financial Protection Bureau has fined Bank of America $12 million for failing to supply accurate data about home mortgage applications, in a case that provides some fascinating lessons about the importance of data analytics and auditing your compliance program.
The CFPB announced its enforcement action on Tuesday, and at first glance the case seems like a bit of a snoozer: Bank of America violated the Home Mortgage Disclosure Act, a law on the books since 1975 that requires mortgage lenders to collect demographic data about home loan applicants and report that data to various federal agencies. Bank of America neither admitted nor denied the allegations, agreed to pay the fine, and received a cease-and-desist order. Zzzzzz, right?
Perhaps not. Dig into the details of the settlement order, and you can see how data analytics, auditing, and monitoring all play a crucial role in assuring compliance with a regulation like this. Given that so many other business sectors have similar obligations to collect and report lots of data to regulators, maybe this case isn’t so obscure after all.
First, the background. The Home Mortgage Disclosure Act requires mortgage lenders to ask each applicant for their race, ethnicity, and sex, but applicants are not required to answer. If they decline, the bank is supposed to record and report that the applicant didn’t supply the information.
Throughout the 2010s (the period covered by the CFPB order), Bank of America employed roughly 4,500 loan officers who received an average of 300,000 loan applications each year. That loan department operated in two groups: a centralized lending channel, directly managed from BofA’s headquarters in Charlotte; and a distributed channel, where loan officers were stationed at local branches across the country.
When applicants contacted either team by phone (and roughly 75 percent of all applications were taken by phone), the loan officer had to recite a script asking the applicant for his or her demographic information. If the applicant declined to say, the loan officer was supposed to record the application as “information not provided.”
Seems like a fairly straightforward process, right? What could go wrong?
‘Information Not Provided,’ Indeed
We can start with the obvious: that reciting the same script to home loan applicants over and over sounds boring as [expletive]. So a certain portion of BofA loan officers just didn’t bother doing it; they simply marked the application as “information not provided” and moved on to the next caller.
The intrigue is in how BofA discovered this bad habit.
In 2013 the bank performed an analysis of its distributed lending channel and found that those loan officers had an information-not-provided rate of roughly 13 percent, compared to only 10 percent at other large banks. In response, BofA created a monthly report to monitor the information-not-provided rate among those distributed-channel loan officers. Lo and behold, the rate dropped from 13 percent in 2013 to 6 percent by 2016.
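That monthly report is, at bottom, simple arithmetic over data the bank already had. Here’s a minimal sketch of how such a per-officer rate calculation might look; the data shape and function name are my own illustration, not BofA’s actual system:

```python
from collections import defaultdict

def info_not_provided_rates(applications):
    """Compute each loan officer's 'information not provided' rate.

    `applications` is an iterable of (officer_id, demographics_collected)
    pairs -- a hypothetical, simplified record of one application each.
    """
    totals = defaultdict(int)
    missing = defaultdict(int)
    for officer_id, collected in applications:
        totals[officer_id] += 1
        if not collected:
            missing[officer_id] += 1
    # Rate = applications marked "not provided" / total applications taken
    return {oid: missing[oid] / totals[oid] for oid in totals}

# Officer "A" collected data on 3 of 4 applications; "B" on none.
apps = [("A", True), ("A", True), ("A", True), ("A", False),
        ("B", False), ("B", False)]
rates = info_not_provided_rates(apps)
print(rates)  # {'A': 0.25, 'B': 1.0}
```

Nothing fancy; the point is that the raw material for the monthly report was already sitting in the bank’s loan files.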
That’s our first analytics lesson right there: analytics is how you put the saying “what gets measured gets managed” into practice, and you often have all the data you need right in front of you. You just need to know which data, when analyzed properly, will give you the insights you need to drive behavioral change.
Alas, Bank of America stopped its monitoring protocols that same year. The information-not-provided rate promptly began ticking back upward, from 6 percent in 2016 to an uncomfortably high 17 percent by 2020.
BofA undertook another review of its data collection practices. This time around, it found hundreds of loan officers whose information-not-provided rate was 100 percent for at least one quarter from 2016 to 2021; that is, those officers weren’t bothering to ask any applicants for the data at all.
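Spotting that pattern is a one-line query once the rates are broken out by officer and quarter. A sketch, with a data layout of my own invention:

```python
def officers_never_asking(quarterly_rates):
    """Return officers whose info-not-provided rate hit 100 percent
    in at least one quarter.

    `quarterly_rates` maps officer_id to {quarter_label: rate} --
    a hypothetical layout for illustration.
    """
    return sorted(oid for oid, by_quarter in quarterly_rates.items()
                  if any(rate == 1.0 for rate in by_quarter.values()))

rates = {
    "officer_1": {"2016Q1": 0.08, "2016Q2": 0.12},
    "officer_2": {"2016Q1": 1.0,  "2016Q2": 0.90},  # never asked in Q1
}
print(officers_never_asking(rates))  # ['officer_2']
```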
So Bank of America reinstated its old control, the monthly monitoring report, plus new training for loan officers on the importance of collecting the data properly. The information-not-provided rate quickly dropped back to its 2016 low, although some loan officers still marked applications as “information not provided” without ever actually asking.
And how did Bank of America know when its loan officers weren’t asking for the data? Because in 2021, the bank began recording the voice calls of the loan officers. It also implemented a new rule that any information-not-provided application had to be flagged as such. Together, those controls made it much easier to audit compliance with the data collection rule and to find those recalcitrant loan officers.
That’s our second lesson about effective analytics: it works best when running nonstop, so it can act as a monitoring mechanism. When you turn it off, bad habits can return.
We should also appreciate how internal controls, auditing, and analytics all supported each other here (by the end of things, at least). Bank of America used analytics to uncover non-compliant behavior. It then implemented new controls to enforce higher standards, including one that made compliance more auditable (requiring all information-not-provided applications to be flagged as such), so that future analytics could be even more effective.
That’s how it’s supposed to work.
Making Analytics Work
Now let’s connect all this to the Justice Department’s calls for companies to use data analytics in their compliance programs.
First, contrary to what people might assume, the department’s guidance on effective corporate compliance programs does not say much about data analytics; the word “analytics” doesn’t appear anywhere in the guidance at all. Instead, the guidance only stresses the importance of compliance officers having access to whatever data they need.
We all obsess over data analytics because Justice Department officials mention data analytics elsewhere, such as in speeches and enforcement orders. For example, in the Albemarle FCPA settlement announced in September, the company won credit for, among other things, “using data analytics to monitor and measure its compliance program’s effectiveness.”
That last part is where Bank of America both did and did not go astray. It sometimes used data analytics to identify compliance shortcomings, as it did in the mid-2010s; but it didn’t use analytics continuously, and after it ceased monitoring in 2016 the number of information-not-provided applications crept back to unacceptably high levels.
Analytics isn’t something compliance officers should use only at specific points in time, to assess compliance behavior at that moment. Ideally, analytics should run at all times, to monitor compliance behavior and identify when that behavior drifts into a red zone. Then you can address that problematic behavior with new training, policies, approvals, or other controls.
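In code, that “red zone” idea can be as simple as a threshold check run every reporting period. A sketch, using the 10 percent peer-bank rate cited in this case as an illustrative threshold (it is not a regulatory line):

```python
def red_zone_months(monthly_rates, threshold=0.10):
    """Flag months where the info-not-provided rate exceeds a threshold.

    `monthly_rates` maps a month label to that month's rate; the 10
    percent default is illustrative only.
    """
    return [month for month, rate in sorted(monthly_rates.items())
            if rate > threshold]

history = {"2016-01": 0.06, "2019-06": 0.12, "2020-03": 0.17}
print(red_zone_months(history))  # ['2019-06', '2020-03']
```

Hook a check like this to the monthly reporting cycle and the compliance team learns about drift in weeks, not years.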
In other words, effective data analytics gives your compliance program true risk management capabilities. You can help management avoid problems, rather than simply being a cost center that documents compliance-related problems after they already exist.
By the way, if you want to talk more about data analytics, it so happens that I’ll be moderating a webinar on the subject on Dec. 7. Sponsored by Ethico, free to attend, with some great speakers lined up to offer their insights. Register today and join us next week!