A Telling Move by CFPB

Compliance officers who want to skate to where the puck is going to be, cast your eyes to the Consumer Financial Protection Bureau. Last week the agency warned companies to tread carefully when using artificial intelligence and other digital tracking to make employment decisions, and in doing so it dropped two telling hints about the future of corporate regulation.

First let’s review what the CFPB actually did. Last Thursday the agency published guidance warning companies that if they use “consumer reports” provided by third parties to assess their employees — including reports such as background dossiers or AI-driven scores about worker performance — then those companies must comply with rules promulgated under the Fair Credit Reporting Act. That includes obtaining worker consent, providing transparency about the data used in adverse decisions, and allowing workers to dispute inaccurate information in those reports. 

The CFPB has long had power under the Fair Credit Reporting Act to regulate credit reporting agencies that collect data and crunch numbers to analyze a person’s behavior, to help companies make decisions about whether to extend credit to consumers. Those same reports and technologies, however, can just as easily be used to help companies reach decisions about hiring, firing, or promoting employees. That’s the part that makes the CFPB uneasy.

“The kind of scoring and profiling we’ve long seen in credit markets is now creeping into employment and other aspects of our lives,” CFPB Director Rohit Chopra said when issuing the guidance. “Our action today makes clear that longstanding consumer protections apply to these new domains just as they do to traditional credit reports.”

So at the surface layer, we have a new headache for HR teams at all companies. They, along with the compliance and legal functions, need to review whether the company uses data from a third-party provider that might qualify as a “consumer report.” If it does, the company will need to design its hiring, promotion, and other personnel policies so that those requirements for disclosure, transparency, and error correction are built into the HR process.

That’s the obvious stuff. Look deeper, however, and you’ll see that this guidance from the CFPB is saying much more than that.

First, the AI Angle

What the CFPB is really driving at with this guidance is worker control over the data collected about them. The agency wants to give workers at least some power to resist intrusive data collection and automated, AI-driven decision-making about their fate.

The true issue here is automated decision-making. That’s what the CFPB is trying to regulate — and it’s what many other government agencies around the world are trying to regulate, too. If you want to ponder how governments might try to address the abuses of artificial intelligence, start here. 

For example, in Europe, subjecting a person to a decision based solely on automated processing, without that person’s consent or another narrow exception, violates Article 22 of the General Data Protection Regulation. So even as compliance officers and corporate lawyers wait for specific regulations stemming from the EU AI Act, adopted in 2024, let’s remember that improper automated decision-making is already a violation of the GDPR, regardless of whatever the EU AI Act might specify sometime in the future.

Here in the United States, the Biden Administration first addressed AI and automated decision-making with its Blueprint for an AI Bill of Rights, unveiled in 2022. That blueprint directed all government agencies to address AI according to five principles. Two of those principles were (a) protections against algorithmic discrimination; and (b) notice and explanation of when AI is being used “and why it contributes to outcomes that impact you.”

The CFPB’s guidance on consumer reports is a direct reflection of those two concerns. Its purpose is to help workers understand when AI might be used in a way that affects them, and to give them the ability to view and correct the data fueling the AI’s decision-making processes.

What might all this look like in practice? Consider this trucking-company example from the CFPB:

For example, the developer of a phone app that monitors a transportation worker’s driving activity and provides driving scores to companies for employment purposes could “assemble” or “evaluate” consumer information if the developer obtains or uses data from sources other than an employer receiving the report, including from other employer-customers or public data sources, to generate the scores.

So if the trucking company only used its own internally generated data to evaluate employees, that’s fine. But if the company used any external data (anything from GPS data on the employee’s phone to calls from people responding to those “Am I driving safely? Call this number” bumper stickers), then the company would need to get the employee’s consent to be tracked and give him or her a chance to review the data.
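To make that decision logic concrete, here’s a minimal Python sketch of the review an HR or compliance team might run: inventory the data sources feeding an employee-scoring pipeline, and if any of them come from outside the employer, treat the consent, disclosure, and dispute obligations as triggered. Everything here is hypothetical: the DataSource structure, the obligations list, and the review_scoring_pipeline function are illustrative assumptions, not anything the CFPB guidance prescribes.

```python
# Hypothetical sketch: flag when an employee-scoring pipeline draws on
# external data and therefore may trigger FCRA-style obligations.
# All names and categories here are illustrative assumptions, not terms
# taken from the CFPB guidance itself.
from dataclasses import dataclass

@dataclass
class DataSource:
    name: str
    internal: bool  # True if generated by the employer itself

# Obligations of the kind the CFPB guidance describes for "consumer reports":
OBLIGATIONS = [
    "obtain the worker's consent before procuring the report",
    "disclose the report's role in any adverse employment decision",
    "give the worker a way to dispute and correct inaccurate data",
]

def review_scoring_pipeline(sources: list[DataSource]) -> list[str]:
    """Return the compliance steps triggered by this pipeline's data sources."""
    external = [s.name for s in sources if not s.internal]
    if not external:
        return []  # purely internal data: the guidance's concern doesn't apply
    print(f"External sources detected: {', '.join(external)}")
    return OBLIGATIONS

# Example mirroring the CFPB's trucking scenario:
pipeline = [
    DataSource("company telematics logs", internal=True),
    DataSource("driver's phone GPS, via a vendor app", internal=False),
    DataSource("'Am I driving safely?' hotline reports", internal=False),
]
for step in review_scoring_pipeline(pipeline):
    print("-", step)
```

In real life the internal-versus-external question is a legal judgment, not a boolean flag, but the shape of the review is the same: trace the data lineage first, then attach the consent, disclosure, and dispute steps wherever outside data appears.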

And the CFPB is only one agency among many tackling AI. More will follow, and automated decision-making seems like a primary path they’ll tread.

Second, the Regulatory Angle

Some people might reasonably ask — can the CFPB actually do this? Isn’t the Consumer Financial Protection Bureau supposed to address, ya know, consumer protection from predatory financial practices? How did we get from there to labor issues that apply to all companies across the board? 

I’ll be the first to stress that I am not a lawyer, and I don’t know all the nuances of the Fair Credit Reporting Act. I’m not sure whether some conservative group could file a lawsuit to overturn this new CFPB rule.

Except, the CFPB didn’t adopt a new rule. It issued guidance — which is the other telling detail compliance officers need to ponder here.

By issuing guidance, the CFPB has sidestepped those possible lawsuits; there is no rule to challenge. The guidance simply states the enforcement authority the CFPB believes it has, and nobody can challenge that until the CFPB takes an actual enforcement action on those grounds.

Could a company facing that enforcement action challenge the legitimacy of the CFPB’s guidance? Sure, and if it draws a Trump judge, perhaps it would win. But that will require careful consideration from the legal department, to decide whether fighting is worth the cost. Usually it isn’t. Usually companies settle, and a precedent is then set for the rest of us.

This is regulation by enforcement, people. It’s an entirely predictable result of the Supreme Court rulings earlier this year that weakened agencies’ ability to issue new rules, to the point that in many instances it won’t be worth the bother. Just declare what you believe the scope of your enforcement authority is, bring enforcement actions, and let those cases become the standard that other companies (and their compliance teams) assume as “normal.” 

This is a stupid way to go about things, and some conservatives have complained about it for years. Hester Peirce, a Republican commissioner on the Securities and Exchange Commission, calls missives like this “regulatory dark matter,” and she does have a point. Everyone would be better served if regulatory agencies published formal rules for notice and comment before final adoption.

Here in the real world, however, the Supreme Court’s blissfully deluded rulings made that almost impossible. So we’re going to see more regulation by guidance and regulation by enforcement, with compliance professionals left to read tea leaves rather than concrete rules. 
