Implementing Compliance for Robo-Advisers: 4 Points
Well, so much for automating your investment advisers out of a job and not worrying about that part of the business any more.
Last week FINRA published new guidance about robo-advisers, or in agency parlance, “digital investment advice”—the automated investment tools financial firms offer to customers so they can self-pilot stock portfolios, IRA accounts, and the like. These tools can be wonderful, as anyone who likes to play with Quicken or e-Trade can attest.
They can also be risky, especially if your firm acquires robo-adviser software from a vendor who is strong on technology but weak on fiduciary duty; or if a customer has special needs the robo-adviser doesn’t know how to process, and is consequently matched to a risk profile that isn’t right, or offered services he or she doesn’t actually need. Still, robo-advisers are only going to become more common among broker-dealers and investment advisers, so FINRA decided to examine firms’ use of them so far and issue some, ahem, helpful hints.
The good news for compliance officers pondering how to worry about robo-advisers is that the FINRA guidance hits key themes you already (should) know well from your days working with humans. What’s important?
- Getting an accurate assessment of a customer’s risk tolerance;
- Giving a clear disclosure of any conflicts of interest;
- Imposing proper supervision of the use of robo-advisers;
- Understanding the assumptions behind the models robo-advisers use.
Phrase the concepts that way, and they read perfectly well even if you delete the “robo” part. They drive at what FINRA and other securities regulators always want to see: thoughtful governance and investor protection, applied to whatever new technology the firm wants to use.
If that’s the goal, then compliance officers can start with four basic questions.
Do you understand what information should be collected from customers to meet FINRA’s concerns about investor protection, and how to collect it? FINRA found that most firms using robo-advisers have five to eight basic investor profiles, and attempt to match every customer to one of them. The key criteria are the customer’s age, income, assets, tax status, investment time horizon, and so forth. Nothing there should surprise you, and pre-existing FINRA rules already tell firms what they should be doing on this point.
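To make that concrete, here is a minimal sketch of the profile-matching idea in Python. The profile names, scoring weights, and cutoffs are invented for illustration; they are not drawn from FINRA’s guidance, and a real firm would calibrate and document its own.

```python
from dataclasses import dataclass

@dataclass
class Customer:
    age: int
    annual_income: float
    liquid_assets: float
    horizon_years: int  # investment time horizon

# Hypothetical profiles, ordered from most conservative to most aggressive.
PROFILES = ["capital_preservation", "conservative", "balanced",
            "growth", "aggressive_growth"]

def match_profile(c: Customer) -> str:
    """Map a customer to one of a handful of risk profiles.

    The scoring below is illustrative only.
    """
    score = 0
    score += 2 if c.age < 40 else (1 if c.age < 55 else 0)
    score += 2 if c.horizon_years > 15 else (1 if c.horizon_years > 5 else 0)
    score += 1 if c.liquid_assets > 10 * c.annual_income else 0
    # Clamp the score onto the profile list.
    return PROFILES[min(score, len(PROFILES) - 1)]

print(match_profile(Customer(age=35, annual_income=90_000,
                             liquid_assets=120_000, horizon_years=20)))
```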
The challenges will be designing robo-advisers so they can ferret out unusual situations (a customer’s investment objectives conflicting with his spouse’s, for example), and so they can monitor a customer’s changing needs over time. An experienced human adviser might have an instinct for asking better questions; poor coding might lead a robo-adviser to rebalance a customer’s portfolio simply based on his or her age. How can your robo-advisers anticipate those problems?
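One way to see the design challenge: the checks a human adviser performs by instinct have to be written down as explicit rules. A hedged sketch, with made-up rule names and thresholds:

```python
def suitability_red_flags(customer, spouse=None, proposed_action=None):
    """Return reasons a human adviser should review this account.

    These rules are illustrative, not FINRA requirements.
    """
    flags = []
    if spouse and customer["objective"] != spouse["objective"]:
        flags.append("customer and spouse have conflicting objectives")
    if proposed_action == "rebalance" and customer.get("rebalance_basis") == {"age"}:
        flags.append("rebalance driven by age alone; review other factors")
    if customer.get("profile_last_reviewed_days", 0) > 365:
        flags.append("risk profile is stale; re-collect customer information")
    return flags

flags = suitability_red_flags(
    {"objective": "growth", "rebalance_basis": {"age"},
     "profile_last_reviewed_days": 400},
    spouse={"objective": "income"},
    proposed_action="rebalance",
)
for f in flags:
    print("REVIEW:", f)
```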
Do you understand what needs to be disclosed about robo-advisers and conflicts of interest? Robo-advisers can reduce the potential for employee-versus-client conflicts of interest (COIs), yes, but they still allow for firm-versus-client COIs. This is especially true if your firm uses third parties to deliver products your customers purchase, or if the robo-adviser itself is operated by a third party. (And let’s be honest: a pop-up window on a computer screen disclosing a COI is much less effective than a human telling a customer, “You should be aware that…” Meaningful disclosure of COIs matters.)
Do you understand where the assumptions behind a robo-adviser’s algorithms come from? Perhaps you don’t need to know the original code guru who wrote the program, but remember that all algorithms are based on assumptions. If those assumptions are inaccurate at the start, they can magnify into large errors over time.
You might, for example, end up excluding a class of customers from investment opportunities they’re entitled to consider, or including customers who shouldn’t be there. And in today’s high-speed, high-automation world, you might not catch that error until after the damage has been done. That argues for more precise algorithms and more preventive controls.
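A toy calculation shows the magnification. Suppose the model assumes a 7 percent expected annual return when 5 percent is realistic; the figures are invented, but the compounding is just arithmetic:

```python
# Assumed vs. realistic annual return; figures are illustrative only.
assumed, realistic, years, principal = 0.07, 0.05, 20, 100_000

projected = principal * (1 + assumed) ** years
likely = principal * (1 + realistic) ** years
print(f"projected: ${projected:,.0f}  likely: ${likely:,.0f}  "
      f"gap: {projected / likely - 1:.0%}")
```

Over twenty years, a two-percentage-point error in the assumption grows into a projection that overstates the likely outcome by roughly 46 percent.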
How do the firm’s human advisers use the tools with customers? This may be the most important question, because it gets to how the firm supervises its use of robo-advisers. FINRA’s guidance states the point elegantly: “Developing an understanding of the algorithms a tool uses would also include understanding the circumstances in which their use may be inappropriate.”
Some firms may have employees use robo-advisers with customers, rather than let robo-advisers do the job entirely. Others may have humans intervene when certain red flags are raised that a robo-adviser can’t or shouldn’t answer alone. Whatever the circumstance at your firm, FINRA will want to see that you have a logic to your thinking and a priority on investor protection. It gets to issues of training, oversight, and even “human override” when the robo-adviser ceases to be the best tool for the investor to use.
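In practical terms, supervision means deciding in advance which situations get routed to a human. A minimal sketch of that “human override” gate, assuming hypothetical flag categories:

```python
from enum import Enum

class Route(Enum):
    AUTOMATED = "proceed with robo-adviser"
    HUMAN_REVIEW = "route to a licensed adviser"

# Hypothetical categories a firm might decide a robo-adviser
# can't or shouldn't answer alone.
ESCALATE = {"conflicting_objectives", "stale_profile", "concentrated_position"}

def route_request(flags: set[str]) -> Route:
    """Decide whether the robo-adviser may act or a human must step in."""
    return Route.HUMAN_REVIEW if flags & ESCALATE else Route.AUTOMATED

print(route_request({"stale_profile"}).value)  # route to a licensed adviser
print(route_request(set()).value)              # proceed with robo-adviser
```

The point is less the code than the artifact it produces: a documented, reviewable list of what escalates and why, which is exactly the logic FINRA will want to see.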
After all, not every computer program is smart enough to beat us at Jeopardy, Go, and financial analysis—at least, not yet.