Boring Lessons on Cybersecurity Controls

Last week the Securities and Exchange Commission dinged Morgan Stanley $1 million for poor cybersecurity controls. The case is an excellent primer on policy management, compliance, and cybersecurity risks, so let’s take a look.

The case centers on Morgan Stanley Smith Barney, one of the bank’s subsidiaries, and a financial adviser there named Galen Marsh. From 2011 to 2014, Marsh illegally nosed around the firm’s computer systems and downloaded personal information from roughly 730,000 Smith Barney accounts. Marsh stored that data on a private computer server at home—and overseas hackers broke into that home system, stole the customer data, and offered it for sale online.

Marsh pleaded guilty to criminal charges last winter and was sentenced to three years’ probation, a $600,000 fine, and a five-year bar from the securities industry. The lessons for compliance officers, however, are in how Morgan Stanley let this knucklehead go undetected for years.

We can start with “the paper part” of this mishap—the Code of Conduct and policies Morgan Stanley had to address employee use of customer data—because that seems to be the only part that worked well. Namely, Morgan Stanley did mention data privacy in its Code of Conduct, and did have policies directing employees on how to handle customer data. Marsh joined Morgan Stanley Smith Barney in 2008 and didn’t begin his adventures in downloading until 2011, after more than three years on the job. We can assume that at some point, he encountered written material from the bank warning him, don’t do this.

The rest of Morgan Stanley’s compliance program is what failed. Controls were not configured properly, or didn’t exist at all. Audits weren’t performed. Monitoring wasn’t done. We can find failures across multiple components of the COSO internal control framework, and that’s the real lesson here: for all the compliance community’s talk about the importance of ethics and training, hard-nosed controls still matter immensely.

According to the SEC’s litigation release, Morgan Stanley had two in-house websites that employees could use to access customer data. In theory, those portals should have restricted Marsh to seeing customer data only for financial advisers he supported. In reality, those portals didn’t interface properly with Morgan Stanley’s database that managed employees’ data access privileges. So Marsh could enter ID numbers for branch locations and financial advisory groups at random, until he found matches that let him access customer data he should not have seen.
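The SEC’s release calls these missing checks “authorization modules.” In plain terms, the portal should have looked up the requesting employee’s entitlements before returning anything, rather than trusting whatever branch or group IDs the employee typed in. Here is a minimal sketch in Python of that idea; the function and data names are hypothetical, not Morgan Stanley’s actual systems:

```python
# Hypothetical entitlements store: which (branch, advisory group) pairs
# each employee actually supports. In Marsh's case, the portals never
# consulted a store like this correctly.
ENTITLEMENTS = {
    "emp-1001": {("BR-12", "FA-7")},  # illustrative IDs only
}

def fetch_customer_report(employee_id: str, branch_id: str, group_id: str) -> str:
    """Return customer data only if the employee is entitled to that branch/group."""
    allowed = ENTITLEMENTS.get(employee_id, set())
    if (branch_id, group_id) not in allowed:
        # Fail closed: an unrecognized or unentitled ID gets nothing,
        # instead of letting the caller guess IDs until one works.
        raise PermissionError(f"{employee_id} is not entitled to {branch_id}/{group_id}")
    return f"report for {branch_id}/{group_id}"  # placeholder for real data access
```

The key design point is that the check happens server-side against an authoritative entitlements record, so guessing IDs at random (as Marsh did) yields only denials.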

Then came the second control failure. Once Marsh had all that unauthorized data, he had to get it off Morgan Stanley’s IT systems. He set up a transfer from the bank’s databases to a website hosted on his personal computer server. (I already checked; the site no longer exists.) Morgan Stanley did have software to block access to some websites from its systems, but since Marsh’s destination was an “uncategorized” site, those filters didn’t work. In short order, Marsh had downloaded the data of 730,000 customers onto his home server.
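The gap here is a classic default-allow policy: the filter blocked known-bad categories and let everything else—including uncategorized personal servers—through. A default-deny policy fails closed instead. A small sketch of the difference, with hypothetical hosts and categories for illustration:

```python
# Illustrative category data only; real web filters pull this from a vendor feed.
BLOCKED_CATEGORIES = {"file-sharing", "malware", "webmail"}
SITE_CATEGORIES = {"dropbox.com": "file-sharing"}
ALLOWED_HOSTS = {"intranet.example.com"}  # hypothetical allow-list

def default_allow(host: str) -> bool:
    """Allow unless the site's category is explicitly blocked.
    An uncategorized site slips through -- the gap Marsh exploited."""
    category = SITE_CATEGORIES.get(host, "uncategorized")
    return category not in BLOCKED_CATEGORIES

def default_deny(host: str) -> bool:
    """Allow only hosts on an explicit allow-list; everything else fails closed."""
    return host in ALLOWED_HOSTS
```

Under the first policy an unknown personal server is reachable; under the second it is not. Default-deny is stricter to administer, but for systems holding customer data it closes exactly this hole.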

All this data was on Marsh’s computers by December 2014. Shortly after that, Morgan Stanley discovered the data for sale on various Internet sites, and matched the types of data posted online back to reports Marsh had pulled from the bank’s servers. The breach was discovered, the bank alerted authorities, and here we are today.

Remember the Boring Stuff

Unfortunately for Morgan Stanley, this case is a great example of how important mundane control and audit activities are. The punch in the SEC’s litigation release is this:

“[T]he authorization modules were ineffective in limiting access with respect to one report available through the FID Select Portal, and absent with respect to one of the reports available through the BIS Portal. Moreover, [Morgan Stanley] failed to conduct any auditing or testing of the authorization modules for the portals at any point since their creation at least 10 years ago.”

That’s a failure of effective control design—a particularly painful type of failure, since you need input from all three lines of defense to build a truly effective control, and none of that happened here. Did Morgan Stanley have any controls in place? Yes. Those controls simply had holes in them that Marsh found and exploited, and those holes should not have been there.

People will be quick to say the bank’s access controls failed, since Marsh gained access to data he shouldn’t have seen. Technically that’s true. But if you want to design effective controls from the start, then a more productive way to think about the task is to think about controls “at the border”—when one system touches another. Morgan Stanley’s database of customer data and database of employee access privileges didn’t interact correctly, and that was the root of the problem.

Granted, Morgan Stanley had other problems that didn’t help. At one point in Marsh’s tenure he was promoted from assistant to financial adviser, but his access rights to data didn’t change. That’s a point IT provisioning often overlooks: under the principle of least privilege, you get access only to the systems you need to do your job—and often, getting promoted means you should lose some privileges. (Think about it: does the CFO of a billion-dollar enterprise really need to generate invoices?)
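One way to make least privilege stick through promotions is role-based provisioning: on a role change, derive the new entitlement set from the role and revoke anything no longer in it, instead of stacking new grants on top of old ones. A minimal sketch, with hypothetical role and entitlement names:

```python
# Hypothetical role definitions; real systems would pull these from an
# identity-governance tool rather than a hard-coded dict.
ROLE_ENTITLEMENTS = {
    "assistant":         {"view_supported_accounts", "generate_invoices"},
    "financial_adviser": {"view_supported_accounts", "trade_entry"},
}

def reprovision(current: set, new_role: str) -> tuple[set, set]:
    """Compute grants and revocations so the employee ends up with exactly
    the new role's entitlements -- stale privileges don't accumulate."""
    target = ROLE_ENTITLEMENTS[new_role]
    grants = target - current       # new access the role requires
    revocations = current - target  # old access the role no longer justifies
    return grants, revocations
```

Run against Marsh’s scenario, a promotion from assistant to adviser would grant trade entry and revoke invoice generation automatically, rather than leaving the old rights in place indefinitely.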

And then there’s that business of nobody auditing the access controls around those data portals for 10 years. That’s sloppy.

One task Morgan Stanley did seem to handle well enough was breach disclosure: finding the stolen data itself, alerting authorities, and matching it back to Marsh. The lesson here, however, is that good breach disclosure is no substitute for good breach prevention. That’s a dry, boring, technical challenge—and in today’s world, yikes, you’d better be good at it.
