Google’s Perfect Privacy Breach
You gotta give Google credit. Only that outfit, with perhaps the most intellectually talented employees in Corporate America today, could come up with a data breach so maddening and thought-provoking for corporate compliance officers.
The breach was discovered last March. A glitch in the Google+ social media network had exposed the personal data of nearly 500,000 Google+ users to hundreds of third-party software developers who built apps to run on the Google+ network. Engineers believe the glitch had existed since 2015, and they can't rule out that more affected users will be discovered in the future.
As breaches go, what Google experienced was no disaster. The exposed data included names, email addresses, work histories, photos, and birth dates, but not credit card data, Social Security numbers, or other truly sensitive information that can ruin a person's identity in short order. And 500,000 users isn't a lot of people anymore.
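To see the class of bug in play here (an API handing profile fields to third-party apps even when the user had marked them private), consider a purely illustrative sketch. None of this is Google's code; the function, field, and visibility names are all hypothetical, and the point is only how thin the line can be between a correct endpoint and an exposing one.

```python
# Purely illustrative -- not Google's code. Names are hypothetical.
# It shows the general class of bug reported: an API that returns
# profile fields to a third-party app without honoring each field's
# visibility setting.

PROFILE = {
    "name":       {"value": "Jane Doe",         "visibility": "public"},
    "email":      {"value": "jane@example.com", "visibility": "private"},
    "birth_date": {"value": "1985-04-12",       "visibility": "private"},
}

def profile_for_app(profile: dict) -> dict:
    """Return only the fields the user has marked public."""
    return {
        field: data["value"]
        for field, data in profile.items()
        if data["visibility"] == "public"  # the check a buggy endpoint skips
    }

# A correct endpoint hands the app {'name': 'Jane Doe'}.
# Drop the visibility check and the app also receives the email address
# and birth date the user marked private -- the kind of exposure at issue.
print(profile_for_app(PROFILE))
```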
The controversy is that Google decided not to disclose the breach because it couldn't confirm that any user's data had actually been stolen. It could only determine that the data had been available to perhaps 430 third-party developers who might have seen it, even if a Google+ user had marked that data as private.
So in the strictest sense of breach disclosure laws, Google didn’t experience a breach at all, and therefore had no duty to disclose. An internal review committee decided the company should keep quiet, partly because at that time in March, we were all seething over Facebook sharing user data with Cambridge Analytica, and Google didn’t want the bad publicity. That decision went all the way to CEO Sundar Pichai, who agreed.
All this eventually came out, of course. The Wall Street Journal reported the event (what do we even call this? An exposure? A phantom breach?) earlier this week, just as Google announced it was planning to shut down Google+ and restrict data access for third-party developers anyway. Google then put out a statement and braced for impact.
So this whatever-it-is that Google experienced but didn’t report is perfect. It showcases all the ways compliance and regulatory practices around data breaches don’t quite work.
The Compliance Shortcomings
What intrigues me most is that Google couldn't determine whether any user data had actually been exfiltrated. Could Google identify the affected users accurately? Was there any evidence of misuse? Could an affected user or developer take any action in response? According to a statement Google gave to the Wall Street Journal, the answer to all three questions was "no." So, the reasoning went, with no way to quantify potential harm or to mount an effective response, Google had no duty to disclose.
That answer might pass legal muster under the language of breach disclosure laws. It might even be true: perhaps no Google+ users suffered any harm in this breach. But it also opens the door to more questions about how strong a company’s cybersecurity controls (both preventive and detective) actually are — and whether those controls are strong enough that consumers should accept a “we don’t know” answer when a company gives it.
For example, Google couldn’t determine which Google+ users were affected and what data may have been stolen, because it had a limited set of activity logs. Well, a consumer might think, why is that my problem? Why didn’t you maintain activity logs in sufficient detail to find those answers?
Because, a security auditor might reply, our security protocols only need to provide reasonable assurance against a breach; if we had to provide absolute assurance, we’d never be able to accomplish anything.
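To make that exchange concrete: "sufficient detail" here usually means an audit trail that records who read whose data, which fields, and when. Below is a minimal, hypothetical sketch (plain standard-library Python, with invented names, and no relation to Google's actual infrastructure) of the kind of structured access log that would let a company answer the three questions above.

```python
# Hypothetical sketch of a structured audit log for third-party data access.
# Nothing here reflects Google's actual systems; it only illustrates what
# "activity logs in sufficient detail" might look like in practice.
import json
import logging
from datetime import datetime, timezone

audit_log = logging.getLogger("audit")
audit_log.setLevel(logging.INFO)
audit_log.addHandler(logging.FileHandler("third_party_access.log"))

def record_profile_read(app_id: str, user_id: str, fields: list[str]) -> None:
    """Write one structured entry per third-party read of a user's profile."""
    audit_log.info(json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "app_id": app_id,    # which developer's app made the call
        "user_id": user_id,  # whose profile was read
        "fields": fields,    # exactly which fields were returned
    }))

# With entries like this retained long enough, "which users were affected,
# which data, and by whom?" becomes a log query instead of a shrug.
record_profile_read("app-1234", "user-5678", ["name", "email", "birth_date"])
```

Of course, retaining entries like these for every read, across every API, for years, is exactly the kind of cost that pushes companies toward the "reasonable assurance" line the auditor describes.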
Google has some ammunition on this point. Under a 2011 privacy settlement with the Federal Trade Commission, Google undergoes regular outside audits of its privacy practices. The most recent audit, conducted by Ernst & Young, covered April 2016 to April 2018, and Google passed it.
We don’t know whether Google disclosed its breach to E&Y. (Much of the audit report is redacted.) Regardless, the audit raises an important but difficult point: different groups have different perceptions about what “privacy,” “breach,” and “disclosure” should actually mean. And the loudest group in that conversation — consumers — happens to be the one with the least nuanced, most visceral views on the subject.
For example, imagine you lend me your car. I drive it to the local mall, and by accident I leave the car unlocked while I go shopping. When I return to the car, I discover that I left your car unsecured — but I don’t know whether anyone entered the car, rifled around, and perhaps lifted your home address from the registration.
If I told you about my mistake, what would you do with that information anyway? Move to a new house? Get a new car? Those responses are disproportionate to the risk, which probably is zero. Nobody can prove someone entered your car. So why bother telling you about the mistake at all?
Still, withholding that fact from you feels wrong, doesn’t it? Many people would interpret that silence as a betrayal of trust. That’s the predicament Google faces now. This is one instance where the difference between being right and doing right is enormous. Compliance officers and other business ethics leaders at your organization might want to contemplate that point, before something similar happens to you.
Meanwhile, in CISO Land…
We’d be remiss if we didn’t wonder what IT security executives think of this whole mess, too. You can’t do better than to check the opinion of Alex Stamos, former chief information security officer at Facebook.
Stamos compares data breaches to aviation safety: in the event of a plane crash or even a near-miss, all parties involved convene to work out how the accident came to pass, and then distribute updated policies and procedures to the whole industry. IT security, Stamos said on Twitter earlier this week, needs something similar.
This might require legislation to encourage more honesty about self-discovered flaws, narrowly averted privacy disasters, and breaches that were stopped before PII access mandates disclosure. We are only hearing about 10% of the action, which makes it impossible to learn. (5/6)
— Alex Stamos (@alexstamos) October 9, 2018
Stamos isn’t wrong. Aviation safety has improved tremendously over the last 20 years thanks to the industry’s open discussions of how accidents happen. Applying those same principles to IT security would be a great move.
That will only work, however, when people trust companies with their personal data. Even Google, in its discussion of the breach, starts by saying, “Users can grant access to their Profile data.” Users grant access to their data — as in, it belongs to them, not Google or anyone else.
That’s the ethical tension here that’s causing so much grief. I’m not sure how keeping quiet about a near-breach eases that tension, even when the law says you can.