Twitter, Part II: Security Control Failures
Today we return to that whistleblower complaint against Twitter announced to the world last week. The complaint contained all sorts of allegations about poor cybersecurity and privacy governance. So what were those allegations, exactly, and what lessons can other compliance and audit professionals learn here?
As you might recall from our previous post, the whistleblower is Peiter Zatko, Twitter’s former head of security and a legendary voice in the cybersecurity world who goes by the nickname “Mudge.” Mudge ran cybersecurity for Twitter from November 2020 until he was fired in January of this year. He subsequently filed an 84-page whistleblower complaint in July with the Securities and Exchange Commission, the Justice Department, the Federal Trade Commission, and Congress.
One good place to start our analysis: Page 19 of the complaint, where Mudge says that numerous internal control weaknesses have left Twitter out of compliance with a Federal Trade Commission consent decree for years.
The consent decree itself traces back to 2011, when Twitter settled charges that it had failed to protect users’ personally identifiable information in the late 2000s. Specifically, Twitter had given far too many employees administrator control (“God mode,” the company called it) over user data — so that if an attacker somehow obtained employee log-in credentials, he or she would have unfettered access to all that data. The consent decree directed Twitter to establish and maintain a comprehensive information security program, which would be assessed by an independent auditor every other year for 10 years.
According to Mudge, that comprehensive information security program never took flight. Example: in 2020, a Florida teenager and several of his friends hacked into the Twitter accounts of Barack Obama, Joe Biden, Jeff Bezos, Elon Musk, and dozens of other high-profile users. How did the attack happen? We’ll let Mudge speak for himself:
In fact, it was pretty simple: Pretending to be Twitter IT support, the teenage hackers simply called some Twitter employees and asked them for their passwords. A few employees were duped and complied and — given systemic flaws in Twitter’s access controls — those credentials were enough to achieve God Mode, where the teenagers could imposter-tweet from any account they wanted.
Twitter’s solution to this attack, Mudge says, was to impose a system-wide access shutdown for all employees that lasted for several days.
Then came a second FTC headache. In July 2020 the agency drafted another complaint against Twitter, accusing the company of violating that 2011 consent decree. From 2013 to 2019, the FTC said, Twitter misused users’ phone numbers and email addresses for marketing purposes, even when those users had provided their data for security purposes only. In May of this year Twitter settled that case with a $150 million fine.
At the time of the hack and the FTC’s second complaint, Mudge says, Twitter had no CISO or comparable senior-level executive in charge of cybersecurity. Only after those incidents did then-CEO Jack Dorsey hire someone to fill that role: Mudge himself.
So what can compliance and audit executives learn here?
Numerous Internal Control Failures
First, we should stress that Mudge’s allegations against Twitter are still only allegations. The company has offered no formal rebuttal to them, and we don’t yet know whether the picture he paints is accurate.
At an abstract level, however, we can still ask: What sort of internal control failures would lead to the mess Mudge describes?
First, a failure to define appropriate roles and responsibilities. According to Mudge, the company had no CISO during those critical times in the late 2010s and into 2020, until he was hired to fill the role. Even then, his complaint says CEO Parag Agrawal and other senior executives meddled in his attempts to assess Twitter’s security shortcomings and discuss those issues with the board.
This is all the more relevant because the SEC is poised to adopt rules later this fall for expanded disclosure of cybersecurity risks. Among the proposed requirements: that companies disclose who their CISO is and how that person reports to the board (if at all); and whether the company has any in-house committee that measures and manages cybersecurity risk.
If those rules come to pass, an arrangement like Twitter’s lack of a CISO would be a glaring red flag to investors. And if a company did say its CISO worked closely with the board, and then faced allegations like Mudge’s, that would leave the company terribly exposed to an SEC enforcement claim.
Second, a failure to map and control sensitive personal data. Go back to that allegation about Twitter taking personally identifiable information users had provided for security purposes only, and using that PII for marketing purposes. That means Twitter had insufficient policies, procedures, and controls to identify the PII in its possession and segregate that data according to users’ consent wishes.
These are failures of data classification (“the PII from these users can be used for marketing; the PII from those other users cannot”) and recordkeeping, because you’d need reliable records showing which users actually consented to their PII being used for advertising.
Compliance and IT teams could work together to develop the necessary processes for data classification and consent, and this certainly seems like one of those daunting challenges where a GRC tool would help.
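To make that idea concrete, here is a minimal sketch in Python of what consent-aware data classification might look like. The names (ConsentScope, PiiRecord, usable_for_marketing) are hypothetical illustrations, not anything from Twitter’s actual systems or from the complaint; the point is simply that every piece of PII should carry the purposes its owner consented to, and downstream systems should check those purposes before touching the data.

```python
from dataclasses import dataclass
from enum import Enum, auto


class ConsentScope(Enum):
    """Purposes a user may have consented to (hypothetical taxonomy)."""
    SECURITY_ONLY = auto()   # e.g., a phone number given for two-factor auth
    MARKETING = auto()       # the user opted in to promotional use


@dataclass
class PiiRecord:
    """One piece of PII, tagged with the consent scopes attached to it."""
    user_id: str
    field_name: str          # e.g., "phone_number" or "email"
    value: str
    consent_scopes: set[ConsentScope]


def usable_for_marketing(record: PiiRecord) -> bool:
    """Marketing systems may touch a record only if the user consented."""
    return ConsentScope.MARKETING in record.consent_scopes


# A phone number collected strictly for account security:
mfa_phone = PiiRecord(
    user_id="u-1001",
    field_name="phone_number",
    value="+1-555-0100",
    consent_scopes={ConsentScope.SECURITY_ONLY},
)

assert not usable_for_marketing(mfa_phone)  # marketing pipeline must skip it
```

In a real enterprise this logic would live in a GRC or data governance platform rather than in application code, but the control objective is the same: no consent record, no marketing use.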
Also, however: Wasn’t an outside auditor assessing Twitter’s security controls during this period, per that first FTC settlement? Why did nobody catch this issue or ensure that it was resolved in a timely manner?
Third, a failure to govern employee access rights. Now go back to the allegation about far too many employees operating in “God mode.” Such a loose approach to data access is a disaster waiting to happen, and happen it did: that 2020 attack hijacked the accounts of high-profile users such as Obama, Biden, and Bezos.
The appropriate control here is a process to define employee roles, responsibilities, and data access needs clearly, following the Principle of Least Privilege as you do (employees get access only to the data necessary for their jobs, and no more). Then you need strong provisioning and de-provisioning controls and procedures, to ensure that access rights keep pace as employees move through various roles in the enterprise, including no rights at all once they leave.
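Here, too, a small sketch may help. The following Python is a hypothetical illustration of role-based access under least privilege (the roles, permission names, and AccessRegistry class are all invented for this example): access flows only from an employee’s current role, provisioning changes that role, and de-provisioning removes everything at once.

```python
from enum import Enum, auto


class Role(Enum):
    SUPPORT_AGENT = auto()
    ENGINEER = auto()
    ADMIN = auto()


# Each role maps to the minimum permissions that job requires: the
# Principle of Least Privilege. (Permission names are hypothetical.)
ROLE_PERMISSIONS: dict[Role, frozenset[str]] = {
    Role.SUPPORT_AGENT: frozenset({"view_account_metadata"}),
    Role.ENGINEER: frozenset({"view_logs", "deploy_code"}),
    Role.ADMIN: frozenset({
        "view_account_metadata", "view_logs", "deploy_code",
        "modify_user_data",
    }),
}


class AccessRegistry:
    """Tracks which role each employee holds; access flows only from roles."""

    def __init__(self) -> None:
        self._assignments: dict[str, Role] = {}

    def provision(self, employee_id: str, role: Role) -> None:
        """Grant or change access when someone joins or moves roles."""
        self._assignments[employee_id] = role

    def deprovision(self, employee_id: str) -> None:
        """Revoke all access the moment someone leaves."""
        self._assignments.pop(employee_id, None)

    def permissions(self, employee_id: str) -> frozenset[str]:
        role = self._assignments.get(employee_id)
        return ROLE_PERMISSIONS[role] if role else frozenset()


registry = AccessRegistry()
registry.provision("emp-42", Role.SUPPORT_AGENT)

# A support agent never silently accumulates "God mode" rights...
assert "modify_user_data" not in registry.permissions("emp-42")

# ...and departure means zero access, immediately.
registry.deprovision("emp-42")
assert registry.permissions("emp-42") == frozenset()
```

The design choice worth noting: permissions attach to roles, not to individuals. That makes “God mode” an explicit, auditable grant rather than something employees accumulate by accident.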
Again, this is a meat-and-potatoes issue for cybersecurity that could be audited at any large organization; so one wonders whether any security audits at Twitter did flag this issue, and why it wasn’t resolved in a more timely fashion.
From Numerous to Compounding Failures
Compliance and audit professionals should also notice how smaller control failures can compound into a single, far worse disaster.
For example, go back to that 2020 attack that hijacked high-profile accounts. First we had poor access controls and cybersecurity training, which led Twitter employees to fall for a phishing attack from the Florida teen. That allowed the attackers into the network, posing as employees.
Then, because Twitter also had poor governance over employees’ data access, those attackers quickly achieved God mode. That allowed them to take over the high-profile accounts. In that specific instance, the attackers merely used those accounts to tell people to send bitcoin payments to anonymous accounts. But they could have used the hijacked accounts for far worse purposes, such as tanking the stock market or inciting violence. (Does anyone doubt that if an attacker took over Donald Trump’s account and called for violence in the streets, his worshippers would do exactly that?)
That’s the deeper issue here: numerous small control failures can be exploited strategically to cause disaster, either for the company itself or for innocent parties. Hence the urgency of getting cybersecurity right, and doing so from the start.