When U.S. government officials discover a new vulnerability they can use to hack into people’s computers, they have a decision to make: Should they keep it to themselves? Or should they warn the world?
Exactly how they make that decision is a mystery.
Now, two top former White House cybersecurity officials are recommending in a report that the administration be more transparent about how it deals with those vulnerabilities when it discovers them or buys tools to exploit them from the private sector.
“The principles guiding these decisions, as well as a high-level map of the process that will be used to make such decisions, can and should be public,” wrote Ari Schwartz and Robert Knake in a new report for Harvard’s Belfer Center for Science and International Affairs.
Members of the intelligence community have an obvious incentive to hold on to undiscovered cyber flaws so they can keep using them to hack their targets. But failing to tell a company about a flaw in its product — so it can be fixed — puts users at risk from other hackers.
The White House’s continued refusal to explain how it balances the priorities of intelligence versus cybersecurity for Americans is leading to a lack of public trust, the authors suggest.
In 2015, White House officials grudgingly released heavily redacted guidelines for disclosing cyber vulnerabilities, which they call the Vulnerabilities Equities Process, to the Electronic Frontier Foundation. They also issued a vague White House blog post.
But as the public becomes more aware of the government’s ability to go on the technological offensive — hacking against adversaries — consumer advocates are asking how that capability is regulated.
The FBI’s very public battle with Apple earlier this year ended when the bureau bought a software vulnerability that gave it access to the San Bernardino killer’s iPhone. But the bureau didn’t disclose any details about the vulnerability, leading to questions about whether the government’s process is really weighted towards disclosure, as officials have insisted in the past.
At a privacy conference in April, I asked FBI general counsel Jim Baker whether the Vulnerabilities Equities Process covers software exploits the government doesn’t “discover” itself, but purchases. The redacted description of the Vulnerabilities Equities Process says that vulnerabilities “identified” through government-sponsored research or purchased by the government through a third party “need not be put through the process.”
That’s a pretty big red flag suggesting anything the government buys won’t be disclosed.
“It’s a legitimate thing to ask about,” Baker said. “Maybe we need to do a better job of articulating that to the public, especially in light of this current discussion. Let me take that back.”
I later asked his office for an answer, but was directed to the White House. Mark Stroh, deputy spokesperson for the National Security Council, wrote me that he would “decline comment on the specifics of any alleged internal documents.”
In the Harvard report, Schwartz and Knake include specific recommendations like prohibiting the government from signing a nondisclosure agreement when it purchases an exploit from a third party.
They also recommend President Obama issue an executive order “to formalize” the process. Right now, the Vulnerabilities Equities Process is more of a general set of guidelines without much legal weight.
They suggest the government should make the process public, and should provide a system for oversight and review, as well as produce an annual report.
They also questioned the role of the NSA in decision making, because the inherent conflict between its two missions — to protect cybersecurity and gather intelligence — “throws into question whether [it] can serve as a neutral manager of the process.”