
Vulnerabilities, the Search for Buried Treasure, and the US Government

This OODA Network Member Only content has been unlocked for unrestricted viewing by The Future Proof Conference through the OODA Unlocked program which lets community members promote thought leadership to a broader global audience.

Most weeks, it is far outside the normal job responsibilities for cybersecurity professionals to understand what the United States (or other governments) do to find or use computer vulnerabilities. Just stay patched and keep the board of directors happy. This is not one of those weeks.

This week we learned that the National Security Agency disclosed to Microsoft that it had discovered a major vulnerability (dubbed CVE-2020-0601) in Windows 10. A Washington Post article, by veteran cyber journalist Ellen Nakashima, declared this to be “a major shift in the NSA’s approach, choosing to put computer security ahead of building up its arsenal of hacking tools that allow the agency to spy on adversaries’ networks.”

This unique story puts the spotlight on vulnerabilities and the U.S. government’s process for determining whether to disclose or retain them.

But before you start reading, go patch CVE-2020-0601. We’ll wait…

Vulnerabilities as Buried Treasure

Okay, now that you’re patched, keep in mind this is nothing new. Computer software and hardware have always had vulnerabilities, and some of these can critically undermine the security of the system. As far back as 1970, the Ware Report described “the major types of vulnerabilities” as “accidental disclosure,” “deliberate penetration,” and “physical attack.” Vendors, governments, well-intentioned security researchers, and criminal hackers all try to find these vulnerabilities first, for different reasons.

Hackers, whether they are soldiers or spies or criminals, rarely “create” new vulnerabilities. Rather, with clever techniques, special software, and their own wits and intuition, they sniff out vulnerabilities that already exist in the hidden and complex depths of the IT systems surrounding us. Modern software and hardware are complex, and even the most sophisticated vendors cannot find and fix all of the bugs before they start selling their products. New vulnerabilities, in the thousands and millions, are being created every day.

Beyond this generality, no one knows exactly how many vulnerabilities there are, or even whether they are, as Bruce Schneier has written, somewhat sparse or incredibly dense. Cybersecurity expert Dan Geer summarized Schneier’s conundrum this way:

Are vulnerabilities in software dense or sparse?  If they are sparse, then every one you find and fix meaningfully lowers the number of avenues of attack that are extant. If they are dense, then finding and fixing one more is essentially irrelevant to security and a waste of the resources spent finding it. Six-take-away-one is a 15% improvement.  Six-thousand-take-away-one has no detectable value.

Though there is no definitive answer on whether bugs are dense or sparse – and we will return to this topic shortly – there are definitely clues about where they might be found.

To find vulnerabilities, governments, security researchers, and hackers must become cyber treasure hunters, digital gold miners. They must experiment, throwing different and unexpected inputs at the system to see how they might get it to panic and give them access. They push and pull the code, throwing it into novel situations the programmer never anticipated, all to get the system to deliver the treasure: a vulnerability that will allow them to gain control of the system. These are the most valuable of all bugs, known as “zero-day vulnerabilities.” Because the vendor doesn’t know of their existence, it cannot fix them. So anyone holding zero-day vulnerabilities has a very potent, but very secret, capability. Every use of an exploit depending on a zero-day vulnerability increases the chances it will be detected by defenders. Once discovered, the vulnerability may get quickly patched, spoiling its value.
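The experimentation described above – hammering a system with malformed or unexpected inputs until it misbehaves – is, in practice, fuzzing. Here is a minimal sketch in Python; the “target” parser and its planted length-prefix bug are hypothetical illustrations for this article, not any real product:

```python
import random

def fragile_parser(data: bytes) -> bytes:
    """Toy target with a planted bug: it trusts a length prefix."""
    if not data:
        raise ValueError("empty input")  # graceful, expected rejection
    declared_len = data[0]
    out = bytearray()
    for i in range(declared_len):
        out.append(data[1 + i])  # IndexError if the prefix overstates the payload
    return bytes(out)

def mutate(seed: bytes, rng: random.Random) -> bytes:
    """Flip, insert, or delete a random byte -- the 'unexpected inputs'."""
    data = bytearray(seed)
    choice = rng.randrange(3)
    pos = rng.randrange(len(data))
    if choice == 0:
        data[pos] ^= 1 << rng.randrange(8)   # flip one bit
    elif choice == 1:
        data.insert(pos, rng.randrange(256))  # insert a random byte
    elif len(data) > 1:
        del data[pos]                         # drop a byte
    return bytes(data)

def fuzz(seed: bytes, iterations: int = 10_000, rng_seed: int = 0) -> list:
    """Return inputs that made the target fail in an unplanned way."""
    rng = random.Random(rng_seed)
    crashers = []
    for _ in range(iterations):
        candidate = mutate(seed, rng)
        try:
            fragile_parser(candidate)
        except ValueError:
            pass  # the parser rejected it cleanly; not interesting
        except IndexError:
            crashers.append(candidate)  # the 'treasure': a crashing input
    return crashers

if __name__ == "__main__":
    found = fuzz(b"\x04abcd")
    print(f"found {len(found)} crashing inputs")
```

Real fuzzers (AFL, libFuzzer, and the like) add coverage feedback and smarter mutation strategies, but the core loop – mutate, run, watch for crashes – is exactly this.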

What they do with the treasure once they’ve gotten it is up to them. For some it is an easy decision: Criminal hackers will write an exploit to “weaponize” the vulnerability and use it for profit, while vendors will patch the software bug before it gets used. Security researchers might sell the vulnerability to specialist providers, such as Zerodium, the best known “exploit acquisition platform,” or submit it to the software vendor’s “bug bounty” program, which can offer major payouts to hackers so that vendors get the opportunity to fix the bug. Hackers may also keep the vulnerability for themselves, to strengthen the “red team” services they provide to banks and other companies to improve their defenses.

Governments face a tougher choice and are often conflicted when they find vulnerabilities. The more defensive-minded agencies want to disclose the vulnerability to vendors, so it can be patched to improve the nation’s cybersecurity. In the United States, the Departments of Treasury, Commerce, and Homeland Security might take this position. Others focused more on national security, such as Defense or the intelligence community, are more likely to want to retain the vulnerability for intelligence or military purposes.

The US Vulnerability Equities Program

The US government has used vulnerabilities and their associated exploits offensively since at least the 1990s, but until 2010, there was no process for sharing this knowledge between agencies, or for working out the various equities between offensive and defensive mandates.

During the early 2000s, the NSA developed a strong internal process based on intelligence gain and loss tradeoffs, but there was no formal external or government-wide involvement. The decision to retain or disclose a particular vulnerability lay directly with the director of the NSA. In the earlier days of signals intelligence, US adversaries like the Soviet Union used their own communications technologies, so there were no significant trade-offs involved. Breaking them for intelligence purposes did not put US companies or infrastructure at risk. That all changed with the internet and advent of a (mostly) borderless cyberspace with everyone using similar or identical technologies. A new process was needed.

The modern process began in 2008 when President George W. Bush ordered, in the Comprehensive National Cybersecurity Initiative, that the US government develop a “joint plan” for dealing with offensive cyber capabilities and specifically called for a “Vulnerabilities Equities Process.” This led in 2010 to the promulgation of a formal policy by the Office of the Director of National Intelligence.

The VEP was reinvigorated in 2014 by the administration of President Barack Obama. Admiral Mike Rogers, head of both NSA and US Cyber Command, testified that year that the default position of the process was to “disclose vulnerabilities in products and systems used by the US and its allies,” and in fact the “NSA has always employed this principle.” That is, the process was always supposed to prioritize defense rather than keeping more offensive tools in the government arsenal. Despite the hawkish cyber position of the Trump administration – and its secretive cyber war policy, NSPM-13 – its VEP policy has been even more clear about giving priority to defense over offense:

The primary focus of this policy is to prioritize the public’s interest in cybersecurity and to protect core Internet infrastructure, information systems, critical infrastructure systems, and the U.S. economy through the disclosure of vulnerabilities discovered by the USG, absent a demonstrable, overriding interest in the use of the vulnerability for lawful intelligence, law enforcement, or national security purposes.

NSA revealing this vulnerability to Microsoft then seems less a “major shift” than a continuation of the policy of the last decade. Perhaps if NSA hasn’t been sharing much lately, and its actions drifted away from the stated policy, this disclosure might better be called a “doubling down.”

Students under my supervision at Columbia University’s School of International and Public Affairs conducted research on the VEP. The 2016 report from that effort fully documents the VEP process, how it changed over time, and the likely number of total zero-day vulnerabilities in the US government arsenal (spoiler: fewer than you probably think). But that report missed two key points. It did not fully discuss the policy goals of the VEP, nor did it address the implications of whether bugs are dense or sparse.

Assessing the VEP

The VEP balances four major priorities. The first and most obvious goal is to improve US cybersecurity by disclosing major zero-day vulnerabilities to vendors so they can be fixed. The default of the US VEP (along with the newer process in the United Kingdom) is to disclose. Second, the VEP is a risk-management process, to review all major vulnerabilities so a small number can be used by America’s spies and soldiers with the least negative impact on US cybersecurity.

The VEP, third, provides trust inside the US government through interagency coordination, so that departments other than the Department of Defense and intelligence agencies feel they have a voice in the decision. For example, the Departments of Treasury and Homeland Security typically want to patch as many vulnerabilities as possible. The VEP allows this critical interagency coordination. Indeed, the VEP is one of the few policy levers the White House can adjust to directly favor cyber offense or defense.

Lastly, the VEP enables external trust. Other nations and the IT companies themselves need to know the US government is being adult about vulnerabilities, that there is a process for making decisions about vulnerabilities in the technologies that underpin our society and economy. The VEP provides some assurance.

The first two priorities of the VEP are the most important, as they are at the center of the tug-of-war between those wanting to disclose and those pushing to retain. If the same bugs are routinely found by others—known as a high “collision rate”—then each bug disclosed to a vendor is one taken directly out of some adversary’s arsenal. It would be perhaps the sole example in warfare where disarming yourself also disarms your foes, as Harvard international relations scholar Joe Nye has suggested.

But if there are so many bugs that most are never discovered by anyone else – that is, if there is a low collision rate – then disclosing vulnerabilities to vendors actually adds little extra defense. Adversaries are likely to have their own set of zero-days, mostly different from one’s own, so they get a discount on any attempt to disarm them. There would be little if any disarmament value. The actual collision rate is hard to determine, as sophisticated hacking organizations must be secretive about their arsenals. But the most complete public study, by Lillian Ablon and Andy Bogart of RAND, found only 5.7% of zero-days had been discovered by others after one year. Even the higher estimates, I’ve been told anecdotally, are only ~35%. In short, according to Dave Aitel and Matt Tait, two of the most hands-on experts in this space, “there is no clear evidence that Russian and Chinese operational zero days overlap” with those of the United States.

This suggests that if a government discloses, say, thirty significant zero-day vulnerabilities, it loses the offensive potential of the full thirty but reduces the adversary’s arsenal by at best ten.
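The trade-off is simple expected-value arithmetic. A quick sketch, using the figures already cited (RAND’s ~5.7% one-year collision rate and the ~35% anecdotal upper estimate; the function name is my own):

```python
def disarmament_value(disclosed: int, collision_rate: float) -> dict:
    """Expected effect of disclosing zero-days, given the probability
    that an adversary independently holds the same bug (collision rate)."""
    return {
        "own_capabilities_lost": disclosed,
        "adversary_bugs_removed": disclosed * collision_rate,
    }

# Thirty disclosures at the low (RAND) and high (anecdotal) collision rates:
for rate in (0.057, 0.35):
    effect = disarmament_value(30, rate)
    print(f"collision rate {rate:.1%}: lose {effect['own_capabilities_lost']} "
          f"of our own, remove ~{effect['adversary_bugs_removed']:.1f} "
          f"from the adversary")
```

Even at the optimistic 35% rate, thirty disclosures cost the discloser thirty capabilities while removing only about ten from an adversary – the asymmetry behind the “at best ten” figure above.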

This math means the VEP may provide less actual security than proponents of disclosure expect. But this does not detract from the remaining two goals, which are not about security but trust. Digital technologies underpin nearly all of modern society and economy. Cyberspace may be the most transformative technology since movable type. Decisions about critical vulnerabilities have become political decisions and should not be handled only behind the locked vault doors of US Cyber Command and the National Security Agency.

As much as the national security of the United States depends on military and intelligence community access to zero-day vulnerabilities, it depends perhaps at least as much on partnership with Silicon Valley and US allies, and on the trust of US citizens that the government will do the right thing. The VEP, even if it buys little security value, is still an important process in the digital age.

Implications for Cyber Defenders

While this may seem remote to enterprise technologists, there are very practical implications. After all, the exploit used in both WannaCry and NotPetya – the most damaging and disruptive cyberattacks ever – was ETERNALBLUE, stolen somehow by the Shadow Brokers.

Despite having kept this intensely dangerous exploit, it turns out the NSA retains very few vulnerabilities. In the fall of 2015, the NSA asserted that historically it “disclosed 91 percent of vulnerabilities discovered in products that have gone through our internal review process and that are made or used in the United States.” The remaining 9 percent were “either fixed by vendors before we notified them or not disclosed for national security reasons.”

NSA, in general, has the back of U.S. cyber defenders, at least when it comes to finding and disclosing vulnerabilities. Enterprise defenders must keep themselves fully patched to make the most of this powerful ally. The only enterprises that directly suffered from WannaCry and NotPetya were those that hadn’t applied the Microsoft patch (issued once NSA realized the exploit had been stolen and disclosed the vulnerability).

The White House’s VEP is a very reasonable process, rooted in sensible criteria of when to disclose a vulnerability or retain it.

The best process and criteria cannot help, of course, when the U.S. government cannot keep its own secrets. When NSA and CIA exploits get stolen or leaked – as with Vault7, Snowden, and the Shadow Brokers – the U.S. government deserves its share of the blame when those exploits are then used against enterprise defenders.

In 2020 and beyond, the government may decide to retain a great many more vulnerabilities, disclosing far fewer than the 91 percent asserted by NSA. The Trump administration has taken a much harder line against adversaries, in particular through its national security strategy to dive into the new era of great power competition. Building a larger military to be more assertive with Iran, North Korea, China, and Russia (as well as terrorists) might easily translate into keeping more vulnerabilities. This may make Americans safer overall, by giving more cyber capabilities to U.S. spies and military, but with a higher chance of those capabilities being used against enterprise defenders.

NSA deserves credit for disclosing this new, very dangerous vulnerability. That effort is wasted if you haven’t patched CVE-2020-0601 yet, so please go do so now, lest you waste taxpayer dollars – and have to answer to your board when less friendly cyber criminals and spies weaponize it.

Jason Healey

Jason Healey is Senior Research Scholar at Columbia University’s School of International and Public Affairs. Previously, he was a US Air Force officer and plankholder of the first joint cyber command, the JTF-CND, in 1999, and director for cyber infrastructure protection at the White House. He created the first CERT at Goldman Sachs and was vice chair of the FS-ISAC.