By Anand Paturi
Managing cyber threats has never been more challenging. Breaches, hacks and malware are daily problems for organizations. Not only are threats growing in frequency and severity, but they are also becoming more sophisticated, making it extremely difficult for organizations to identify and stop them, and to determine which ones to prioritize for remediation.
While there are many proven ways to contain and eliminate vulnerabilities across networks, determining which ones need immediate attention remains a stumbling block. This is a major weakness in today’s IT defenses.
The challenge is not just to understand the severity of vulnerabilities and assess the organization’s exposure, but to stay current on the changing threat landscape. To keep up to date, many organizations rely on the National Vulnerability Database (NVD).
What Is the NVD?
Managed by the National Institute of Standards and Technology (NIST) Computer Security Division, the NVD is a leading source of intelligence on vulnerabilities that have been collected and analyzed. While the NVD provides many excellent benefits, it has some limitations.
The NVD provides detailed information on vulnerabilities, including a risk score for each one based on the Common Vulnerability Scoring System (CVSS), and offers advice that assists in remediating threats.
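The CVSS base score attached to each NVD entry is derived from a published formula, so it can be reproduced locally. The following is a minimal sketch of the CVSS v3.1 base-score calculation as defined in the specification; the example vector at the end is illustrative:

```python
import math

# Metric weights from the CVSS v3.1 specification (base metrics only).
ATTACK_VECTOR = {"N": 0.85, "A": 0.62, "L": 0.55, "P": 0.2}
ATTACK_COMPLEXITY = {"L": 0.77, "H": 0.44}
PRIV_REQUIRED = {  # Privileges Required weights differ by scope
    "U": {"N": 0.85, "L": 0.62, "H": 0.27},
    "C": {"N": 0.85, "L": 0.68, "H": 0.5},
}
USER_INTERACTION = {"N": 0.85, "R": 0.62}
CIA_IMPACT = {"H": 0.56, "L": 0.22, "N": 0.0}

def roundup(value):
    """Round up to one decimal place, per CVSS v3.1 Appendix A."""
    int_input = round(value * 100000)
    if int_input % 10000 == 0:
        return int_input / 100000.0
    return (math.floor(int_input / 10000) + 1) / 10.0

def base_score(av, ac, pr, ui, s, c, i, a):
    """Compute a CVSS v3.1 base score from single-letter metric values."""
    iss = 1 - (1 - CIA_IMPACT[c]) * (1 - CIA_IMPACT[i]) * (1 - CIA_IMPACT[a])
    if s == "U":  # scope unchanged
        impact = 6.42 * iss
    else:         # scope changed
        impact = 7.52 * (iss - 0.029) - 3.25 * (iss - 0.02) ** 15
    exploitability = (8.22 * ATTACK_VECTOR[av] * ATTACK_COMPLEXITY[ac]
                      * PRIV_REQUIRED[s][pr] * USER_INTERACTION[ui])
    if impact <= 0:
        return 0.0
    if s == "U":
        return roundup(min(impact + exploitability, 10))
    return roundup(min(1.08 * (impact + exploitability), 10))

# A worst-case vector (AV:N/AC:L/PR:N/UI:N/S:C/C:H/I:H/A:H) scores 10.0.
print(base_score("N", "L", "N", "N", "C", "H", "H", "H"))
```

Because the formula is deterministic, recomputing scores like this is also a cheap way to sanity-check scanner output against the metric vectors the NVD publishes.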
The agency publishes new vulnerabilities daily, delivers a comprehensive summary of the weaknesses, and presents an informative and easy-to-read dashboard.
While the NVD is a valuable source of information on vulnerabilities, it’s struggling to keep pace with the constant changes posed by cyber threats. One example is the thousands of vulnerabilities whose status has changed since they were first recorded. More than 60 percent of the listed Common Vulnerabilities and Exposures (CVEs) have been modified since their initial reporting in the NVD.
Given the sheer volume of threats that organizations must remediate, prioritization is often given to vulnerabilities based on their CVSS scores. However, history has shown that even low-risk threats can easily transform into critical status overnight, forcing organizations to scramble to find solutions rapidly.
A second limitation is disclosure latency: the delay between a vendor disclosing a vulnerability and the NVD publishing its details. By relying on only one source, such as the NVD, organizations expose themselves to unknown risk.
An example of this is the Apache Struts vulnerability, which affected organizations around the world last year and allowed attackers to execute arbitrary commands via a crafted Content-Type, Content-Disposition, or Content-Length HTTP header.
While the vendor disclosed the vulnerability and released a patch on March 6, an exploit was released only three days later. It was discovered by RiskSense and accepted by NIST on March 10. Unfortunately, it took four days to publish this in the NVD.
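The exposure window in a timeline like this is straightforward to quantify. A minimal sketch, assuming the year is 2017 (when the Apache Struts incident described here occurred) and using the dates given above:

```python
from datetime import date

# Dates from the timeline above; the year 2017 is an assumption based on
# the Apache Struts incident the article describes.
vendor_disclosure = date(2017, 3, 6)   # vendor discloses and patches
exploit_released = date(2017, 3, 9)    # exploit appears three days later
nist_accepted = date(2017, 3, 10)      # NIST accepts the submission

# Days during which an exploit was public but a team watching only the
# NVD had no entry to act on.
print((exploit_released - vendor_disclosure).days)
print((nist_accepted - exploit_released).days)
```

Tracking these gaps across many CVEs is one way to measure how much risk a single-source intelligence strategy actually carries.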
Another issue is that the information provided by the NVD typically lacks the local context that organizations need to determine which assets are at risk to a particular vulnerability and need immediate attention. As organizations expand their IT footprint with cloud, mobile and IoT technologies, they need deeper visibility into which systems are affected in order to address risks in a timely fashion.
Best Practices for Successful Mitigation
Given the rate at which new vulnerabilities are added to the NVD, plus those that remain undetected in the wild, organizations have their work cut out for them in creating a successful mitigation strategy.
The following steps can help avoid the blind spots associated with relying on the NVD as the primary source of intelligence for vulnerability management, prioritization and remediation.
1) Focus on remote exploits and related vulnerabilities
Scanners sometimes fail to detect remote code execution (RCE)-based exploits, which can give attackers the ability to execute malicious code and take complete control of an affected system with the privileges of the user running the application. The simplest way to protect against such exploits is to fix holes that allow attackers to gain access to the network and/or systems, which is, of course, easier said than done.
2) Adopt a threat- and risk-centric approach
This model incorporates both an organization’s critical assets and historical weaponization patterns of vulnerabilities to predict those that pose the greatest risk of being exploited. The core idea is to use a multidimensional view of vulnerabilities to identify those threats that are most likely to be used in an attack and are present in the IT environment.
Rather than rely on a valuable but limited sample of data like the NVD, a threat- and risk-centric approach can quickly and accurately identify vulnerabilities that are, or are very likely to be, weaponized, help determine which systems would be compromised by an attack, and prioritize remediation resources accordingly.
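The multidimensional idea can be sketched in a few lines. The field names, weights, and sample CVE labels below are hypothetical illustrations of blending severity, threat intelligence, and local asset context; they are not RiskSense's actual model:

```python
from dataclasses import dataclass

@dataclass
class Vuln:
    cve_id: str
    cvss: float               # 0.0-10.0 base score from the NVD
    exploit_public: bool      # a working exploit is publicly available
    active_attacks: bool      # weaponization observed in real attacks
    asset_criticality: float  # 0.0-1.0 importance of the affected asset

def risk_score(v: Vuln) -> float:
    """Blend severity, threat intelligence, and local context.

    The multipliers are illustrative assumptions, not calibrated values.
    """
    threat = 1.0
    if v.exploit_public:
        threat += 0.5
    if v.active_attacks:
        threat += 1.0
    return v.cvss * threat * v.asset_criticality

vulns = [
    Vuln("CVE-A", cvss=9.8, exploit_public=False, active_attacks=False,
         asset_criticality=0.2),
    Vuln("CVE-B", cvss=6.5, exploit_public=True, active_attacks=True,
         asset_criticality=0.9),
]
for v in sorted(vulns, key=risk_score, reverse=True):
    print(v.cve_id, risk_score(v))
```

Note the outcome: the medium-CVSS vulnerability that is actively weaponized on a critical asset outranks the near-maximum CVSS score on a low-value system, which is exactly the reordering a CVSS-only prioritization would miss.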
About Anand Paturi
Anand Paturi is Senior Research Scientist at RiskSense, a provider of cybersecurity risk management technology. He is an expert in Web application and database security, risk exposure and threat-centric vulnerability quantification, and enterprise risk analytics. He is also credited with developing one of the first cyber-risk scoring models for computing devices.