Are New Vulnerabilities A Tipping Point in 2019?

Software failures are failures of understanding, and of imagination.
— James Somers, The Atlantic
Security will always be exactly as bad as it can possibly be while allowing everything to still function.
— Nat Howard, at USENIX 2000

Why Do We Have So Many Vulnerabilities?

There are likely several reasons. In his 2014 Black Hat keynote address, Cybersecurity as Realpolitik (full text is here), Dan Geer posited that there were two possibilities regarding ever-increasing vulnerabilities.

The less palatable possibility was that we as security professionals are doomed: because of increasing complexity and lax oversight, the number of vulnerabilities would forever increase, just like the inevitable expansion of the universe, until at some point recovery would be impossible.

The alternative was that we are making progress: most software companies have vulnerability management programs, and many commercial businesses understand the value of frequent patching. Geer also proposed policies that would help stem the ever-increasing irresponsible behavior of those releasing ill-formed code.

As an aside, in the last year California has adopted the "right to be forgotten" and other privacy rights in the landmark California Consumer Privacy Act, set to take effect in 2020, which makes companies financially responsible for failing to keep tight control over consumer data. Because of the penalties, any medium-sized or larger for-profit company doing business in California would be affected and would need good coding practices that allow fine-grained control of consumer data.

The theme of the vulnerability onslaught continues, at least insofar as Black Hat and other key conferences are concerned. At Black Hat 2018, Jeff Moss (aka Dark Tangent), in his introduction of keynote speaker Parisa Tabriz (aka Security Princess), addressed this problem by highlighting the work some of the major manufacturers had done.

One example presented by Tabriz was the work Google had done to make HTTPS the default for web browsing connections across major web browsers. She also emphasized that for many companies attempting to limit vulnerabilities, it is still a game of whack-a-mole. We should therefore look for fixes that broadly improve security, not one-offs and small fixes that only address specific use cases.

This theme was repeated in economic terms.

At a holiday gathering of eight or so local chapters of various professional security organizations in downtown L.A., Malcolm Harkins of Cylance observed that the onslaught of vulnerabilities is what keeps us employed. We profit from insecurity.

Instead of incremental change, we need transformational change, which Harkins suggested would mean automated responses that are effective against most attacks. Harkins quoted a statistic from Piper Jaffray: there was an 80.9% positive correlation between breach activity and security industry growth. Information security is an economic inefficiency, at least as currently practiced; cybersecurity stocks boom after a big, publicized ransomware attack.

So Much Complexity, So Much Abstraction

As computers progressed from large mainframes to larger mainframes to minicomputers to microcomputers to pocket computers (mobile phones, tablets, and so on) and became ubiquitous, complexity increased.

Ground Control, Missile and Rocket Software Systems Size Growth from 1960 to 1995. The trend from 1995 to present is similar. Image Source: ResearchGate

So Many Lines of Code, and Increasing

Code use, not just code size, has increased for multiple reasons: many embedded controllers and subcomponents now have their own micro-controllers and subsystems. One example of this, given by Dan Kaminsky at DEF CON 24, is the modern hard drive (even more so with the now-common use of SSDs).

Lines of code from various operating systems; from an old slide from an MIT computer engineering course. Image Source

Operating Systems from older to newer, with Lines of Code where available, and size of a common (CIS) benchmark used to harden the various operating systems. Image Source: Doug Mechaber

How to Measure Complexity

The traditional measure of complexity has been lines of code (LOC), but it is at best an approximation: it does not take into account language differences or framework variation, never mind individual coding conventions and code density. There are LOC variants that attempt to capture some of these differences, for example by including or excluding comment lines.
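
As a rough illustration (a minimal sketch, not tied to any particular counting tool), even the simple choice of whether blank and comment lines count changes the LOC figure. The following assumes a Python source file and single-line '#' comments only:

    def count_lines(path):
        # Physical lines vs. "program" lines: blank lines and comment-only
        # lines are excluded from the program-line count (a simplification;
        # real SLOC counters also handle strings, block comments, and more).
        total = program = comments = 0
        with open(path, encoding="utf-8") as f:
            for line in f:
                total += 1
                stripped = line.strip()
                if not stripped:
                    continue                   # blank line
                elif stripped.startswith("#"):
                    comments += 1              # comment-only line
                else:
                    program += 1               # counted as a program line
        return total, program, comments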

There are some broad recommendations [Note: link opens a PDF] for how common programs should be structured (recommendations that are frequently violated, leading to increased maintenance costs and buggier code):

  • File length should be 4-400 program lines.

  • The smallest entity that may reasonably occupy a whole source file is a function, and the minimum length of a function is 4 lines.

  • Files longer than 400 program lines (10-40 functions) are usually too long to be understood as a whole.

Intended to be independent of language and language format, McCabe Cyclomatic Complexity was introduced by Thomas McCabe in 1976. It measures the number of linearly independent paths through a program module (its control flow) and is one of the more widely accepted measures of complexity. McCabe complexity serves as a broad measure of soundness and confidence in a program; it is based on counting conditional branches rather than treating the program as purely linear. A greater McCabe number means there are more execution paths through the function and, therefore, that the program is harder to understand and maintain.
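
As a rough sketch (not from McCabe's paper, and only an approximation), the complexity of a single function can be estimated as one straight-line path plus one for each decision point. Using Python's standard ast module, for example:

    import ast

    def cyclomatic_complexity(source: str) -> int:
        # Start at 1 (the straight-line path) and add 1 per decision point.
        tree = ast.parse(source)
        complexity = 1
        for node in ast.walk(tree):
            if isinstance(node, (ast.If, ast.For, ast.While,
                                 ast.IfExp, ast.ExceptHandler)):
                complexity += 1
            elif isinstance(node, ast.BoolOp):
                # each extra 'and'/'or' operand adds another branch
                complexity += len(node.values) - 1
        return complexity

    snippet = "def f(x):\n    if x > 0 and x < 10:\n        return x\n    return 0"
    print(cyclomatic_complexity(snippet))   # 3: base path + 'if' + extra 'and'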

Other measures include the Halstead metrics, among the oldest and strongest measures of code complexity. Halstead metrics [Note: link opens a PDF] are often used as maintenance metrics, and include such measures as estimated number of bugs, difficulty level, error proneness, effort to implement, program level, program length, number of operators, number of operands, vocabulary size, and so on. By interpreting the source code as a sequence of tokens and classifying each token as an operator or an operand, the Halstead metrics provide formulas for estimating various complexity concerns, such as time to implement and number of bugs.

One example of a Halstead metric: Effort to implement (E)

The effort to implement (E) or understand a program is proportional to the volume and to the difficulty level of the program.

E = V * D
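
As a sketch of where V and D come from (using Halstead's standard definitions, not anything specific to the linked paper), both derive directly from the operator and operand counts:

    import math

    def halstead_effort(n1: int, n2: int, N1: int, N2: int) -> float:
        # n1, n2 = distinct operators / distinct operands
        # N1, N2 = total operator / total operand occurrences
        vocabulary = n1 + n2                      # n
        length = N1 + N2                          # N
        volume = length * math.log2(vocabulary)   # V = N * log2(n)
        difficulty = (n1 / 2) * (N2 / n2)         # D
        return volume * difficulty                # E = V * D

    # A toy program with 4 distinct operators, 5 distinct operands,
    # 10 operator occurrences, and 12 operand occurrences:
    print(round(halstead_effort(4, 5, 10, 12)))   # roughly 335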

Then there is the Maintainability Index (MI), which combines LOC, the Halstead measures, and the McCabe measure.
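
One commonly cited form of the index (the classic academic formulation; individual tools rescale it to a 0-100 range or adjust the coefficients) can be sketched as:

    import math

    def maintainability_index(avg_halstead_volume: float,
                              avg_cyclomatic_complexity: float,
                              avg_loc: float) -> float:
        # Classic MI formulation; a higher score means more maintainable code.
        return (171
                - 5.2 * math.log(avg_halstead_volume)
                - 0.23 * avg_cyclomatic_complexity
                - 16.2 * math.log(avg_loc))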

Measuring Complexity in Commercial Programs

Complexity is hard to measure for many canned programs, especially operating systems, because companies like Microsoft guard much of their software's makeup. Microsoft has published some newer LOC counts, but only for a few essential components of its OS. Since that configuration is stripped down beyond any usefulness, it doesn't tell us what one would expect in a typical server deployment.

For that reason (variability between deployments), Microsoft doesn't publish registry size, another measure of total complexity. Though Linux distributions generally do publish LOC counts, one has to make sure that the counts for different versions cover the same kernel pieces and other software.

Are We Doomed?

As security professionals, we face many conditions that increase the chance of a vulnerability escaping initial attention:

  • Software complexity is ever increasing, mostly because of feature demand and competition

  • Use of an increasing number of external library routines

  • Demand from business that the software be released in a timely fashion, and that security not hinder new product features

  • Lack of budget for maintenance, and scant attention from the business because maintenance isn't profitable or eye-catching

  • More resources available to large hacker groups versus the resources available to vet your business's code 

Though operating systems have generally become more reliable, and vulnerabilities are patched very quickly, added features open up much more interaction with third-party software providers. This ecosystem seems to be getting more complex and harder to fix, not easier. Could the pessimistic straw man proposed by Dan Geer be the correct answer: that secure software is doomed?

Ever-growing software, frequently making use of external components and lower-level parts of increasing complexity, necessitates a sea change in how we approach software's release and distribution. This is the transformational change Harkins, Moss, and Tabriz talked about: the change we need in order to avoid Geer's pessimistic view, a security apocalypse, and instead reach for his optimistic view, that of a (mostly) secure ecosystem.

The greatest danger for most of us is not that our aim is too high and we miss it, but that it is too low and we reach it.
— Likely misattributed to Michelangelo

About Douglas Mechaber

Doug Mechaber has been writing about security, IT, and gadgets for twenty years in such places as Tom's and Network Computing. When not designing or implementing proper security architecture or studying for a certification exam, Doug teaches Scuba for LA County, the first recreational dive certification agency in the world, verifies PATONs for the USCG through the Auxiliary, and is trying to find the time to finish making his first violin.
