The Scale of Security
Adam O'Donnell, 2009-07-17

Human beings do not naturally understand scale.  

We speak of financial transactions in the hundreds of billions of dollars as if they were as routine as brushing our teeth, yet we question the value of programs that cost in the single-digit millions and quibble with friends over a few dollars. Similarly, many problems in our industry sound, when explained to an outsider, like they should have been solved decades ago. It is only when we convey the number of systems that have to be touched in the repair that we truly communicate the difficulty of the problem.

This is to be expected; it is unreasonable to expect people to understand scale without training. After all, we are not primed for the task. We evolved in hunter-gatherer tribes of fewer than a hundred people, and we developed number systems derived from the count of immediately visible objects, namely the fingers on our two hands.

If we are unable to communicate the scale of a given security issue, we are unable to communicate the actual threat we face. Problems change dramatically as you move up in scale, and what may be a tractable issue on a case-by-case basis becomes a major one when multiplied by a billion instances. Computer security is a challenging discipline precisely because of the scale of the issues we now face.

For example, antivirus software works pretty well on an individual level. Let's assume that the commonly available antivirus products are 99.8 percent effective against threats. Granted, this number is wildly inflated, but it is useful for the sake of argument. A single computer user who faces one novel piece of malware per week has even odds of going almost seven years before being infected. When brought up to scale, the problem is a bit more sobering. Analysts currently estimate that there are around one billion computers in the world, which at a 0.2-percent miss rate leaves 2 million machines exposed to any given threat.
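
One way to reconstruct that arithmetic is to treat each weekly encounter as an independent trial with a 0.2-percent chance of slipping past the antivirus product. The geometric model here is my assumption rather than the column's stated method, but it reproduces the figures above:

    #include <math.h>
    #include <stdio.h>

    int main(void) {
        double p = 1.0 - 0.998;  /* chance one novel threat evades detection */

        /* Geometric model: the mean wait is 1/p weeks, and the
         * even-odds (median) wait n satisfies (1 - p)^n = 0.5. */
        double mean_weeks   = 1.0 / p;
        double median_weeks = log(0.5) / log(1.0 - p);

        printf("mean wait:   %.0f weeks (%.1f years)\n", mean_weeks, mean_weeks / 52.0);
        printf("median wait: %.0f weeks (%.1f years)\n", median_weeks, median_weeks / 52.0);
        printf("machines exposed per threat: %.0f\n", 1e9 * p);
        return 0;
    }

The median works out to roughly 346 weeks, or about 6.7 years, matching the "almost seven years" above; the mean wait is closer to nine and a half years.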

Even if only 10 percent of those systems were connected to the Internet and vulnerable to infection, we would expect 200,000 new infections every week at that 99.8-percent protection rate. That forms the basis of quite a sizable botnet.

The cost of cleanup is equally disconcerting. Heavily infected systems require a fair amount of manual labor to bring back to a clean state. If we assume it costs around $100 per system in professional services to remediate an infection, we end up with a bill of around $20 million per week spread across the compromised user base.
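
Continuing the same back-of-the-envelope sketch, the botnet and cleanup figures follow directly; the 10-percent exposure rate and the $100-per-system cost are the column's stated assumptions:

    #include <stdio.h>

    int main(void) {
        double online     = 1e9 * 0.10;   /* systems online and exposed */
        double miss_rate  = 1.0 - 0.998;  /* threats that evade detection */
        double infections = online * miss_rate;

        printf("new infections per week: %.0f\n", infections);       /* 200,000 */
        printf("weekly cleanup bill: $%.0f\n", infections * 100.0);  /* $20,000,000 */
        return 0;
    }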

Similar examples can be found in the massive cleanup operation that software developers have undertaken to fix unsafe code that could lead to security vulnerabilities. Changing a single call to strcpy() or strcat() to strncpy() or strncat() may take a few seconds of typing and a minute of testing in a small application. But when we consider the number of places this had to be addressed across the billions of lines of legacy code already in production and deployed in numerous locations, we can understand why the repair work has consumed tens of thousands of man-years and the better part of a decade.
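
A minimal sketch of the kind of one-line repair being multiplied across those billions of lines follows; the function names are hypothetical examples, not code from any particular project:

    #include <string.h>

    #define BUF_LEN 16

    /* Before: strcpy() writes past buf whenever input is longer
     * than BUF_LEN - 1 bytes, a classic buffer overflow. */
    void copy_unsafe(char buf[BUF_LEN], const char *input) {
        strcpy(buf, input);
    }

    /* After: strncpy() bounds the write. Note that strncpy() does
     * not NUL-terminate on truncation, so we terminate explicitly. */
    void copy_safe(char buf[BUF_LEN], const char *input) {
        strncpy(buf, input, BUF_LEN - 1);
        buf[BUF_LEN - 1] = '\0';
    }

Trivial in isolation; the cost lies entirely in finding, changing, testing, and redeploying every such call site.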

Many of these issues of scale are inherent properties of distributed systems. The market decided years ago that the increased flexibility and the elimination of single points of failure afforded by a large number of decentralized systems were preferable to a limited number of "mainframe" systems. The tradeoff we accepted, however, was that the per-user cost of system management skyrocketed, and the corresponding cost of repairing security problems became prohibitive.

Users are turning away from distributed software and once again adopting centralized applications in the form of web-based software-as-a-service (SaaS) offerings. It will be easier for administrators to repair security issues inside these isolated walled gardens than it would be to remediate software on every endpoint host. This does not solve the malware problem by any means, however, as the endpoints used to connect to these centralized services can still be compromised.

Recentralization may make it easier to apply a localized band-aid, but it does nothing to fix the large-scale security issues that persist on the endpoint.

In the end, consumers of our products and services want us to tell them we can "solve" a class of security problems, be it computer viruses or code vulnerabilities. The sheer number of locations that we are required to touch to prevent or repair a security event means that we can never eliminate an issue. The best our profession can hope to do is create applications and policies that will minimize the pain until the consumer's attention, and the attacker's attention for that matter, is drawn to a new area. This isn't defeatism; this is the reality in which we must operate.



Adam J. O'Donnell, Ph.D. is the director of emerging technologies at Cloudmark, a company that fights messaging abuse. He has worked on several books, serving as technical editor of and contributor to "Building Open Source Network Security Tools," a contributing author of "Hacker's Challenge," and a co-author of "Hacker's Challenge 2".