Jose Nazario discusses worms, 2005-08-16
A lot of people don't install patches, even if they fix security bugs. That is why a big company such as Microsoft is trying to make the process automatic and obligatory. Should Microsoft release patching worms to fix every vulnerable system on the Internet in a matter of minutes?
There's no need. With Windows update tools installed and automatically enabled on the end host, Microsoft (or any big software vendor) already has that access. AV vendors have been doing this for years, both Apple and Microsoft do this at the OS level, and so on. Some people even have their IDS signatures updated automatically.
When you have an agent-based system like that deployed, there's no need to deploy self-propagating code (like a worm) to effect change everywhere. You can achieve the same result more quickly with agents: instead of waiting for the worm to find all hosts, you can hit every host immediately.
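The timing argument can be made concrete with a toy calculation. The numbers below are purely illustrative: an idealized worm that infects `fanout` new hosts per generation still needs several scan-and-infect rounds to cover a population, while an agent-based push reaches everyone in a single round.

```python
import math

def worm_generations(total_hosts, fanout):
    """Generations an idealized worm needs before fanout**g >= total_hosts."""
    return math.ceil(math.log(total_hosts) / math.log(fanout))

hosts = 1_000_000  # hypothetical vulnerable population
print(worm_generations(hosts, 10))  # 6 generations of scanning and infecting
# An agent push, by contrast, reaches all 1,000,000 hosts in one round.
```

Six generations may even sound fast, but each generation includes scanning time and failed probes; the agent model skips all of that.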
I'm not sure it is the same thing, first of all because of bandwidth. When you have 100 (or maybe 500?) million Windows users who want to install a 200MB service pack, how much bandwidth do you need? And how many hours (days? weeks?) does it take to patch them all? A patching worm could take advantage of users' bandwidth.
Actually, patch distribution is pretty efficient by now. Both Microsoft and Apple, and many of the big AV vendors, use geographically and topologically dispersed centers to distribute patches. So it's not nearly as crushing as a quick "back of the envelope" calculation might suggest.
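The interviewer's numbers invite exactly that back-of-the-envelope calculation. The figures below (500 million users, 1 Tbit/s of aggregate distribution capacity) are assumptions for illustration, not measured values:

```python
users = 500_000_000                          # hypothetical install base (from the question)
patch_mb = 200                               # service-pack size (from the question)
total_pb = users * patch_mb / 1_000_000_000  # MB -> PB
print(f"{total_pb:.0f} PB to ship")          # 100 PB

total_tbit = total_pb * 8_000                # 1 PB = 8,000 Tbit
capacity_tbit_s = 1                          # assumed aggregate CDN capacity
seconds = total_tbit / capacity_tbit_s
print(f"about {seconds / 86_400:.0f} days at 1 Tbit/s")
```

Even at an assumed terabit of aggregate capacity the push takes on the order of days, which is exactly why dispersed distribution centers matter.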
I don't see how a patching worm would be any more efficient, because bandwidth is wasted as the worm tries to acquire hosts and spread. A patching worm would suffer the same fate as a malicious one.
There's also the issue of time. Downloading a 200MB file means being online and vulnerable for minutes (or hours). What about an attack or a worm in this timeframe?
An efficient patch can be distributed in a matter of hours to days. With only one exception (the Witty worm), no worm has ever been constructed and deployed that fast. The time between the disclosure of a vulnerability and the release of a worm that uses it is, on average, about four weeks.
If Microsoft wants to make patching obligatory, what is the difference between a program running on your computer (the Update agent) and an external patching agent?
As a security administrator, you have more control over an update agent than you do an external agent. It's more aware of the host's configuration and possible conflicts because it can directly query the system's status and applications.
An agent on the host is more susceptible to subversion (i.e., falsely reporting that a patch or some inoculant has been installed when it hasn't) than an external agent, so I'm not advocating one over the other. However, if I had to pick only one, I'd go with the agent on the system. While it can be a management nightmare (i.e., maybe tens of thousands of agents in a large enterprise deployment), it's much safer in the long run than an external agent or a patching worm.
Have you ever seen a worm that closes the hole used to compromise the system? This strategy would stop a patching worm...
That doesn't matter, because a "patch distribution worm" taking that approach can simply find another vector to propagate. So even if a malicious worm patches the vulnerability behind it and installs software for the worm or bot author's use (as opposed to your benign patching worm, which doesn't leave any unwanted access behind), your patching worm can still find a way in.
Several early worms patched the system behind them, which had the effect of stopping other attackers from using the same hole to get in. There were still numerous security holes which allowed for other attacks to succeed, though.
However, I'm still dead set against the idea or use of a counterworm for all of the reasons I have outlined above.
Windows XP SP2 introduced support for the famous NX bit included in modern CPUs. Here is what the vendors say:
Intel's Execute Disable Bit "can help to prevent some classes of viruses and worms that exploit buffer overrun vulnerabilities thus helping to improve the overall security of the system."
AMD's Enhanced Virus Protection "acts as a preventative measure causing the virus to be localized, short-lived, and non-contagious, eventually being flushed from system memory."
I think the NX bit can help prevent some bugs from being exploited, but it cannot do anything against worms. The point is that a human being exploiting a vulnerability can play with the exploit and the target until he succeeds. A worm is different: generally it doesn't modify its exploit to fit the target.
So, do you think that software companies should focus on randomizing addresses and offsets instead, like OpenBSD's StackGap does?
I'm a firm proponent of layered security, and this includes low-level system protection measures like the NX bit, randomized stack gaps, and the like. Some of them work very well to defeat simple attacks, and in concert they defeat more sophisticated attackers.
While I like tools like XP SP2's nonexecutable stack, they sometimes come at a price. Some software broke with the SP2 changes, and this is one of the reasons people were slow to deploy it. Software is a very complicated system, and all of these interactions can't be anticipated; some of them will be negative when you change a system, even slightly. However, I think this approach (nonexecutable stacks) is one of the key pieces in defeating large classes of attacks.
There are a number of attacks that the nonexecutable stack leaves untouched, including format string attacks. Employing various defenses, including randomized memory layouts and gaps between items in the address space, you can start to defeat these sorts of attacks, too.
Now, bear in mind that we've seen worms brute force the offset for a buffer overflow attack in the past, so it can be done. It's slow, but it's effective. If a worm has either a small enough address space to test or a way to divine the secrets used to randomize the stack (i.e., some form of PRNG seed leak, like the remote clock), then it can brute force the offsets and propagate that way.
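That brute-force strategy is easy to model. The sketch below is a simulation only; it assumes, for illustration, a 16-bit randomized offset and counts how many guesses a worm would burn (each miss typically costing a crashed process):

```python
import random

def brute_force_offset(space_bits, rng):
    """Count guesses needed to hit a secret offset drawn uniformly
    from a 2**space_bits space; each miss models one failed exploit."""
    secret = rng.randrange(2 ** space_bits)
    for attempts, guess in enumerate(range(2 ** space_bits), start=1):
        if guess == secret:
            return attempts

rng = random.Random(1)  # fixed seed so the simulation is repeatable
trials = [brute_force_offset(16, rng) for _ in range(100)]
print(f"average: {sum(trials) / len(trials):.0f} guesses")  # near 2**15
```

With only 16 bits of entropy the average cost is about 2**15 tries: slow, as he says, but well within a worm's patience. Wider gaps, and keeping the PRNG seed secret, push that cost out of reach.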
Randomization can make some debugging and diagnostic tasks more difficult, so you have to make sure everything tolerates it. When you have many millions of lines of code and thousands of applications that people and businesses rely upon, you can't make that change lightly. OpenBSD made the changes in a controlled fashion, but it took a focused effort over the course of a year, and that's a much more tightly controlled system than something like Windows.
One of the neat things developed a few years ago at HP Labs in the UK was the concept of a virus throttle: rate-limiting enforcement for your network connections. While the implementation in Windows XP SP2 broke nmap, it prevents worms from scanning, or even from making targeted connections, too aggressively. Worms can still propagate, but they do so much more slowly. Again, another security layer makes a big difference, but it's not a total solution.
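A minimal sketch of the throttling idea follows. The `VirusThrottle` class and its parameters are invented for illustration, and this toy refuses worm-like connections outright, whereas the real HP design delays and queues them:

```python
import time
from collections import deque

class VirusThrottle:
    """Toy model of HP-style virus throttling: at most `rate` NEW
    destinations per second; recently contacted hosts pass freely."""

    def __init__(self, rate=1, working_set=4):
        self.rate = rate
        self.recent = deque(maxlen=working_set)  # recently contacted hosts
        self.delayed = 0
        self.window_start = 0.0
        self.new_this_window = 0

    def request(self, dest, now=None):
        now = time.monotonic() if now is None else now
        if dest in self.recent:
            return True                      # known host: pass immediately
        if now - self.window_start >= 1.0:   # new one-second window
            self.window_start = now
            self.new_this_window = 0
        if self.new_this_window < self.rate:
            self.new_this_window += 1
            self.recent.append(dest)
            return True                      # first new host this second
        self.delayed += 1
        return False                         # worm-like burst: held back

throttle = VirusThrottle(rate=1)
# A scanning burst to 20 distinct hosts in one instant: only 1 gets through.
allowed = sum(throttle.request(f"10.0.0.{i}", now=5.0) for i in range(20))
print(allowed, throttle.delayed)  # 1 allowed, 19 delayed
```

Normal traffic, which revisits a small working set of hosts, passes untouched; only bursts of connections to many new destinations get held back, which is why it slows worms without stopping legitimate use.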
These mechanisms will stop a broad class of worms that use exploits to execute arbitrary code, but they won't stop things like password-guessing attacks or trojan horses. Signed code execution helps there, but it relies on the person making the judgement being aware of all of the ramifications of the choice, and having the information presented accurately. This doesn't always happen, so attackers and worms will still have an avenue to propagate.
I'd like to see more of this wide-scale deployment of security protection mechanisms that have proven themselves in research and in smaller-scale installations like Linux or BSD. I'm happy to see Microsoft making these changes to Windows, and I expect them to continue. They've also done a decent first pass at enforcing some basic security mechanisms for end hosts, with firewalls enabled initially (though the configuration isn't optimal), virus protection demanded immediately, and automatic Windows Update enabled as well. These approaches combine to improve the security posture of the average end host, and I'd like to see all vendors take similar approaches.
