Code Red: it can happen here
Jon Lasser, 2001-08-15

Linux and Unix users aren't immune to Code Red-style worms. In fact, we invented them.

Right now, nearly all of the other Unix security people I know are feeling pretty smug about this whole Code Red worm.

Everyone's making jokes about Microsoft Internet Information Server (IIS) and the clueless sorts of individuals damned to run on such an insecure platform. It's been reported that the last known serious Apache exploit was fixed in January 1997, and that the last remote Apache exploit of any sort was discovered and fixed in January 1998.

However, Microsoft users, I just want you to know that we're not laughing at you, we're laughing with you: we Unix administrators have dealt with worms before, and we'll deal with them again.

In fact, the first worm of all ran on Unix boxes: the Robert T. Morris Jr. Internet worm of 1988. Morris wrote a program that compromised VAX and Sun systems (then the two most popular platforms on the Internet) and used these systems as attack platforms to spread to more computers. He was reportedly inspired by the 1975 John Brunner novel The Shockwave Rider, which described a "tapeworm" infecting the worldwide network.

Compared to Code Red, which exploited a single hole in IIS, the Great Worm was a work of genius: it exploited three different security flaws on Unix systems. In doing so, it tended to compromise a large number of user accounts as well. The Morris worm slithered through a hole in Sendmail's 'debug' mode that allowed remote execution of code. It squeezed through a buffer overflow vulnerability in the 'finger' daemon, and it abused rsh's tendency to trust particular hosts.

To exploit this last hole, the worm even had to guess passwords of user accounts on the system; it was fairly successful at this.

While the Sendmail and finger holes were fixed more than a decade ago, and rsh has been supplanted by ssh, the Morris worm was far from the last Unix worm on the Internet. This year alone, we've had the sadmind worm, which attacked Solaris systems and used them to deface Web sites running on IIS on Windows systems; the li0n worm, which exploited a BIND vulnerability on Linux systems and installed a rootkit on those boxes; and the Ramen worm, which followed the great Morris tradition and attempted to exploit three different holes on some Linux systems: a wu-ftpd buffer overflow, and format string exploits in rpc.statd and LPRng.

Excepting the Morris worm, before which nobody cared much about Internet security, all of these worms have one thing in common: the exploited holes were discovered months before the worm, and official patches for the affected packages were widely available.

This applies as much to the Windows holes as to the Linux and Unix flaws: a hotfix for the IIS hole abused by Code Red was available at least as early as June. In fact, at least three months before the release of any particular Linux worm, there was already a patch available for the hole exploited by that worm. Yes, three months. If you patched your systems on a quarterly basis, you would not have been vulnerable to a single one of the Linux worms.

Of course, there's no reason to patch so infrequently, and good reason to patch more often. Although script-kiddie-friendly exploits may take weeks to surface, more talented crackers who are targeting your company can create their own exploits for newly discovered holes in days or even hours, depending on how much publicly available information there is regarding the flaw.

It's best to patch systems as soon as the patch is announced -- after first testing it on a non-production test system. Patches are widely announced on flavor-specific mailing lists and Web sites, so there's no excuse for not knowing about them. If you're responsible for setting policy, you might want to make checking relevant security sites for new patches part of your system administrators' daily duties.

Besides patching, another obvious technique to help protect against worms is running minimal services on production servers: BIND should be installed only on name servers, and there should be no more than a single FTP server at your site. (Two, perhaps, if one is for the outside world and one is for internal use only, but in that case your firewall should block incoming connections to the internal server.) LPRng needs to run only on the system with the printer attached to it (a major security improvement over earlier Unix printing implementations), and rpc.statd needs to run only on servers using NFS.

It's true that one of the early attractions of Linux and Unix versus Windows was the ability to run multiple services on a single system, but with hardware prices so low, running one service per system improves your network security enormously and will quickly pay for itself.

Of course, there's one type of worm that Unix users don't have to worry about: our mail clients, unlike Outlook and Outlook Express, don't have a tendency to automatically run particular attachments, or to interpret scripting commands embedded in messages.

Maybe we're entitled to feel just a little smug.

SecurityFocus columnist Jon Lasser is the author of Think Unix (2000, Que), an introduction to Linux and Unix for power users. Jon has been involved with Linux and Unix since 1993 and is project coordinator for Bastille Linux, a security hardening package for various Linux distributions. He is a computer security consultant in Baltimore, MD.


Copyright 2010, SecurityFocus