Resurrecting the Killfile
Oliver Day, 2009-02-04

In William Gibson's Idoru, one of the book's hackers describes a community of people who pooled a file of everything they didn't want and, from it, created the walled city of Hak Nam: "They made something like a killfile of everything, everything they didn't like, and they turned that inside out."

As a prognosticator of many things related to the Internet, Gibson has always been eerily accurate. The history of the Internet offers earlier forms of the killfile, built to combat unwanted content. In the heyday of Usenet, killfiles were a reaction to trolls: articles that matched an entry in the killfile were never even presented for reading, shielding the user from the parts of Usenet he or she knew would be undesirable.

Usenet doesn't see much action today, but killfiles are still alive and well. Most modern killfiles still exist in applications that block specific hosts or messages. Application users subscribe to shared killfiles as a way to fight back against advertising, malware, and other unwanted traffic. Blocking at the application level is not entirely efficient, though. Parsing messages or markup languages to find instances of host names requires a decent amount of processing power.

One solution I have seen in small communities around the Internet is not an application-based killfile but a dive further down the network stack: blocking things at a lower level with the host file. The host file is the first place the operating system's resolver looks when an application asks for an address on the network, before any DNS query is made. Each unwanted host can be given an entry in the host file pointing to 127.0.0.1, the default loopback address, which effectively blocks it.
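
As a rough illustration — the file lives at /etc/hosts on Unix-like systems and at %SystemRoot%\System32\drivers\etc\hosts on Windows, and the ad and tracking domains below are made-up placeholders — the entries look like this:

    127.0.0.1    localhost
    # hypothetical ad and tracking hosts, redirected to loopback
    127.0.0.1    ads.example-adnetwork.com
    127.0.0.1    tracker.example-metrics.net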

Yet, host files are rarely used these days, even though they are part of every major operating system. Perhaps it's time for their return.

In the early days of the Internet, when it was still ARPANET, there were so few nodes that it was possible to manage the host file as a sort of phonebook. Today, few would suggest maintaining one by hand, since it would quickly fall out of date and is considered completely redundant in light of DNS. Operating systems, however, are still designed to check the host file before issuing a DNS query, a fact that various programmers noticed and put to use for both good and evil.
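
On a glibc-based Linux system, for instance, that ordering is spelled out in /etc/nsswitch.conf; a typical default reads:

    hosts:    files dns

where "files" means the local host file is consulted before any DNS lookup.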

Malware writers in particular started using it heavily to block all communications with antivirus and patch servers. Others used it to give frequently used servers short nicknames.
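
The nickname trick is nothing more than an ordinary entry; the address below is a documentation placeholder for some internal machine:

    # placeholder address for an internal machine
    192.0.2.10    buildbox

after which "ping buildbox" or "ssh buildbox" resolves with no DNS round trip at all.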

The host file on my day-to-day laptop is now over 16,000 lines long. Accessing the Internet — particularly browsing the Web — is actually faster now. The only real complaint is that ad spaces are replaced by very ugly "cannot connect to server" messages. This could be easily fixed with a localhost Web server that pushes small text or images, but a part of me does want to support certain forms of advertising to pay for the salaries of those who create the content.
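
A minimal sketch of such a local server, assuming Python 3's standard library and a machine where binding to port 80 is permitted (the class name and handler are mine, not part of any existing tool):

    from http.server import BaseHTTPRequestHandler, HTTPServer

    class BlankHandler(BaseHTTPRequestHandler):
        """Answer every request with an empty 200 so blocked ad slots render as nothing."""

        def do_GET(self):
            self.send_response(200)
            self.send_header("Content-Length", "0")
            self.end_headers()

        def log_message(self, *args):
            # Suppress per-request console logging
            pass

    if __name__ == "__main__":
        # Listen only on the loopback address that the blocked hosts point to
        HTTPServer(("127.0.0.1", 80), BlankHandler).serve_forever()

Note that binding to port 80 usually requires administrative privileges.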

From what I have seen in my research, major efforts to share lists of unwanted hosts began gaining serious momentum earlier this decade. The most popular appear to have started as a means to block advertising and as a way to avoid being tracked by sites that use cookies to gather data on the user across Web properties. More recently, projects like Spybot Search and Destroy offer lists of known malicious servers to add a layer of defense against trojans and other forms of malware.

Shared host files could be beneficial for other groups as well. Human rights groups have sought block-resistant technologies for quite some time. The GoDaddy debacle with Nmap creator Fyodor (corrected) showed a particularly vicious blocking mechanism that works through DNS registrars: once a registrar pulls a domain from its records, the world ceases to have an effective way to find the site. Shared host files could provide a DNS-proof method of reaching such sites, not to mention removing an additional vector of detection for anyone monitoring the use of subversive sites. One of the known weaknesses of the Tor system, for example, is DNS requests made directly by applications that are not configured to route them through Tor's network.
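
A hedged sketch of the idea: if a community already knows a pulled site's numeric address (the one below is a placeholder from the documentation range), a single shared entry lets the resolver find it with no registrar or DNS server involved:

    # placeholder address from the documentation range
    198.51.100.7    pulled-site.example.org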

The primary danger is that, through inattentive management, the blacklist becomes a form of censorship. By and large, I am against the idea of blacklists because they are often not fast enough to purge non-malicious actors from their databases.

Responding to quick changes is a painful process if a blacklist is left unmanaged. The host file does seem to be the perfect place for whitelisting, though, and I would strongly encourage those technical enough to start using it that way.
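
One plausible form of that whitelisting — my reading, not a prescription from any particular tool — is pinning a few critical hostnames to addresses verified out of band, so that a tampered DNS answer cannot quietly redirect them; again, the address is a placeholder:

    # placeholder address, verified out of band
    203.0.113.20    onlinebanking.example-bank.com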

CORRECTION: The original column attributed the creation of a different security application to Fyodor. He is the creator of the Nmap network scanning tool.



Oliver Day is a researcher at the Berkman Center for Internet and Society, where he is focused on the StopBadware project. He was formerly a security consultant at @stake and eEye Digital Security. He has also been a staunch advocate of the disclosure process and of shielding for security researchers.