Computer scientists from SRI International and the SANS Institute plan to present a paper next week on a technique that prioritizes additions to a blacklist by correlating attackers' preferences for victim networks.
The technique, dubbed "highly predictive blacklists," allows network owners to correlate attacks on their network with attackers' preferences for other networks. Using a system conceptually similar to Google's PageRank algorithm, the researchers used firewall logs contributed by participants in the SANS Institute's DShield service to correlate attackers' choices of targets. By matching up the preferred victims of a known attacker, the researchers have been able to develop per-network blacklists that perform better than either massive global lists or more focused local lists, according to the paper (pdf).
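The shared-victim correlation idea can be illustrated with a small sketch. This is not the paper's actual link-analysis algorithm; the log data, network names, and the overlap-based scoring below are illustrative assumptions, showing only the core intuition that an attacker seen at networks which share attackers with yours is a good candidate for your blacklist:

```python
from collections import defaultdict

# Hypothetical toy firewall logs: (attacker_ip, victim_network) pairs.
logs = [
    ("1.2.3.4", "netA"), ("1.2.3.4", "netB"),
    ("5.6.7.8", "netB"), ("5.6.7.8", "netC"),
    ("9.9.9.9", "netA"),
]

# Group the victim networks reported for each attacking source.
victims_of = defaultdict(set)
for src, net in logs:
    victims_of[src].add(net)

def relevance(attacker, my_net):
    """Score an attacker for my_net by how many of its known victims
    are networks correlated with my_net (i.e., networks that share an
    attacker with my_net) -- a crude proxy for the paper's ranking."""
    correlated = {n for nets in victims_of.values()
                  if my_net in nets for n in nets}
    return len(victims_of[attacker] & correlated)

# Rank candidate entries for netC's blacklist, most relevant first.
ranked = sorted(victims_of, key=lambda a: relevance(a, "netC"),
                reverse=True)
```

Note that `1.2.3.4` earns a nonzero score for `netC` even though it has never attacked `netC` directly; it targets `netB`, which shares an attacker with `netC`. That predictive quality is what distinguishes the approach from a purely local log-based list.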
"Our experiments demonstrate that our Highly Predictive Blacklist algorithm consistently creates firewall filters that are exercised at much higher rates than those from conventional blacklist methods," Phillip Porras, a developer with the project and program director in SRI's Computer Science Laboratory, said in a statement sent to SecurityFocus.
Blacklisting is a common technique for blocking IP addresses and network blocks that are known to host malicious sites or from which attacks are emanating. When global outbreaks have occurred in the past, such as the Slammer worm or large runs of spamming, many network administrators blocked infected servers. Because a large proportion of attacks come from certain countries, some network administrators have recommended blocking an entire nation's IP space.
The researchers used three steps to create their blacklists. First, they filtered out unreliable alerts from the logs submitted by contributors, including traffic from unassigned or invalid IP addresses, from Web crawlers, and from timed-out sessions. Then, they used relevance-based rankings to prioritize attacks for each contributor, grouping together network owners who are attacked from the same IP addresses. Finally, the system gives priority to patterns that match known malware propagation trends, a measure dubbed attack-pattern severity.
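The three stages above can be sketched as a small pipeline. This is a hedged illustration, not the paper's implementation: the `is_noise` filter covers only invalid address space (the crawler and timed-out-session filters are omitted), and the scoring functions are placeholders supplied by the caller:

```python
import ipaddress

# Stage 1 (assumed subset): address ranges that should never appear as
# legitimate attack sources in contributed logs.
RESERVED = [ipaddress.ip_network("10.0.0.0/8"),
            ipaddress.ip_network("192.168.0.0/16"),
            ipaddress.ip_network("0.0.0.0/8")]

def is_noise(src_ip):
    """Stage 1: drop alerts whose source lies in invalid or
    unassigned address space."""
    addr = ipaddress.ip_address(src_ip)
    return any(addr in net for net in RESERVED)

def build_blacklist(alerts, relevance_score, severity_score, size=10):
    """Stages 2-3: rank the surviving sources by their relevance to
    this contributor, weighted by attack-pattern severity, and keep
    the top `size` entries. Both scoring callbacks are hypothetical
    stand-ins for the paper's ranking and severity metrics."""
    sources = {src for src, _victim in alerts if not is_noise(src)}
    ranked = sorted(sources,
                    key=lambda s: relevance_score(s) * severity_score(s),
                    reverse=True)
    return ranked[:size]
```

Keeping the list to a fixed `size` reflects the paper's emphasis on small, high-hit-rate lists rather than large global ones.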
When the system was evaluated using 720 million log entries from October and November 2007, the researchers found that it outperformed global and local blacklists in more than 80 percent of cases. While lists made up of global attack sources that surpass a certain threshold can become large and unwieldy, highly predictive blacklists achieve a much better hit rate -- blocking suspected attacks -- at smaller list sizes, the researchers found. And, unlike lists built from local firewall logs, the researchers' blacklists can proactively block attacks.
The researchers will present the paper at the USENIX Security Symposium in San Jose, Calif. next week.
If you have tips or insights on this topic, please contact SecurityFocus.
Posted by: Robert Lemos