Time to Take the Theoretical Seriously
Chris Wysopal, 2009-01-16

Software developers' response to "theoretical" research is fundamentally broken.

By now, everyone in the security industry knows about the Rogue CA presentation that Alex Sotirov and Jacob Appelbaum gave at the 25th Chaos Communication Congress. It was one of the most interesting presentations I saw all last year, and it's a good example of why software companies continue to be vulnerable to attackers.

Sotirov and Appelbaum, on behalf of a team of researchers, detailed their attack on the SSL certificate infrastructure using MD5 collisions. The part that stuck with me, however, was not that the researchers were able to use MD5 collisions to create their own valid certificate authority and sign any web certificate they wanted; it was that, as late as December 2008, a major certificate provider, RapidSSL (owned by VeriSign), was still signing certificates using the MD5 algorithm at all.

The MD5 algorithm has been known to be weak for many years. The first nail in MD5’s coffin came in 2004 when a Chinese researcher presented a method for generating MD5 collisions at Crypto 2004. At the time, crypto experts were already warning Internet security companies and encryption-software developers to move away from MD5.

RSA Labs put out a technical report in August 2004 stating:

How should implementers respond to this news? There is no need to panic, since it will likely be some time before the weak hash functions can be turned into practical exploits. However, applications using one of the legacy hash functions described as vulnerable should upgrade as soon as possible to the NIST-approved SHA1 or SHA2 family of algorithms.
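At the API level, the migration the report recommends is often a one-line change: swap the hash constructor. A minimal Python sketch (the message contents here are my own illustration, not from the attack):

```python
import hashlib

message = b"example certificate contents"

# MD5 produces a 128-bit digest; generating collisions for it
# is now practical, so it must not be used for signatures.
md5_digest = hashlib.md5(message).hexdigest()

# SHA-256, a member of the SHA-2 family, produces a 256-bit digest
# and has no known practical collision attacks.
sha256_digest = hashlib.sha256(message).hexdigest()

print(len(md5_digest) * 4)     # 128 bits
print(len(sha256_digest) * 4)  # 256 bits
```

The hard part, as the certificate authorities demonstrated, was never the code change itself but deciding to make it before an attack is demonstrated.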

Three years later, a Dutch researcher built on this work and detailed significantly more efficient MD5 collisions using a chosen-prefix attack. Still, certificate authorities went on using MD5.

When did they stop? They stopped right after the Rogue CA presentation. Theory, it seems, is good enough to get attackers to build attack tools, but not good enough to get software vendors and service providers to make their software more secure.

The model of waiting to fix security problems until an attack is in the wild or for proof-of-concept code is published guarantees a window of vulnerability in the information systems that we are supposed to trust and use. It banks on the hope that a security researcher will privately send the vendor proof-of-concept code before a cracker puts the theory to work in an attack tool.

Yet, the Rogue CA presentation — originally titled Making the Theoretical Possible — showed, once again, that vendors need to act quickly on "theoretical" research. The hope that no one is willing, or no one is able, to implement an attack is not a security strategy.

This is not a new idea.

A decade and a half ago, an early hacking group known as L0pht Heavy Industries, of which I was a member, posted a quote from Microsoft, "That vulnerability is entirely theoretical," to prove the point. The quote came from an email exchange in which the L0pht reported to Microsoft one of the first buffer overflows discovered in its software. (I later found out that Microsoft, internally, called such bugs a "L0pht-type" vulnerability.) Microsoft couldn't imagine how someone could write an attack tool to take advantage of a stack overflow. No attack tool, to Microsoft, meant exploitation was entirely theoretical.

Times have very much changed at Microsoft, but for other companies that are responsible for parts of our information infrastructure, attacks still need to be demonstrated before security improvements are put in place.

Story continued on Page 2 

Chris Wysopal is co-founder and CTO of Veracode, a provider of on-demand software security testing services. Chris co-authored the password auditing tool L0phtCrack and was a researcher at the security think tank, L0pht Heavy Industries. He has held key roles at @stake and Symantec and is the author of The Art of Software Security Testing: Identifying Security Flaws.


Copyright 2010, SecurityFocus