First of all, every major security company has people whose job it is to reconstruct security vulnerabilities from patches. This process is well known to the community. They have nearly always been able to produce proofs of concept or exploits the same day a patch comes out. In some cases that is because they knew about the patch or vulnerability ahead of time, but in others it is because the state of the art -- human analysis plus Zynamics BinDiff, fuzzers, and specialized debuggers such as Immunity Debugger -- is good enough to provide a quick response to patches.
This means that, right now, for vulnerabilities that are easy to turn into exploits, all vulnerable and reachable hosts can already be compromised before the automated Microsoft update process gets a chance to run. In the commercial world every patch must be tested thoroughly. Patching a large enterprise's machines within two weeks is considered lightning fast, yet that is still far slower than the exploit development cycle. Changing Microsoft Update isn't going to change this.
The difference is that now there are few new vulnerabilities that are easy to turn from proofs of concept into exploits. Microsoft is largely responsible for making Windows vulnerabilities harder to exploit: it shifted its strategy to focus on creating more secure software via the Trustworthy Computing Initiative. Turning on stack protection, heap protection, and data-execution prevention by default makes many vulnerabilities impossible to turn into exploits even with a proof of concept in hand.
Thus, the majority of vulnerabilities take much longer, perhaps months, to turn into exploits, assuming they are even reachable in the default configuration through the default-enabled firewall. The notable exception to the Trustworthy Computing Initiative's general success has been Internet Explorer, which by definition has an extremely broad level of exposure.
Linux exploits, of course, are on average even harder to write than Windows exploits. Defeating GRSecurity, Exec Shield, and modern compiler protections is not for the faint of heart or the cheap of wallet. Even doing exploit quality assurance against all the different flavors of Linux is prohibitively expensive.
Various people who do security research for a living have noted this disparity between the APEG paper's technical results and its conclusion. They often point to strange statements in the paper itself, such as its apparent misunderstanding of what constitutes an exploit and of how heap overflows are written. These sorts of misunderstandings are common among people who have never written exploits, the basic shibboleth of our community.
In the academic security environment, where there are social restrictions against writing exploits, you can find many examples of intellectual stagnation held up as valuable research. There is, of course, the canonical example of the DARPA-funded Gemini paper, which protected against all stack overflows by making them heap overflows. In a modern environment such as Windows Server 2003 with Visual Studio's /GS protections, this bizarre technique turns a very difficult-to-exploit stack overflow into a much more easily exploited heap overflow.
Like many people in the security community, I've walked into meetings with deans of computer science departments at well-respected universities who want to solve the "firewall problem" by "inventing" firewalls that automatically reboot and clear state every so often. And, of course, there are papers on the economics of vulnerability disclosure, which assume that you can develop -- with math, no less -- a social policy on vulnerability disclosure that is enforceable or somehow valid.
There's great security technology that comes from university research programs. Yet, when it comes to making broad judgments about information security, such as how Windows Update should work, academic environments often lack the context and social background to be relevant.
That's why people who write papers in LaTeX two-column format end up saying the sky has a high negative trajectory, while the rest of us wish they'd stop living in the clouds.