Part 3: Prominent researchers discuss the disclosure process
SF: When you find an exploitable vulnerability, what makes you choose the type of disclosure process (if any)?
David Litchfield: I see two reasons for disclosure. The first is when a vendor is not taking responsibility for their product and either will not issue a patch or is wasting time in delivering one. I'm all for giving a vendor as much time as they need to fix a flaw, provided they are making a best-endeavors effort to do so. If, however, the vendor is wasting time and unnecessarily leaving their customers exposed to risk, then disclosure should be threatened once all else fails. The second reason for disclosure is education. It's important that others can see where mistakes are being made so they can avoid the same issues. As there is no real pressure to disclose for educational reasons, I and NGS put a three-month post-patch embargo on the details of the flaws we find. This gives time for the patch to be installed before the details are made public.
H D Moore: It depends on how the vulnerability was found, who else knows about it, what the impact of the vulnerability is, and whether the vendor has a history of being responsive. The vendor is notified in nearly every case, with the exceptions being low-risk denial of service or information disclosure flaws. I usually wait until I have working exploit code before notifying the vendor, since an exploit makes it easier for the vendor to reproduce the flaw and prioritize patch development.
If another researcher was involved in the discovery of the bug, the disclosure process also depends on their personal views and the policies of their employer. A great example of this is the recent Windows Mailslot vulnerability (MS06-035). Pedram Amini and I found this issue while working on a related problem, and his employer (TippingPoint) determined the disclosure process and timeline. I agreed to wait a period of time after the patch was available before releasing the exploit code.
For bugs I find on my own, the disclosure timeline depends on the impact of the vulnerability and the response from the vendor. While investigating the recent RASMAN vulnerabilities (MS06-025), I stumbled across another flaw that was not patched and resulted in a crash of the svchost.exe instance that hosts other critical services. This bug requires valid authentication credentials and, in the most severe case, results in a forced reboot. Microsoft was notified and decided it would not be addressed until the next major service pack. An exploit module capable of triggering a crash in svchost.exe has been included in the 3.0 Beta 1 of the [Metasploit] Framework.
There have been a few cases where the vendor silently patched a flaw before I had a chance to finish the exploit code or report it. The QuickTime application shipped with Mac OS X contained a format string flaw that could be triggered through the Safari web browser. It was possible to trigger an arbitrary memory write by redirecting the user to a quicktime:// URL containing format specifiers. I considered the bug a low priority and never finished the exploit or notified Apple. Sometime in the last couple of months, the bug was silently patched in a security update.
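The bug class Moore describes here can be sketched in a few lines of C. This is an illustrative example of an uncontrolled format string, not QuickTime's actual code; the function names are hypothetical:

```c
#include <stdio.h>

/* BUG: untrusted input used directly as the format string. Input
 * containing %n makes printf write the running byte count through a
 * pointer it pulls off the stack -- the arbitrary-write primitive
 * described above. */
void log_unsafe(const char *untrusted) {
    printf(untrusted);
}

/* FIX: a constant format string; the untrusted data is only printed. */
void log_safe(const char *untrusted) {
    printf("%s", untrusted);
}

/* Controlled demonstration of what %n does: it stores the number of
 * bytes emitted so far into the int its matching argument points to.
 * Here the format is a safe literal and the pointer is our own. */
int bytes_written_by_format(void) {
    int written = 0;
    printf("AAAA%n", &written);
    return written;  /* four 'A' bytes were printed */
}
```

In an attack, the %n target pointer is not supplied by the programmer but scavenged from attacker-influenced stack contents, which is what turns a logging bug into a memory write.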
Michal Zalewski: I do believe in disclosure, particularly when the issue at hand is in some way interesting, unique, or may contribute to our understanding of software security. It's good to share. Because of this, I almost always disclose my research, with the exception of issues too trivial and too limited to warrant any public attention. I'm glad I can afford this luxury.
I'm also a believer in full and swift disclosure. This is not to say I want to arm cyber-hooligans - that's unnecessary: they seem to do better and better despite creeping limits and delays in vulnerability disclosure. Some experts even argue this is a good thing: kids' annoying and disruptive pranks have the side effect of raising awareness and improving defenses, to the point where a global disaster is unlikely; were it not for what we have learned since 1995 or so, a lone malicious fellow with an agenda could bring our economy to a grinding halt. Around that year, the infrastructure was vulnerable to so many awfully obvious flaws (look at the exploits back then!). In 1995 we didn't depend on the Internet that much, of course, but would vendors have improved the security of their products at all without ten years of full disclosure? Or would every single service still go down in flames after receiving a single command with more than 255 bytes of text?
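The 255-byte failure Zalewski alludes to is the classic unbounded-copy overflow. A minimal sketch in C, with hypothetical names and an illustrative buffer size rather than any specific service's code:

```c
#include <string.h>

#define CMD_MAX 255  /* illustrative fixed command-buffer size */

/* BUG: strcpy copies until the NUL terminator, so any command longer
 * than CMD_MAX bytes overruns buf and tramples adjacent stack data,
 * including the saved return address. */
void handle_command_unsafe(const char *cmd) {
    char buf[CMD_MAX + 1];
    strcpy(buf, cmd);
    /* ... parse buf ... */
}

/* FIX: a bounded copy that can never write past the destination.
 * Oversized commands are truncated instead of smashing the stack. */
size_t handle_command_safe(const char *cmd, char *out, size_t outsz) {
    size_t n = strlen(cmd);
    if (n >= outsz)
        n = outsz - 1;
    memcpy(out, cmd, n);
    out[n] = '\0';
    return n;  /* bytes actually stored */
}
```

A 300-byte command passed to the safe version is clipped to 255 bytes; passed to the unsafe version, the same input writes 45 bytes past the end of the buffer, which is exactly the pre-1995 failure mode described above.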
In any case, what I want to achieve is to give customers and open-source security vendors an equal chance. They need information, even if certain vendors have led us to believe otherwise.
I see no harm in giving the vendor a fair chance to comment on the problem (this is often beneficial for me, as well) and provide a fix to any immediately exploitable vulnerability. This is why I usually give the vendor an advance notice, and am willing to wait for a week or two if they're responsive and friendly. The most important exception is dealing with a vendor who's either speaking or acting against disclosure of security flaws altogether, or is routinely abusing the "grace period" granted by researchers to unreasonably delay fixing of reported problems.
It is argued that such early disclosure renders users vulnerable to attack, but in reality they were vulnerable all along - only now they know about it, the security software developers know about it, and all the concerned parties can push the vendor to implement proactive security measures and cooperate with the infosec community.
I do not claim this to be a morally superior way of handling disclosure; it's simply how I do it and what I believe in, and I sure hope my kids will live in a society where it is acceptable to openly discuss the failure modes of the technologies we depend on.