The security community must choose between the red pill of full disclosure and the blue pill of security through obscurity.
While the characters in The Matrix had the red pill to reveal the truth and reality, the tech world has full disclosure.
In my speech, I drew the analogy that the majority of the world's computing public chooses to ingest the blue pill. They wander through life blissfully ignorant of the potential problems associated with the high-tech environment they're so enamored with and dependent upon. These people are content to accept the truth as presented by software vendors and industry cartels as gospel. This extends to quality assessment.
In the software industry's ideal blue pill world, vendors would hear about potential problems with their products directly from users or researchers. They would also keep this knowledge to themselves: the vast majority of the populace would never become aware of such problems. In the unlikely case that vendors even acknowledged that such problems existed, they would then decide if, when, and how to address them. Consumers would be at the mercy of corporations.
Fortunately, such a world doesn't exist - or does it?
In the real world, products are marketed on claims of security and operational reliability. Anything that undermines those claims - whether true or not - threatens the public perception of the vendor's product and, subsequently, threatens profits. Vendors have a vested interest in quashing any public discussion of quality concerns that might contradict their marketing propaganda, preferably before such concerns become too widely known. Fortunately, there is a counter to the vendors' vested interests: full disclosure.
Full disclosure is a security philosophy based on the principle that the details of security vulnerabilities should be made available to everyone so that all users - not just vendors or potential intruders - can benefit from knowledge of potential weaknesses. By making the information public, the security community forces the vendor to be accountable for any vulnerabilities. If weaknesses are discovered, vendors can be pressured into providing security fixes quickly. Furthermore, because the information is common knowledge, all members of the community can take the appropriate steps to protect themselves if they are employing the software in question.
Of course, there are also arguments to be made against full disclosure, the primary one being that it puts dangerous knowledge in the hands of people who may misuse it. This is the worldview favored by advocates of security through obscurity, a creed holding that vulnerability information should be revealed only to vendors and a few security experts, on the theory that a system whose details are not publicly available will be more secure.
Those who believe in limiting the public discussion of security vulnerabilities will never increase the collective level of information security. They need to wake up from whatever cubicle-induced, dot-com delusional daydream they are having and pause for a reality check. These critics argue that the full disclosure of vulnerabilities in public forums makes it easy for such information to be "stolen" and used by "evil hax0rs" for malicious purposes. The ensuing logic is that to prevent such abuses and ensure public computing safety, any such knowledge must be kept from public view and restricted to those "in the know" - for a fee, of course. However, attempting to put artificial constraints on the movement of information will not work: it will inevitably spread. People will continue to talk among themselves and will develop any number of methods to do so outside of any "established" or "preferred" norms, much to the chagrin of those in charge. After all, console-based IRC is still around despite the popularity of commercialized Instant Messaging, isn't it? Security through obscurity doesn't diminish the existence of software vulnerabilities; it only minimizes the number of people who are aware of them - vendors and crackers.
Full-disclosure forums are one of the last examples of what initially made the Internet community special and appealing - before the corporations turned it into the commercial, pop-under, Flash-banner, spam-infested environment it is today. More importantly, such forums serve as an objective, third-party source of information free from vendor control and censorship - the cyberspace equivalent of the Consumer Product Safety Commission. These forums force vendors to acknowledge and address security issues (many of them quite serious), putting them on notice that they will continually be held accountable by a community looking out for the interests of consumers.
This is not to say that full disclosure should operate completely free of constraints. With the concept of "responsible disclosure" in mind, I propose this simple three-point policy:
- Once a bug/vulnerability/exploit is discovered, it should be reported with evidence to the vendor;
- The vendor then has the opportunity to acknowledge and address the issue in a timely fashion;
- If, after a reasonable period of time, the vendor has not acknowledged and/or addressed the matter, the information should then be released to the computing community for awareness and resolution.
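The three-point policy above can be sketched as a simple state machine. To be clear, this is an illustrative model only: the class names, the 90-day grace period, and the method names are assumptions of this sketch, not part of any formal disclosure standard.

```python
from dataclasses import dataclass
from datetime import date, timedelta
from enum import Enum, auto

class Status(Enum):
    REPORTED_TO_VENDOR = auto()   # step 1: evidence sent privately to the vendor
    VENDOR_ACKNOWLEDGED = auto()  # step 2: vendor acknowledges and works on a fix
    PUBLICLY_DISCLOSED = auto()   # step 3: vendor stayed silent; community informed

# The 90-day window is an illustrative choice of "reasonable period of time".
GRACE_PERIOD = timedelta(days=90)

@dataclass
class VulnerabilityReport:
    vendor: str
    summary: str
    reported_on: date
    status: Status = Status.REPORTED_TO_VENDOR

    def acknowledge(self) -> None:
        """Step 2: the vendor acknowledges the issue in a timely fashion."""
        if self.status is Status.REPORTED_TO_VENDOR:
            self.status = Status.VENDOR_ACKNOWLEDGED

    def review(self, today: date) -> Status:
        """Step 3: disclose publicly only if the vendor has ignored the report
        for longer than the grace period."""
        silent = self.status is Status.REPORTED_TO_VENDOR
        if silent and today - self.reported_on > GRACE_PERIOD:
            self.status = Status.PUBLICLY_DISCLOSED
        return self.status
```

For example, a report filed on January 1 that the vendor never acknowledges remains private through early February but is released once the window elapses; a vendor that calls `acknowledge()` stops the clock.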
Under this policy, the reporter does not immediately dump exploit data to the public, but does the responsible thing by giving the vendor a chance to respond and address the issue in a timely manner. If the vendor doesn't respond - which has historically been the norm - the vulnerability should then be released to the community to examine and develop possible countermeasures. In such cases, the vendor should be prepared to accept responsibility for having ignored the problem when it was first reported, rather than duck behind any number of legal shields instead of improving its product's security and software quality.
Responsible disclosure is not about establishing private vendor-only clubs where anyone with a few thousand dollars and a company name can join. Nor should it promote an environment where the mere mention of a vulnerability summons the copyright cops working on behalf of a vendor's (so-called) quality assurance team. Such actions - evidenced in recent weeks - support the misguided belief that negligence protected by secrecy is both an acceptable risk to security and an acceptable business ethic.
By trying to silence full disclosure, its opponents will not stop the exchange of vulnerability information. Rather, they will simply drive it underground where it will be available only to those who intend to use it for malicious purposes. The only ones to suffer will be the security administrators who will be forced to fend for themselves without the knowledge necessary to do so.
Now, which would you prefer, the blue pill or the red pill?