Open-source software relies on the confidence we have that project leaders can detect and respond to security compromises. Here's why that needs to change.
Under no circumstances were you ever to let your perimeter get overrun. If you were forced to defend it, you fought with superhuman strength. If you thought your perimeter was going to be overrun, you called anybody and everybody you thought could help: air support, artillery, armor... anybody.
We've seen exactly this kind of exercise in the open-source world in the last few weeks, though not necessarily from the same school of training or thought. Two high-profile projects experienced compromises. They announced it to the community, essentially letting everybody know that their perimeters had been overrun, and that they needed all the help they could get.
Some people would have you believe this is monumental or out of the ordinary -- a group that distributes software experiencing a compromise, then letting everybody know about it and warning of the potential risks. Those who prance about in Penguin-embroidered cheerleader tops and yellow-and-black tutus suggest between pom-pom waves that no commercial vendor would ever be as candid.
I think that's wrong. When you get owned, somebody is going to announce it, so there's no reason for anyone -- commercial vendors included -- to try and keep it under wraps. People talk. This is our nature, and inevitably the gossip subway is going to go rumbling down the tracks, out of control, until it breaks through the surface.
Moreover, open projects are in a situation that uniquely requires immediate disclosure of a compromise. A project that does not publicly admit a compromise not only risks the integrity of the project, but also risks the trust that users put in the project. And in current form, open-source projects are built entirely on trust.
This trust in open-source generally springs from the practice of distributing the source code for applications. But users who download from the project can't be assured that the application hasn't been tampered with, unless they actually read through the source code. There's no guarantee that the source is actually the source that was intended.
Trust But Verify
The intelligence community has a maxim they live by: Trust, but verify. In a nod to this principle, many open-source projects provide some means of verifying the validity of a particular application. Usually, this comes in the form of hashes for known good packages, wrapped in a PGP-signed file and made available in a convenient location -- typically the same location from which the software can be downloaded.
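A minimal sketch of that verification loop, end to end, using GnuPG and sha256sum. Everything here is hypothetical -- the filenames, the key identity, and the choice of SHA-256 (projects of the era more often published MD5 sums); the throwaway keyring exists only so the sketch is self-contained:

```shell
set -e
# Throwaway keyring so this sketch never touches a real one.
export GNUPGHOME="$(mktemp -d)"
chmod 700 "$GNUPGHOME"
cd "$(mktemp -d)"

# --- What the project does at release time ---
echo 'pretend source tarball' > project-1.0.tar.gz
sha256sum project-1.0.tar.gz > project-1.0.tar.gz.sha256
gpg --batch --pinentry-mode loopback --passphrase '' \
    --quick-generate-key 'Release Manager <release@example.org>'
gpg --batch --pinentry-mode loopback --passphrase '' \
    --clearsign project-1.0.tar.gz.sha256

# --- What the downloader does ---
# 1. Check the signature on the digest file against the project's key.
gpg --verify project-1.0.tar.gz.sha256.asc
# 2. Check the tarball against the digest (in practice, take the digest
#    line from the verified .asc, not from an unsigned file).
sha256sum -c project-1.0.tar.gz.sha256
```

Note what this does and does not prove: the tarball matches whatever was on the project server when the digest was signed -- nothing more.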
But sadly, there's no way to be assured of the integrity of the source while it sat on the developer's system, or while it was uploaded to and packaged on the project servers. Those steps in the development process usually happen before the hash is generated. We must instead depend upon the integrity of the project servers and the developers' systems, trusting that all parties have done their job in detecting intrusions.
That means there is no way for us to determine that the files in the CVS tree were authored, in their entirety, by legitimate contributors to the project. You see, most, if not all, projects have no policy requiring irrefutable proof of authorship on source files.
This is a huge weakness, and the primary reason the open-source community shouldn't be quite so self-congratulatory in the wake of the recent intrusions. In the rush to crack the champagne and celebrate the community model working the way it was intended, the bigger issue of giving the power of verification to the user has been overlooked.
It is not as though the tools to prevent this type of attack do not exist: they not only exist, but they're freely available in the same open-source model as most of the operating systems and applications they are designed to integrate with. GnuPG is one such example.
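What per-file signing with GnuPG could look like, as a sketch only: the key identity and the source file are hypothetical, and a real project would verify against each contributor's published public key rather than a locally generated one.

```shell
set -e
# Throwaway keyring so this sketch never touches a real one.
export GNUPGHOME="$(mktemp -d)"
chmod 700 "$GNUPGHOME"
cd "$(mktemp -d)"

# Hypothetical contributor key; in practice each developer uses their own.
gpg --batch --pinentry-mode loopback --passphrase '' \
    --quick-generate-key 'Contributor <dev@example.org>'

# The contributor signs an individual source file before committing it.
cat > main.c <<'EOF'
int main(void) { return 0; }
EOF
gpg --batch --pinentry-mode loopback --passphrase '' \
    --detach-sign --armor main.c

# Anyone holding the contributor's public key can verify authorship;
# any later tampering with main.c makes this verification fail.
gpg --verify main.c.asc main.c
```

Flip a single byte in main.c and the verify step fails -- which is exactly the guarantee a CVS tree full of unsigned files cannot give.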
By omitting any means of verifying where source code came from, while relying on the hopelessly insecure delivery mechanisms most projects use, we've created a model that is completely broken. We're forced to trust the author of a particular file to upload it to the project server securely, trust the project to keep that server locked down, trust the download process, and then -- with no independent means of verifying the integrity of the file itself -- trust the file.
And when the project servers are invaded, we must trust the project maintainers to let us know whether or not we were exposed to compromised files.
Until open-source projects become cryptography-aware and start signing the files committed to CVS, and until they start delivering files through some secure means such as SSL, all we can do is trust them. We'll know that open source is finally taking security seriously when we don't have to trust, because we can verify instead.