Open-source or closed-source, it's the same issue. Using other people's software has a lot to do with trust. If you don't trust the right people, you're putting yourself at risk.
There's a pretty good chance you'll never run into someone at an ATM who tries to steal your bank card and PIN. The opposite is true of banking online - you'd be hard-pressed to find an Internet banker who hasn't been exposed to at least one phishing attack, and we've all received phishing attempts in our email. Online, your security can be compromised at any time, and attacks are now commonplace. That's why you have to be really careful about whom you trust with your information security. It pays to be paranoid.
On to the mundane: it's worth realizing that you also trust a lot of anonymous people on any given day. If you buy lunch, you trust that the person accepting your credit card isn't going to steal your information, and that the credit card company will protect you if they do. You trust that the food you get is safe to eat. You might trust a perfect stranger to give you accurate directions. Day to day, you trust pretty much everyone you interact with to one extent or another.
So, outside of your computer, you trust a lot of people to do a lot of things. And most of these interactions don't pose any significant amount of risk to you, so you don't worry about them.
In the digital world, the risks are far more widespread and prevalent.
We start with a quick look at the recent Sony debacle from the perspective of trust. In this case, a big-name corporation intentionally and covertly installed a rootkit (and really, that was just the beginning of the ordeal) on an estimated million or more Windows machines worldwide. If you trusted Sony - and, more specifically, trusted their software to run on your computer because it came from Sony - your security was compromised. Most importantly, before this event occurred and was made public, anyone suggesting that a big company like Sony would do something like this would have sounded like a conspiracy theorist.
I-O Data recently shipped some portable hard drives that were infected with a Windows backdoor. Whether this was the result of poor security on a development network, or an intentional ploy by a developer, we will probably never know.
There are many more examples. In the past, even big-name companies like Microsoft have shipped CDs infected with the WM/Wazzu.A macro virus and hosted infected documents on their Web site. This doesn't happen very often, but it's still a concern.
Problems with other people's security aren't limited to closed-source software, however. There have been several high-profile (and much more insidious) incidents associated with open source projects too. Way back in 2002, several open source projects, including OpenSSH, Sendmail, and tcpdump, were secretly modified to include a backdoor. Some of these modifications went unnoticed by the public, and by the projects' own developers, for anywhere from a day to nearly a week. These were targeted attacks, and for that short duration, they were successful.
These open-source compromises were later made public, of course, so that anyone who may have been exposed to the tainted software could initiate incident response. But why haven't we heard about any similar, targeted attacks against big companies making closed-source software? Maybe it hasn't happened. Or maybe it has happened, and the ensuing public relations nightmare was avoided by keeping quiet about the incident. That's a scary thought.
A comfortable level of trust - some guidelines
Ultimately, we have to put our trust in other people's software, be it closed or open source, independently or commercially developed. Deciding whom to trust is always left up to you and your organization. Not only do we have to trust people to write secure code in the first place, but we also have to trust that they're developing it in a secure environment. And we need to trust the developers themselves not to infect or contaminate the code intentionally.
For me personally, many factors go into deciding whose software to trust. For starters, I like to know that the people developing the software are security-conscious, talented developers. This is a really difficult thing to gauge, though; only through use of a product and exposure to the project's developers (and, ideally, the source code) can you get a solid understanding of it. The fact that I can talk to the NetBSD developers just as easily as anyone else through e-mail makes me comfortable. Reading over their public discussions and having anonymous CVS access to the entire operating system lets me get acquainted with their development process and their code, which I do look at. But I might be the exception. Let's admit it: most people don't read the source code.
How do you evaluate a project or software package if you don't (or can't) look at the code?
A project with a dedicated security team and published contact information, plus a history of well-written and prompt security advisories, is a good sign - though this isn't always feasible for smaller projects. Many of the BSD-based operating systems and Linux distributions publish detailed, formal security advisories, which is a pretty good indication that they take security seriously. Even though a large portion of Linux users receive a packaged kernel from a specific distribution, I'd still like to see the Linux kernel developers maintain their own central point of contact for security and release their own security advisories.
A solid track record is also important. Consider one extreme: qmail. When choosing mail server software, no one is going to question the security of qmail. It has an almost entirely bug-free history and is one of those projects that is synonymous with security. It's popular, it has been audited by some very talented people, and no one has ever found a show-stopper security vulnerability in it. At the other extreme, there are plenty of examples - too many to mention.
The types of vulnerabilities being discovered in a product can also be indicative of its quality. When software is audited, simple bugs tend to be found first, so the discovery of only obscure and circumstantial vulnerabilities may indicate that past auditing has been thorough, and that the software is devoid of the basic, embarrassing bugs that plague poorly written code. Conversely, if a project has a history of elementary buffer overflow or format string vulnerabilities, that's a pretty good indication that the underlying code wasn't written with security in mind.
Peer review of code additions and modifications is also a great sign. While reading through a recent change log for the Linux kernel, one thing that really caught my eye was that every committed patch listed three or four people who had "signed off" on it. This means you don't have to worry that patches are being committed without formal review by other people. A developer would need some really crafty code to slip in a vulnerability unnoticed.
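For illustration, a kernel changelog entry of the kind described might look like the fragment below. The names and subject line are invented for this example; the "Signed-off-by" and "Acked-by" trailers, however, are the kernel's real convention for recording who wrote, reviewed, and forwarded a patch.

```
[PATCH] example driver: validate length of user-supplied buffer

Check the length of data copied from userspace before use.

Signed-off-by: Alice Developer <alice@example.org>
Signed-off-by: Bob Maintainer <bob@example.org>
Acked-by: Carol Reviewer <carol@example.org>
```

Each trailer is an accountable, named assertion about the patch's origin, which is what makes quietly slipping in a malicious change so much harder.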
As you can see, evaluating trust is a pretty abstract process. Whom can you trust to develop your software? The answer is specific to each corporation and individual that asks the question. What's important, though, is that we spend the time to figure out who we're trusting, why we're trusting them, and what we're trusting them to do - and that we do our research and learn as much as we can about the software we use.