SecurityFocus, 2000-04-17
Is Open Source really more secure than closed? Elias Levy says there's a little security in obscurity.
Advocates derive their dogmatic faith in the implicit security of Open Source code from the concept of "peer review," a cornerstone of the scientific process in which published papers and theories are scrutinized by experts other than the authors. The more peers that review the work, the less likely it is to contain errors, and the more likely it is to become accepted.
Open Source apostles believe that releasing the source code for a piece of software subjects it to the same kind of peer review as a quantum physics theory published in a scientific journal. Other programmers, the theory goes, will review the code for security vulnerabilities, reveal and fix them, and thus the number of new vulnerabilities introduced and discovered in the software will decrease over time when compared to similar closed source software.
It's a nice theory, and in the ideal Open Source world, it would even be true. But in the real world, there are a variety of factors that affect how secure Open Source Software really is.
If Open Source were the panacea some think it is, then every security hole described, fixed and announced to the public would come from people analyzing the source code for security vulnerabilities, such as the folks at OpenBSD, the Linux Auditing Project, or the developers or users of the application.
But there have been plenty of security vulnerabilities in Open Source Software that were discovered, not by peer review, but by black hats. Some security holes aren't discovered by the good guys until an attacker's tools are found on a compromised site, network traffic captured during an intrusion turns up signs of the exploit, or knowledge of the bug finally bubbles up from the underground.
Why is this? When the security company Trusted Information Systems (TIS) began making the source code of their Gauntlet firewall available to their customers many years ago, they believed that their clients would check for themselves how secure the product was. What they found instead was that very few people outside of TIS ever sent in feedback, bug reports or vulnerabilities. Nobody, it seems, is reading the source.
The fact is, most open source users run the software, but don't personally read the code. They just assume that someone else will do the auditing for them, and too often, it's the bad guys.
In the scientific world, peer review works because the people doing the reviewing possess technical caliber and authority on the subject matter comparable to, or greater than, the author's.
It is generally true that the more people reviewing a piece of code, the less likely it is the code will have a security flaw. But a single well-trained reviewer who understands security and what the code is trying to accomplish will be more effective than a hundred people who just recently learned how to program.
Old versions of the Sendmail mail transport agent implemented a DEBUG SMTP command that allowed the connecting user to specify a set of commands instead of an email address to receive the message. This was one of the vulnerabilities exploited by the notorious Morris Internet worm.
Sendmail is one of the oldest examples of open source software, yet this vulnerability, and many others, lay unfixed for a long time. For years Sendmail was plagued by security problems, because this monolithic program was very large, complicated, and understood by only a few.
Vulnerabilities can be a lot more subtle than the Sendmail DEBUG command. How many people really understand the ins and outs of a kernel-based NFS server? Are we sure it's not leaking file handles in some instances? Ssh 1.2.27 is over seventy-one thousand lines of code (client and server). Are we sure a subtle flaw isn't weakening its key strength to only 40 bits?
All the benefits of source code peer review are irrelevant if you cannot be certain that a given binary application is the result of the reviewed source code.
Ken Thompson made this very clear in his 1984 Turing Award lecture, "Reflections on Trusting Trust."
Thompson modified the UNIX C compiler to recognize when the login program was being compiled, and to insert a back door in the resulting binary code such that it would allow him to login as any user using a "magic" password.
Anyone reviewing the compiler source code could have found the back door, except that Thompson then modified the compiler so that whenever it compiled itself, it would insert both the code that inserts the login back door, as well as code that modifies the compiler. With this new binary he removed the modifications he had made and recompiled again.
He now had a trojaned compiler and clean source code. Anyone using his compiler to compile either the login program or the compiler would propagate his back doors.
The reason his attack worked is because the compiler has a bootstrapping problem. You need a compiler to compile the compiler. You must obtain a binary copy of the compiler before you can use it to translate the compiler source code into a binary. There was no guarantee that the binary compiler you were using was really related to its published source code.
Most applications do not have this bootstrapping problem. But how many users of open source software compile all of their applications from source?
A great number of open source users install precompiled software distributions such as those from RedHat or Debian from CD-ROMs or FTP sites without thinking twice whether the binary applications have any real relationship to their source code.
While some of the binaries are cryptographically signed to verify the identity of the packager, they make no other guarantees. Until the day comes when a trusted distributor of binary open source software can issue a strong cryptographic guarantee that a particular binary is the result of a given source, any security expectations one may have about the source can't be transferred to the binary.
Whatever potential Open Source has to make it easy for the good guys to proactively find security vulnerabilities also extends to the bad guys.
It is true that a black hat can find vulnerabilities in a binary-only application, and that they can attempt to steal the application's source code from its closed source developer. But in the same amount of time, they can audit ten different open source applications for vulnerabilities. A bad guy who can operate a hex editor can probably manage to grep source code for 'strcpy'.
Security through obscurity is not something you should depend on, but it can be an effective deterrent if the attacker can find an easier target.
So does all this mean Open Source Software is no better than closed source software when it comes to security vulnerabilities? No. Open Source Software certainly does have the potential to be more secure than its closed source counterpart.
But make no mistake, simply being open source is no guarantee of security.