Time to Shield Researchers
Oliver Day, 2009-03-20

Research is the backbone of the security industry, but the legal climate has become so hostile that researchers now have to worry about injunctions, FBI visits, and even arrest.


Corporations have threatened security researchers under nearly every area of law.

IOActive's Chris Paget was silenced by HID Corporation after he found a flaw in the RFID technology used in its cards; the company claimed that Paget would violate its patent by disclosing the technical plans for a card-cloning device. Researcher Mike Lynn was threatened under trade-secret law by Cisco, which claimed that Lynn's work exposed critical aspects of its internal operating system that would give competitors an unfair advantage. And Nmap creator Fyodor was silenced through contract law when MySpace contacted his DNS registrar, irate that Fyodor had published compromised MySpace passwords on Seclists.org.

You would think that companies would learn. Strong-arm attempts to stifle disclosure have consistently embarrassed the companies involved and can turn the rest of the hacker community against them. In Cisco's case, hackers retaliated on Lynn's behalf by compromising Cisco's main customer service website, likely exposing the passwords of registered users. The real danger is not just the embarrassment of future attacks: suppressing research is almost guaranteed to drive interest underground, while public efforts are stifled by the threat of lawsuits. That is the worst of all possible situations.

These actions by companies are a detriment to the public good, since the market imposes no significant punishment on companies that ship vulnerable software. While some have proposed holding companies liable for bugs in their software, that seems increasingly unlikely.

Instead, we need to strengthen the protections given to security researchers so they can continue their work without fear of legal action.

In the most recent case to make headlines, Alexander Sotirov and his colleagues developed a cryptographic attack that underscored weaknesses in the certificate authority system, which still relied on MD5-signed site certificates. It was a significant find with potentially disastrous consequences in the wrong hands, especially if coupled with cache poisoning such as the DNS flaw found by Dan Kaminsky: an attacker could impersonate any bank or online retailer, steal identities, inject JavaScript malware, and mount other serious attacks.
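To make the exposure concrete, here is a minimal sketch (not from Sotirov's work) of how a client might flag the MD5-signed certificates that attacks of this kind exploit. It assumes Python with the third-party "cryptography" package installed; the host name is purely illustrative.

    # Sketch: inspect which hash algorithm a site's certificate was signed with.
    # Assumes the "cryptography" package is installed; host name is illustrative.
    import socket
    import ssl

    from cryptography import x509

    def signature_hash_of(host: str, port: int = 443) -> str:
        """Fetch a server's leaf certificate and return its signature hash name."""
        # Verification is disabled only so we can inspect even bad chains.
        ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
        ctx.check_hostname = False
        ctx.verify_mode = ssl.CERT_NONE
        with socket.create_connection((host, port)) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                der = tls.getpeercert(binary_form=True)
        cert = x509.load_der_x509_certificate(der)
        return cert.signature_hash_algorithm.name  # e.g. "md5", "sha1", "sha256"

    if __name__ == "__main__":
        algo = signature_hash_of("www.example.com")  # hypothetical target
        if algo == "md5":
            print("WARNING: MD5-signed certificate; collision attacks apply")
        else:
            print("Certificate signature hash: " + algo)

A certificate whose CA signature relies on MD5 can be forged via a collision, which is exactly why the researchers' finding mattered even to sites that had done nothing wrong themselves.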

Despite the severity of the vulnerability, the researchers were less worried about attackers discovering it than about being sued by certificate authorities embarrassed by the flaw. To head off the problem, they worked with the Electronic Frontier Foundation and got Microsoft and the Mozilla Foundation to sign non-disclosure agreements.

The disclosure debate is well known by now. The arguments haven't changed, but the marketplace has. A serious Windows exploit is worth $10,000 or more. That isn't black-market money that has to be laundered; it's legitimate income, taxed like the dollars everyone else in the country earns. And whatever a bug fetches on the open grey market, it is reasonable to assume it would command a higher price on the black market.

However, finding legitimate bugs typically does not pay well. A rock-star researcher who can churn out three or four critical Windows bugs might earn around $30,000 to $40,000 from clearinghouse firms like TippingPoint. That is not much money for work that demands an extremely high level of skill and enormous amounts of personal time at the keyboard. Even a researcher who cashed in on other bounties as well would have a hard time earning as much as someone in an entry-level IT position.

When vulnerability research began to take off in the 1990s, researchers traded vulnerabilities for glory. As researchers banded together to form companies, the currency became market awareness and publicity that could drive product sales. Since then, monetizing vulnerabilities has become far easier thanks to online criminal gangs, and when companies push security researchers away, they push them toward that ecosystem. Microsoft learned this lesson long ago and works with researchers: it credits them in its advisories and responds more favorably when someone reports knowledge of a critical bug in its software.




Oliver Day is a researcher at the Berkman Center for Internet and Society, where he is focused on the StopBadware project. He was formerly a security consultant at @stake and eEye Digital Security. He has also been a staunch advocate of the disclosure process and of legal shielding for security researchers.