What methods did you use to hunt bugs?
Johnathan Nightingale: It would be foolhardy for us to use a single approach (attackers are not so well-behaved) so we tend to pick up any methodology with promise and see what it can do for us. Window has been a great driver here, introducing more targeted penetration testing and more thorough in-house security reviews as well. We have found fuzzing to be extremely valuable for finding certain kinds of bugs and have developed several tools of our own, including jsfunfuzz which I mentioned earlier. Static analysis tools have been more hit-or-miss for us, but I have the sense that the value of these tools will vary a great deal from one project to the next. In our case, a lot of patterns were flagged as potential problems which turned out to be non-issues after case-by-case inspection.
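To give a feel for the fuzzing approach described above, here is a minimal sketch (illustrative only, far simpler than jsfunfuzz): randomly compose token fragments into mostly-malformed inputs, feed each one to the system under test, and record any failure that is not a clean rejection. Python's own expression parser stands in for the real target here.

```python
import random

# Illustrative grammar-based fuzzer in the spirit described in the interview
# (a hypothetical, greatly simplified example -- not jsfunfuzz itself).
FRAGMENTS = ["1", "x", "(", ")", "+", "*", "[", "]", ","]

def generate(length=8):
    """Build a random token string; most outputs will be malformed."""
    return "".join(random.choice(FRAGMENTS) for _ in range(length))

def fuzz(trials=1000):
    """Throw many random inputs at the target; collect unexpected failures."""
    crashes = []
    for _ in range(trials):
        src = generate()
        try:
            # Parse only, never execute: Python's compiler is the stand-in
            # target here.
            compile(src, "<fuzz>", "eval")
        except SyntaxError:
            pass  # a clean rejection of malformed input is the expected case
        except Exception as exc:  # anything else is a potential bug to triage
            crashes.append((src, exc))
    return crashes

if __name__ == "__main__":
    print(f"unexpected failures: {len(fuzz())}")
```

The interesting finds are the inputs that provoke something other than a clean parse error; a real fuzzer would minimize and log those cases for triage.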
One of the most valuable tools we have is our manual and automated testing frameworks. We run 55,000 automated tests in 6 different frameworks on 4 platforms at least 20 times a day. Obviously this is a huge deal in terms of avoiding feature regressions, but it also protects against security regressions, and it now feels like it would be impossible to do without. The great thing about automated tests in particular is that they are purely additive -- other than negligible time cost for test runs, the incremental cost for adding another test is just the time to write it once, but the protection it offers lasts forever. We've had single tests catch half a dozen regressions in totally unrelated pieces of code.
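The "purely additive" point can be made concrete with a small sketch. This is a hypothetical example, not one of the Mozilla frameworks mentioned above: once a regression test pins down behavior that was once buggy, it re-checks that behavior on every run at near-zero incremental cost.

```python
# Minimal regression-test sketch (hypothetical example): each test encodes a
# past bug's fixed behavior, so any later change that reintroduces the bug
# fails immediately -- even a change in a seemingly unrelated piece of code.
import unittest
from urllib.parse import urlsplit

class TestOriginParsing(unittest.TestCase):
    def test_userinfo_not_treated_as_host(self):
        # Guards against a classic spoofing pattern: everything before '@'
        # in a URL is userinfo, not the host.
        parts = urlsplit("http://attacker.example@victim.example/")
        self.assertEqual(parts.hostname, "victim.example")

    def test_scheme_is_lowercased(self):
        # Origin comparisons must not be fooled by scheme casing.
        parts = urlsplit("HTTP://victim.example/")
        self.assertEqual(parts.scheme, "http")

if __name__ == "__main__":
    unittest.main()
```

Writing the test costs a few minutes once; running it costs milliseconds forever after, which is why a large suite run twenty times a day stays affordable.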
Johnathan Nightingale: Obviously we tread carefully when we are changing the way the browser permits web pages to behave, since there is a lot of web out there, and we don't want to break it. Nevertheless we have made some small, but smart, changes to the way certain things work. We no longer execute unclosed <script> tags, for instance, because executing them would let an attacker who interrupts a page load (a simple denial of service) trigger unexpected and potentially dangerous behavior while scripts are only partially loaded. We've also tightened the same-origin policy rules around local files so that they can't walk directory trees and send arbitrary content to bad people in shady places.
One of the more exciting changes, though, is something that we're still working on, and something where we'd welcome your readers' input. Brandon Sterne, building on earlier work by Gervase Markham, is working on defining a "Site Security Policy" proposal to control things like cross-origin script loading on an opt-in basis. Giving web sites the ability to state explicitly which cross-site traffic is expected and permitted, and which to reject as rogue, would be a big step forward in the battle against XSS and CSRF attacks. It's still in the early stages and obviously in the long term we'd want to get other browsers into the process, but in the meantime, I would encourage your readers to check it out.
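As an illustration only (the Site Security Policy syntax was still being defined at the time, so the header name and directive below are assumptions, not the actual proposal), an opt-in policy of this kind could be delivered as an HTTP response header listing the script origins a site expects:

```http
HTTP/1.1 200 OK
Content-Type: text/html
X-Site-Security-Policy: script-src self https://cdn.example.com
```

A browser honoring such a policy would refuse to run script injected from any other origin, which is what blunts XSS; the opt-in design means sites that never send the header keep today's behavior unchanged.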