The Myspace Web worm used a simple vulnerability and XSS to propagate, and it might be a sign of things to come.
Don't get me wrong, I think that these vulnerabilities are a significant concern for any individual who doesn't patch (these people do exist), or for companies that are slow to patch. But lately, I think that people are a lot more aware of applying their security updates - or the fact that these often happen automatically now - and the pool of vulnerable systems for a worm is much smaller.
Automated worms targeting Windows vulnerabilities? That's so circa 2003. My personal belief is that malicious code trends tend to follow a three-year pattern, and since it's almost 2006, what's coming down the pipe?
I believe we saw one possible direction of worm evolution recently with Myspace. It got some press, but I don't know if we have fully appreciated the significance of what happened. As background, Myspace is a portal site that lets people maintain a profile and link to friends - essentially an online method of networking. Someone found a way to manipulate his profile so that other people would "link" to him as a friend. This manipulation was viral, and by the end the site was shut down until Myspace had fixed the vulnerability that allowed it. Meanwhile, the author, Samy, now had many, many friends.
New worm trends
This worm, which some are calling the Samy worm, works because Myspace attempted to filter out "bad" HTML and other unwanted content while still allowing people to insert their own HTML to make their profiles unique and stylish. Additionally, Myspace allows people to make links between themselves and others. The problem appears to be that some Web browsers will happily render HTML that has been misshapen and malformed enough to bypass the filtering process.
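To illustrate the kind of mismatch involved - with a hypothetical filter, not Myspace's actual code - consider a blocklist sanitizer that strips `<script>` tags in a single, non-recursive pass. A browser that tolerates the reassembled markup will still execute it:

```python
import re

def naive_filter(html: str) -> str:
    """Hypothetical blocklist filter: remove <script> and </script>
    tags in one non-recursive pass."""
    return re.sub(r'</?script>', '', html, flags=re.IGNORECASE)

# Nesting the forbidden tag inside itself survives the single pass,
# because removing the inner tags reassembles a working outer one:
payload = "<scr<script>ipt>alert('samy')</scr</script>ipt>"
print(naive_filter(payload))  # <script>alert('samy')</script>
```

The actual Samy worm reportedly used a different trick (splitting the word "javascript" across a line break inside a style attribute), but the principle is the same: the filter and the browser disagree about what counts as executable HTML.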
It's an incredibly simple error. Samy, our worm author, wasn't actually trying to write a worm; he was trying to see if he could get past Myspace's limits to make himself more popular. He wasn't embezzling money, wasn't assembling a botnet, and wasn't even trying to get systems to hit Microsoft with a denial-of-service (DoS) attack - one of the more popular uses for worms. He was just trying to make friends.
But he still wrote a worm. It spread like a virus, and you can bet that someone, somewhere is now trying to figure out how to similarly use cross-site scripting (XSS) to create a worm that embezzles money or DoSes someone.
Why is this going to be the new front in the battle of the worms? It sure seems like there is a simple solution: the Web site should simply filter the content better. Or maybe one can place the blame at the feet of the Web browsers for rendering HTML that has been manipulated and malformed - though the history of the Web partly explains why that is the case. Either way, another security practitioner's advice about "Default Permit" applies:
- One clear symptom that you've got a case of "Default Permit" is when you find yourself in an arms race with the hackers. It means that you've put yourself in a situation where what you don't know can hurt you, and you'll be doomed to playing keep ahead/catch-up.
The opposite of "Default Permit" is "Default Deny" and it is a really good idea. It takes dedication, thought, and understanding to implement a "Default Deny" policy, which is why it is so seldom done. It's not that much harder to do than "Default Permit" but you'll sleep much better at night.
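In content-filtering terms, "Default Deny" means an allowlist: enumerate the handful of tags you trust and drop everything else, attributes included. The tag set and regex below are illustrative assumptions, not a production sanitizer - a real one should use an HTML parser:

```python
import re

ALLOWED_TAGS = {"b", "i", "em", "strong", "p", "br"}  # illustrative allowlist

def default_deny_filter(html: str) -> str:
    """Re-emit only allowlisted tags, stripped of all attributes;
    anything not explicitly permitted is denied."""
    def replace(m: re.Match) -> str:
        closing, tag = m.group(1), m.group(2).lower()
        return f"<{closing}{tag}>" if tag in ALLOWED_TAGS else ""
    return re.sub(r'<\s*(/?)\s*([a-zA-Z][a-zA-Z0-9]*)[^>]*>', replace, html)

print(default_deny_filter('<b onclick="evil()">hi</b>'))  # <b>hi</b>
print(default_deny_filter('<script>alert(1)</script>'))   # alert(1)
```

Because unknown tags are denied rather than known-bad ones removed, malformed-nesting tricks have nothing to reassemble, and event-handler attributes never get through.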
So why won't better filtering simply solve this? Three factors stand out. First: complexity. Simple things are a lot less likely to have significant problems, and the Web is emerging into a platform where numerous different and complex technologies all interact. The Samy worm depended on Myspace's filtering not matching what a Web browser will render. As more technologies are developed and deployed, more complexity will result - which is to say, more vulnerabilities. In this case one of the misbehaving components was a Web browser, but future worms may not require a browser at all. Think of blog trackbacks.
Second: size of the target. Metcalfe's law suggests that a network's value grows with the square of its size. Blog tracking software, personal networking sites like Myspace, and Google-like aggregation sites are all built around creating more connections. As they continue to grow, and as the mechanisms for creating those links become more formalized, simply exchanging plain text will not be sufficient; including rich content will become standard.
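The growth is worth making concrete. A quick sketch of the pairwise-connection count behind Metcalfe's law:

```python
# Metcalfe's law in its simplest form: a network of n members has
# n*(n-1)/2 possible pairwise links, so value - and attack surface -
# grows roughly quadratically with membership.
def possible_links(n: int) -> int:
    return n * (n - 1) // 2

for n in (10, 1_000, 1_000_000):
    print(f"{n:>9} members -> {possible_links(n):,} possible links")
```

For a worm author, each of those links is a potential propagation path.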
Third: lack of direct risk to those best positioned to fix the problem. The Samy worm happened because Web browser makers see no problem in parsing and running badly formed HTML, and Myspace saw no problem in allowing some HTML for markup purposes. It wasn't until Myspace was in the midst of a worm that they "fixed" the problem. How thoroughly did they fix it? Are there other ways to bypass their filtering? They clearly see value in letting people customize their profiles, and they are willing to patch problems one by one. That sure looks like "Default Permit" to me.
Windows viruses and worms achieved notoriety because they took advantage of these same three factors. Will the same thing happen as the Web emerges into a true application platform? While only time will tell, I do believe this is going to be the case.