Martin McKeay took exception to some interesting Microsoft research suggesting that the same methodologies and tactics used by malicious software such as worms and viruses could also be used as an effective distributed defense against them:
Microsoft researchers are hoping to use "information epidemics" to distribute software patches more efficiently.
Milan Vojnović and colleagues from Microsoft Research in Cambridge, UK, want to make useful pieces of information such as software updates behave more like computer worms: spreading between computers instead of being downloaded from central servers.
The research may also help defend against malicious types of worm, the researchers say.
Software worms spread by self-replicating. After infecting one computer they probe others to find new hosts. Most existing worms randomly probe computers when looking for new hosts to infect, but that is inefficient, says Vojnović, because they waste time exploring groups or "subnets" of computers that contain few uninfected hosts.
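A quick aside: to make that inefficiency concrete, here's a toy sketch of uniform random probing in Python. The subnet layout and host counts are entirely made up for illustration; they're mine, not the researchers', and this isn't the paper's model.

```python
import random

# Toy model: a handful of subnets, each with 256 addresses but wildly
# different numbers of susceptible (uninfected, vulnerable) hosts.
# These numbers are invented for illustration; they are not from the paper.
SUBNETS = {
    "10.0.1.0/24": 200,   # a dense pocket of susceptible hosts
    "10.0.2.0/24": 5,
    "10.0.3.0/24": 0,     # nothing to find here, but random probing keeps trying
    "10.0.4.0/24": 1,
}
ADDRS_PER_SUBNET = 256

def random_probing(target_infections: int, seed: int = 1) -> int:
    """Count how many probes a uniformly random scanner needs to reach
    target_infections, ignoring which subnets are actually fruitful."""
    rng = random.Random(seed)
    remaining = dict(SUBNETS)                  # susceptible hosts left in each subnet
    probes = infected = 0
    while infected < target_infections:
        subnet = rng.choice(list(remaining))   # pick any subnet, uniformly
        probes += 1
        # A probe hits a susceptible host with probability remaining / subnet size.
        if rng.random() < remaining[subnet] / ADDRS_PER_SUBNET:
            remaining[subnet] -= 1
            infected += 1
    return probes

if __name__ == "__main__":
    print("probes needed (random):", random_probing(target_infections=150))
```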
Despite the really cool moniker (information epidemic), this isn't a particularly novel distribution approach; in fact, we've seen malware do this. However, it is interesting to see that an OS vendor (Microsoft) is continuing to actively engage in research to explore this approach despite the opinions of others who simply claim it's a bad idea. I'm not convinced either way, however.
I, for one, am all for resilient computing environments that are aware of their vulnerabilities and can actively defend against them. I will be interested to see how this new paper builds on work previously produced on the subject and its corresponding criticism.
Vojnović's team have designed smarter strategies that can exploit the way some subnets provide richer pickings than others.
The ideal approach uses prior knowledge of the way uninfected computers are spread across different subnets. A worm with that information can focus its attention on the most fruitful subnets – infecting a given proportion of a network using the smallest possible number of probes.
But although prior knowledge could be available in some cases – a company distributing a patch after a previous worm attack, for example – usually such perfect information will not be available. So the researchers have also developed strategies that mean the worms can learn from experience.
In the best of these, a worm starts by randomly contacting potential new hosts. After finding one, it uses a more targeted approach, contacting only other computers in the same subnet. If the worm finds plenty of uninfected hosts there, it keeps spreading in that subnet, but if not, it changes tack.
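The article stops short of pseudocode, but the learn-as-you-go behavior it describes maps roughly onto something like the sketch below. The "give up after a few consecutive misses" rule is my own stand-in for whatever switching criterion the researchers actually use; treat it as illustrative, not as their algorithm.

```python
import random

# Same toy layout as the sketch above: 256-address subnets with very uneven
# numbers of susceptible hosts. Numbers are invented for illustration only.
SUBNETS = {
    "10.0.1.0/24": 200,
    "10.0.2.0/24": 5,
    "10.0.3.0/24": 0,
    "10.0.4.0/24": 1,
}
ADDRS_PER_SUBNET = 256

def adaptive_probing(target_infections: int, miss_limit: int = 5, seed: int = 1) -> int:
    """Probe randomly until a host is found, then focus on that host's subnet
    until it stops paying off (miss_limit consecutive misses), then change tack."""
    rng = random.Random(seed)
    remaining = dict(SUBNETS)      # susceptible hosts left in each subnet
    probes = infected = 0
    focus = None                   # subnet currently being exploited, if any
    misses = 0
    while infected < target_infections:
        subnet = focus if focus else rng.choice(list(remaining))
        probes += 1
        # A probe hits a susceptible host with probability remaining / subnet size.
        if rng.random() < remaining[subnet] / ADDRS_PER_SUBNET:
            remaining[subnet] -= 1
            infected += 1
            focus, misses = subnet, 0          # fruitful: keep spreading here
        else:
            misses += 1
            if focus and misses >= miss_limit:
                focus, misses = None, 0        # thin pickings: back to random contact
    return probes

if __name__ == "__main__":
    print("probes needed (adaptive):", adaptive_probing(target_infections=150))
```

Run against the same made-up layout as the random version above, the targeted strategy should reach the same infection count with far fewer probes, which is the whole point of exploiting dense subnets.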
That being the case, here's some of Martin's heartburn:
But the problem is, if both beneficial and malign software show the same basic behavior patterns, how do you differentiate between the two? And what's to stop the worm from being mutated once it's started, since bad guys will be able to capture the worms and possibly subvert their programs?
The article isn't clear on how the worms will secure their network, but I don't believe this is the best way to solve the problem that's being expressed. The problem being solved here appears to be one of network traffic spikes caused by the download of patches. We already have widely used protocols that solve this problem: BitTorrent and other P2P programs. So why create a potentially hazardous situation using worms when a better solution already exists? Yes, torrents can be subverted too, but those are problems we're a lot closer to solving than what's being suggested.
I don’t want something that’s viral infecting my computer, whether it’s for my benefit or not. The behavior isn’t something to be encouraged. Maybe there’s a whole lot more to the paper, which hasn’t been released yet, but I’m not comfortable with the basic idea being suggested. Worm wars are not the way to secure the network.
I think some of the points Martin raises are valid, but I also think he's reacting mostly out of fear of the word 'worm.' What if we called it "distributed autonomic shielding" instead? ;)
Some features and functions of our defensive portfolio are going to need to become more self-organizing, autonomic, and intelligent, and that goes for the distribution of intelligence and disposition, too. If we're not going to advocate being offensive, then we should at least be offensively defensive. This is potentially one way of doing that.
Interestingly, this dovetails into some discussions we've had recently with Andy Jaquith and Amrit Williams; the notions of herds and biotic propagation and response are really quite fascinating. See my post titled "Thinning the Herd & Chlorinating the Gene Pool."
I've left out most of the juicy bits of the story so you should go read it and churn on some of the very interesting points raised as part of the discussion.
/Hoff
Update: Schneier thinks this is a lousy idea. That doesn't move me one way or the other, but I think this is cementing my opinion that had the author not used the word 'worm' in his analogy, the idea might not have been dismissed so quickly...
Also, Wismer, via a comment on Martin's blog, pointed to an interesting read from Vesselin Bontchev titled "Are 'Good' Computer Viruses Still a Bad Idea?"
Update #2: See the comments section for why I think the use case argued by Schneier et al. is, um, slightly missing the point. Strangely enough, check out the Network World article that just popped up, which says: "'This was not the primary scenario targeted for this research,' according to a statement."
Duh.