Jon Oltsik crafted an interesting post today regarding the bifurcation of opinion on where the “intelligence” ought to sit in a networked world: baked into the routers and switches or overlaid using general-purpose compute engines that ride Moore’s curve.
I think that I’ve made it pretty clear where I stand. I submit that you should keep the network dumb, fast, reliable and resilient, and add intelligence (such as security) via flexible and extensible service layers that scale in terms of both speed and choice.
You should get to define what best-of-breed means to you and add or remove services at the speed of your business, not at the speed of an ASIC spin or of a technology acquisition that matches neither the pace and evolution of classes of threats and vulnerabilities nor the speed of an agile business.
The focal point of his post, however, was to suggest that the real issue is that all of this intelligence requires exposure to the data streams, which means that each component needs to crack the packet before processing it. Jon suggests that you ought to crack the packet once and then do interesting things to the flows. He calls this COPM (crack once, process many) and suggests that it yields efficiencies -- of what, he did not say, but I will assume he means latency and efficacy.
So, here’s my contentious point that I explain below:
Cracking the packet really doesn’t contribute much to the overall latency equation anymore thanks to high-speed hardware, but the processing sure as heck does! So whether you crack once or many times doesn’t really matter; what you do with the packet does.
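To make that concrete, here is a back-of-envelope sketch. All of the numbers are hypothetical (the service names and per-service costs are mine, for illustration only); the point is simply that if deep processing dwarfs parsing, cracking once instead of N times saves only (N-1) parses.

```python
# Back-of-envelope latency model. All numbers are hypothetical,
# chosen only to illustrate the parse-vs-process ratio.
PARSE_US = 2          # microseconds to "crack" (parse) one packet
SERVICES = {          # hypothetical per-service deep-processing cost (us)
    "firewall": 10,
    "ids": 150,
    "dlp": 300,
}

def latency(crack_once: bool) -> int:
    """Total added latency for one packet traversing all services."""
    parses = 1 if crack_once else len(SERVICES)
    return parses * PARSE_US + sum(SERVICES.values())

# Cracking once saves only (N-1) * PARSE_US out of the total.
saved = latency(crack_once=False) - latency(crack_once=True)
print(saved)  # 4 us saved out of ~462-466 us total
```

Under these (made-up) numbers, COPM shaves about 1% off the added latency; the processing term dominates either way.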
Now, on to the explanation…
I think that it’s fair to say that many of the underlying mechanics of security are commoditizing, so things like anti-virus, IDS, firewalling, etc. can be done without a lot of specialization. Leveraging prior art is quick and easy, and thus companies can broaden their product portfolios by simply adding a feature to an existing product.
Companies can do this because of the agility that software provides, not hardware. Hardware can give you economies of scale as it relates to overall speed (for certain things), but generally not flexibility.
However, software has its own Moore’s curve of sorts, and I maintain that, much like what we’re hearing @ Interop regarding CPUs, its lifecycle unfortunately does have a shelf life and a point of diminishing returns, for reasons you’re probably not thinking about... more on this from Interop later.
Jon describes the stew of security componentry and what he expects to see @ Interop this week:
I expect network intelligence to be the dominant theme at this week's Interop show in Las Vegas. It may be subtle but it's definitely there. Security companies will talk about cracking packets to identify threats, encrypt bits, or block data leakage. The WAN optimization crowd will discuss manipulating protocols and caching files. Application layer guys will crow about XML parsing, XSLT transformation, and business logic. It's all about stuffing networking gear with fat microprocessors to perform one task or another.
That’s a lot of stuff tied to a lot of competing religious beliefs about how to do it all, as Jon rightly demonstrates, and it ultimately highlights a nasty issue:
The problem now is that we are cracking packets all over the place. You can't send an e-mail, IM, or ping a router without some type of intelligent manipulation along the way.
<nod> Whether it’s in the network, bolted on via an appliance or done on the hosts, this is and will always be true. Here’s the really interesting next step:
I predict that the next big wave in this evolution will be known as COPM for "Crack once, process many." In this model, IP packets are stopped and inspected and then all kinds of security, acceleration, and application logic actions occur. Seems like a more efficient model to me.
Doing this basically requires proxy (transparent or terminating) functionality. Now, the challenge is that whilst “cracking the packets” is relatively easy and cheap even at 10G line rates thanks to hardware, the processing is really, really hard to do well across the spectrum of processing requirements if you care about things such as quality, efficacy, and latency, and it is “expensive” in all of those categories.
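The COPM/proxy idea can be sketched in a few lines: crack the raw packet once into a parsed flow object, then hand that same object to each service so nothing downstream re-parses. The wire format, service names, and detection rules below are all toy assumptions of mine, not anyone's real implementation.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Flow:
    """Result of cracking a packet once: header fields plus payload."""
    src: str
    dst: str
    payload: bytes

def parse(raw: bytes) -> Flow:
    # The one-time "crack": split a toy "src|dst|payload" wire format.
    src, dst, payload = raw.split(b"|", 2)
    return Flow(src.decode(), dst.decode(), payload)

# Loosely-coupled services all consume the same parsed Flow -- no re-parsing.
# Each returns True to let the flow pass (rules are purely illustrative).
def firewall(f: Flow) -> bool: return f.dst != "10.0.0.66"
def ids(f: Flow) -> bool:      return b"exploit" not in f.payload
def dlp(f: Flow) -> bool:      return b"SSN:" not in f.payload

def copm(raw: bytes, services: List[Callable[[Flow], bool]]) -> bool:
    flow = parse(raw)                          # crack once...
    return all(svc(flow) for svc in services)  # ...process many

print(copm(b"10.0.0.1|10.0.0.2|hello", [firewall, ids, dlp]))  # True
```

Even in this toy, the expensive part is what each service does with the flow, not the single call to `parse`.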
The intelligence of deciding what to process and how once you’ve cracked the packets is critical.
This is where embedding this stuff into the network is a lousy idea.
How can a single vendor possibly provide anything more than “good enough” security in a platform never designed to solve this sort of problem whilst simultaneously trying to balance delivery and security at line rate?
This will require a paradigm shift for the networking folks. It will mean either starting from scratch and integrating high-speed networking with general-purpose compute blades, re-purposing a chassis (like, say, a Cat65K) by stuffing it with nothing but security cards and grafting it onto the switches, or stacking appliances (big or small, single form factor or in blades) and grafting them onto the switches once again. And by the way, simply adding networking cards to a blade server isn't an effective solution, either. "Regular" applications (and especially SOA/Web 2.0 apps) aren't particularly topology-sensitive. Security "applications," on the other hand, are wholly dependent upon and integrated with the topologies into which they are plumbed.
It’s the hamster wheel of pain.
Or, you can get one of these, which offers all the competency, agility, performance, resilience and availability of a specialized networking component combined with an open, agile and flexible operating and virtualized compute architecture that scales, with parity, based on Intel chipsets and Moore’s law.
What this gives you is an ecosystem of loosely-coupled best-of-breed (BoB) security services that, once the packet is cracked, can be intelligently combined in any order and ruthlessly manipulated as the flow passes through them, governed by policy -- ultimately dependent upon deciding how and what to do to a packet/flow based upon content in context.
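One way to picture "combined in any order, governed by policy" is a policy table that selects and orders the service chain per flow based on its context. The protocols, zones, and chains below are hypothetical examples of mine, not a real product's policy language.

```python
# Hypothetical policy: which services run, and in what order, is decided
# by content in context (here, protocol plus source zone).
POLICY = {
    ("http", "internet"): ["firewall", "ids", "dlp"],
    ("http", "intranet"): ["firewall", "dlp"],
    ("dns",  "internet"): ["firewall"],
}

def chain_for(protocol: str, zone: str) -> list:
    """Return the ordered service chain for a flow, per policy."""
    # Unmatched traffic falls back to a minimal default chain.
    return POLICY.get((protocol, zone), ["firewall"])

print(chain_for("http", "internet"))  # ['firewall', 'ids', 'dlp']
```

The point of the sketch: adding, removing, or reordering services is a policy edit, not a hardware spin.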
The consolidation of best-of-breed security functionality delivered in a converged architecture yields efficiencies that are spread across the domains of scale, performance, availability and security, but also across the traditional economic scopes of CapEx and OpEx.
Cracking packets, bah! That’s so last Tuesday.