In the April 2007 edition of Network Computing magazine, Art Wittmann talks about server virtualization, its impact on data center consolidation, and the overall drivers and benefits virtualization offers.
What's really interesting is that while he rambles on about the benefits of power, cooling and compute-cycle reclamation, he completely befuddled me with the following statement:
"While the security threat inherent in virtualization is
real, it's also overstated."
I'll get to the meaty bits in a minute as to why I think this is an asinine comment, but first a little more background on the article.
In addition to illustrating everything wrong with the way in which IT has traditionally implemented security -- bolting it on after the fact rather than baking it in -- the article shows the recklessness of cavalierly evangelizing the adoption of a technology without an appropriate level of security and without an overall understanding of the risk such a move creates.
Wittmann manages to do this with an attitude suggesting that the speed-bump security folks and evil vendors (or, in his words, nattering nabobs of negativity) are just intent on making a mountain out of a molehill.
It seems that NWC approaches the evaluation of technology and products in terms of five areas: performance, manageability, scalability, reliability and security. He lists how virtualization has proven itself in the first four categories, but oddly sums up the fifth (security) by ranting not about the security work that should be, or has been, done, but rather about how it's all overblown and a conspiracy by security folks to sell more kit and peddle more FUD:
"That leaves security as the final question. You can bet that everyone who can make a dime on questioning the security of virtualization will be doing so; the drumbeat has started and is increasing in volume.
I think it's funny that he's intimating that we're making this stuff up. Perhaps he's only read about the theoretical security issues and not the practical ones. While things like Blue Pill are sexy and certainly add sizzle to an argument, there are some nasty security issues that are unique to the virtualized world. The drumbeat is increasing because these threats and vulnerabilities are real, and so is the risk that companies that "just do it" are going to discover firsthand. He continues:
"But while the security threat is real -- and you should be concerned about it -- it's also overstated. If you can eliminate 10 or 20 servers running outdated versions of NT in favor of a single consolidated pair of servers, the task of securing the environment should be simpler or at least no more complex. If you're considering a server consolidation project, do it. Be mindful of security, but don't be dissuaded by the nattering nabobs of negativity."
As far as I am concerned, this is irresponsible and reckless journalism and displays an ignorance of the impact that technology can have when implemented without appropriate security baked in.
Look, if we don't have security that works in non-virtualized environments, replicating the same mistakes in a virtualized world isn't just as bad, it's horrific. While it should be simpler or at least no more complex, the reality is that it is not. The risk model changes. Threat vectors multiply. New vulnerabilities surface. Controls multiply. Operational risk increases.
We end up right back where we started: with a mess that the lure of cost and time savings causes us to rush into without doing security right from the start.
Don't just do it. Understand the risk that a lack of technology, controls, process, and policies will pose to your business before you're held accountable for what Wittmann suggests you do today with reckless abandon. Your auditors certainly will.
/Hoff
Thomas and I were barking at each other about something last night, and today he left a salient and thought-provoking comment that provided a very concise, pragmatic and objective summation of the embedded vs. overlay security quagmire:
I couldn't agree more. Most of the security components today, including those that run in our little security ecosystem, really don't intercommunicate. There is no shared understanding of telemetry or instrumentation and there's certainly little or no correlation of threats, vulnerabilities, risk or disposition.
The problem is bad inasmuch as even best-of-breed solutions usually require box sprawl and stacking and don't necessarily provide for a more secure posture, especially within the context of another of Thomas' interesting posts on defense in depth/mesh...
That's changing, however. Our latest generation of NPMs (Network Processing Modules) allows discrete security ISVs' applications (which run on intelligently load-balanced Application Processor Modules -- Intel blades in the same chassis) to interact with and control the network hardware through defined APIs. This provides the first step toward that common telemetry: while application A doesn't need to know the specifics of application B, the two can functionally interact based upon the common output of disposition and/or classification of the flows between them.
Later, perhaps, they'll be able to control each other through the same set of APIs.
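To make that concrete, here's a minimal sketch of what a shared disposition vocabulary and publish/subscribe exchange between blades might look like. To be clear, this is purely illustrative: the names (Disposition, FlowVerdict, DispositionBus) are hypothetical stand-ins of mine, not the actual NPM APIs.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Callable

# Hypothetical flow verdicts shared across security applications.
class Disposition(Enum):
    ALLOW = "allow"
    BLOCK = "block"
    QUARANTINE = "quarantine"

@dataclass(frozen=True)
class FlowVerdict:
    flow_id: str            # e.g. a 5-tuple hash assigned by the switching fabric
    app: str                # which security application produced the verdict
    disposition: Disposition
    reason: str

# A toy "common telemetry" bus: apps publish verdicts without knowing
# anything about each other's internals; subscribers react to dispositions.
class DispositionBus:
    def __init__(self) -> None:
        self._subscribers: list[Callable[[FlowVerdict], None]] = []

    def subscribe(self, handler: Callable[[FlowVerdict], None]) -> None:
        self._subscribers.append(handler)

    def publish(self, verdict: FlowVerdict) -> None:
        for handler in self._subscribers:
            handler(verdict)

# Application B (say, a firewall blade) consumes application A's output
# (say, an IDS blade) purely through the shared verdict format.
def firewall_handler(verdict: FlowVerdict) -> None:
    if verdict.disposition is Disposition.QUARANTINE:
        print(f"firewall: isolating flow {verdict.flow_id} ({verdict.reason})")

bus = DispositionBus()
bus.subscribe(firewall_handler)
bus.publish(FlowVerdict("a1b2c3", "ids-blade", Disposition.QUARANTINE,
                        "signature match: SQL injection"))
```

The plumbing isn't the point; the point is that once every application speaks a common verdict vocabulary, correlation becomes an API problem rather than a bespoke integration project.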
So, I don't think we're going to solve the interoperability issue completely anytime soon; we won't go from 0 to 100% overnight. But I think that the consolidation of these functions into smaller footprints that allow for intelligent traffic classification and disposition is a good first step.
I don't expect Thomas to agree with, or even resonate with, my statements below, but I found his explanation of the problem space to be dead on. Here's my explanation of an incremental step towards solving some of the bigger classes of problems in that space, which I believe hinges first and foremost on the consolidation of security functionality.
The three options for reducing this footprint are as follows:
Option 1: Embed security in the network infrastructure itself (a single networking vendor).

Pros: Supposedly fewer boxes, better communication between components, and good coverage, given that the security functionality is in the infrastructure. One vendor from which you get both your infrastructure and your protection. Correlation across the network "fabric" will ultimately allow for near-time zoning and quarantine. A single management pane across the Enterprise for availability and security. Did I mention the platform is already there?
Cons: You rely on a single vendor's version of the truth, and you get closer to a monoculture wherein the safeguards protecting the network put at risk the very assets they seek to protect, because there is no separation of "church and state." Also, the expertise and coverage, as well as the agility of product development in the face of evolving threats, are hampered by the many moving parts in this machine. Utility vs. security? Utility wins. Good enough vs. best of breed? Probably somewhere in between.
Option 2: Consolidate multiple security functions into a single multi-function appliance.

Pros: Reduced footprint, consolidated functionality, and a single management pane across multiple security functions within the box. Usually excels in one specific area, like AV, and can add "good enough" functionality as the needs arise. Software moves up and down the scalability stack depending upon the performance needed.
Cons: You again rely on a single vendor's version of the truth. These boxes tend to want to replace the switching infrastructure. Many of these platforms utilize ASICs to accelerate certain functions, with the bulk of functionality residing in pure software with limited application- or network-level intelligence. You pay the price in terms of performance and scale, given architectures that do not easily allow for the addition of new classes of solutions to thwart new threats. And they're not really routers/switches.
Option 3: An open, blade-based platform that delivers best-of-breed security applications.

Pros: The customer defines best of breed and can rapidly add new security functionality at a speed that keeps pace with the threats the customer needs to mitigate. Utilizing a scalable, high-performance switching architecture combined with all the benefits of an open, blade-based security application/appliance delivery mechanism gives the best of all worlds: self-healing, highly resilient, high-performance and highly available, with a hardened Linux OS running load-balanced, virtualized security applications on optimized hardware.
Cons: Currently based upon proprietary (even though Intel reference design) hardware for the application processing, while also utilizing a proprietary network switching fabric and load balancing. Can only offer software as quickly as it can be adapted and tested on the platforms. No ASICs means small-packet performance (64-byte, zero-loss) isn't as high as that of ASIC-based packet-forwarding engines. No single pane of management.
I think that option #3 is a damned good start towards solving the consolidation issues whilst balancing the need to overlay synergistically with the network infrastructure. You're not locked into a single vendor's version of the truth, and although the hardware may be "proprietary," the operating system and the choice of software are not. You can choose from COTS, open source, or write your own, all on a scalable platform that is just as much a collapsed switching/routing platform as it is a consolidated blade server.
I think it has the best chance of the three of evolving to solve more classes of problems, at a rate and a level of cost-effectiveness balanced against the higher efficacy of best of breed.
This, of course, depends upon how high the level of integration is between the apps -- or at least their dispositions. We're working very, very hard on that.
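As a hypothetical illustration of that last point, extending the sketch above: the loosest level of integration is one-way (one app reacts to another's published verdicts), while deeper integration would let apps issue control requests to each other through the same API surface. Again, ControlRequest and the control channel are assumptions of mine, not shipping APIs.

```python
# Extending the hypothetical sketch above (reuses Disposition, FlowVerdict,
# and DispositionBus from the earlier snippet). Level 1 integration: share
# dispositions and let subscribers decide. Level 2 (the "later" step):
# direct control requests routed through the same bus.

@dataclass(frozen=True)
class ControlRequest:
    target_app: str         # which application should act, e.g. "fw-blade"
    flow_id: str
    action: Disposition     # e.g. ask the firewall to BLOCK the flow

def ids_escalation(bus: DispositionBus, flow_id: str) -> None:
    # Level 1: publish a verdict; the firewall handler reacts on its own.
    bus.publish(FlowVerdict(flow_id, "ids-blade", Disposition.BLOCK,
                            "repeated exploit attempts"))
    # Level 2 (assumed future API, not implemented above):
    # bus.control(ControlRequest("fw-blade", flow_id, Disposition.BLOCK))
```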
At any rate, Thomas ended with:
I like NAT. I think this is Paul Francis. The IETF has been hijacked by aliens, actually, and I'm getting a new tattoo: