Over the last five years or so, with the pronounced emergence of inexpensive multi-core COTS compute platforms and the rabid evangelism of virtualization, I have debated many times with folks who continue to suggest that "big iron" -- high-performance equipment in the compute, network, security and storage domains -- is now "extinct" and that nobody buys bespoke equipment anymore.
Many of them argued "All you need is a COTS PC, package up OS/Applications and voila! Instant appliance! Ready for the edge, ready for the core."
Not surprisingly, many of the networking/security/application delivery companies that these folks worked for ultimately introduced custom-engineered hardware solutions, melding their software with custom hardware and COTS elements to produce a piece of big iron of their own...
About a year ago, I wrote a post on this topic titled "All your COTS multicore CPU's with non-optimized security software are belong to us," in which I raised some very interesting points regarding Moore's Law and the hubris surrounding the gluttony of compute power offered by higher-density chipsets without the corresponding software architecture to take advantage of it.
Update: I forgot that I wrote a really good post (this March!) on this same topic titled "I Love the Smell Of Big Iron In the Morning..."
This is a little tickler to that post.
I come from the land of servicing large enterprises, service providers, municipalities and nation-states, not the SME.
While there are certainly exceptions to the "rule," and it's reasonable to suggest that my perspective is skewed, I've always been careful to frame my discussions this way; debating/contrasting the architectural slants of an SME with those of a Fortune 10 doesn't really move the discussion along any.
So, sticking with the large enterprise theme, there are two interesting, divergent themes emerging: the centralization of compute and storage on the one hand, and the increasingly distributed nature of connectivity and information on the other.
Without muddying the water too much about how these scenarios are not all that mutually exclusive, let's stick with the "centralization" theme for a moment.
The mainstream adoption of virtualization as an enabler brings us full circle, back to the centralized mainframe model* of compute, networking and storage. Now that reasonably reliable, high-speed, low-latency connectivity is available, centralization of resources makes sense: people can generally get access to the assets they require, and the performance getting from point A to point B is for the most part acceptable (and getting more so).
Once again, the natural consolidation of functionality and platform architecture follows suit -- we're back to using big iron to deliver the balance of cost, resilience, performance and simplified management that goes along with squeezing more from less.
To wit, let's examine some recent virtualization-inspired, big iron plays:
Compute:
HP recently introduced the ProLiant BL495c G5, the first blade designed to be a virtual-machine-intensive host in a data center rack, along with other virtualization products to enhance the offerings of VMware and Citrix Systems.
As system administrators achieve savings by consolidating 8-10 virtual machines per server, HP is saying its new blade will host up to 32 virtual machines, based on each virtual machine needing a minimum of four Gigabytes of memory. When 16 of the blades are stacked in an HP C7000 enclosure, a total of 512 virtual machines can be run from a single rack, said Jim Ganthier, director of HP BladeSystem, its blade server division.
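It's worth a quick sanity check of that arithmetic; here's a back-of-envelope sketch using only the figures quoted above:

```python
# Back-of-envelope check of the quoted HP figures; simple arithmetic,
# not a benchmark.
vms_per_blade = 32            # HP's claim for the BL495c G5
gb_per_vm = 4                 # quoted minimum memory per VM
blades_per_enclosure = 16     # blades per C7000 enclosure

ram_per_blade_gb = vms_per_blade * gb_per_vm              # 128 GB per blade
vms_per_enclosure = vms_per_blade * blades_per_enclosure  # 512 VMs

print(f"{ram_per_blade_gb} GB of RAM per blade, "
      f"{vms_per_enclosure} VMs per enclosure")
```

The 512-VM figure checks out, and it implies each blade must carry at least 128 GB of RAM to hit the stated density.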
Network & Storage:
The Cisco Nexus 7000 Series is the flagship member of the Cisco Nexus Family, the first in a new data center class of switching products. The Nexus 7000 is a highly scalable modular platform that delivers up to 15 terabits per second of switching capacity in a single chassis, supporting up to 512 10-gigabits-per-second (Gbps) Ethernet ports and future delivery of 40- and 100-Gbps Ethernet. Its unified fabric architecture combines Ethernet and storage capabilities into a single platform, designed to provide all servers with access to all network and storage resources. This enables data center consolidation and virtualization. Key components of the unified fabric architecture include unified I/O interfaces and Fibre Channel over Ethernet support to be delivered in the future.
Security:
The Crossbeam X-Series is a carrier-class modular security services switch. The X80 platform is Crossbeam's flagship high-end security services platform for complete network security. The X80 provides up to 40 Gigabit Ethernet ports and 8 x 10 Gigabit Ethernet ports, or up to 64 Fast Ethernet ports, and up to 40 Gbps of full-duplex firewall throughput. Each network processor module features an integrated 16-core MIPS-64 security processor, a high-speed network processor and a custom-designed switch fabric FPGA. The X80 supports up to 10 application processor modules, which are based on Intel technology running a hardened version of Linux, supporting best-of-breed security software from leading vendors in highly resilient, load-balanced configurations.
These are just a few examples.
Each of these solutions delivers the benefits of hefty amounts of virtualized service, but what you'll notice is that they do so at massive scale, offering consolidated services in high-density "big iron" configurations for scale, resilience, manageability and performance.
In this period of the technology maturity cycle, we'll see this trend continue until we hit a plateau or an inflection point -- whether it's triggered by throughput, power, heat, latency, density or whatnot. Then we'll start anew and squeeze the balloon once more, perhaps given what I hinted at above with clusters of clouds that define an amorphous hive of virtualized big iron.
But for now, until service levels, reliability, coherence and governance are sorted, I reckon we'll see more big iron flexing its muscle in the data center.
What about you? Are you seeing the return of big iron in your large enterprise? Or perhaps it never left?
I for one welcome my new blinking dark overlord...
/Hoff
* There's even a resurgence of the mainframe itself lately. See IBM's z10 and Unisys' ClearPath, for example.
I couldn't agree more. Most of the security components today, including those that run in our little security ecosystem, really don't intercommunicate. There is no shared understanding of telemetry or instrumentation and there's certainly little or no correlation of threats, vulnerabilities, risk or disposition.
The problem is bad inasmuch as even best-of-breed solutions usually require box sprawl and stacking and don't necessarily provide for a more secure posture, especially within the context of another of Thomas' interesting posts on defense in depth/mesh...
That's changing, however. Our latest generation of NPMs (Network Processing Modules) allows discrete security ISVs (which run on intelligently load-balanced Application Processor Modules -- Intel blades in the same chassis) to interact with and control the network hardware through defined APIs. This provides the first step toward that common telemetry: while application A doesn't need to know about the specifics of application B, they can functionally interact based upon the common output of disposition and/or classification of the flows between them.
Later, they'll perhaps be able to control each other through the same set of APIs.
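To make the pattern concrete, here's a minimal sketch of what shared flow disposition between two security applications might look like. Everything in it is hypothetical -- the names, the bus and the calls are mine, not Crossbeam's actual NPM API -- it only illustrates one application consuming another's verdict without knowing its internals:

```python
# Hypothetical illustration only -- not the actual NPM API. Python 3.10+.
from dataclasses import dataclass
from enum import Enum

class Disposition(Enum):
    ALLOW = "allow"
    DROP = "drop"
    INSPECT = "inspect"   # hand the flow off for deeper analysis

@dataclass(frozen=True)
class FlowKey:
    src: str
    dst: str
    dport: int

class DispositionBus:
    """Shared telemetry: apps publish per-flow verdicts, others consume them."""
    def __init__(self) -> None:
        self._verdicts: dict[FlowKey, Disposition] = {}

    def publish(self, flow: FlowKey, verdict: Disposition) -> None:
        self._verdicts[flow] = verdict

    def lookup(self, flow: FlowKey) -> Disposition | None:
        return self._verdicts.get(flow)

# Application A (say, a firewall) classifies a flow...
bus = DispositionBus()
flow = FlowKey(src="10.0.0.5", dst="192.168.1.20", dport=443)
bus.publish(flow, Disposition.INSPECT)

# ...and application B (say, an IPS) acts on the shared verdict without
# knowing anything about application A's internals.
if bus.lookup(flow) is Disposition.INSPECT:
    print("IPS: pulling flow for deep inspection")
```

The design point is that the shared vocabulary is the disposition itself, not either application's internal state.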
So, I don't think we're going to solve the interoperability issue completely anytime soon -- we won't go from 0 to 100% overnight -- but I think that the consolidation of these functions into smaller footprints that allow for intelligent traffic classification and disposition is a good first step.
I don't expect Thomas to agree with, or even resonate with, my statements below, but I found his explanation of the problem space to be dead on. Here's my explanation of an incremental step toward solving some of the bigger classes of problems in that space, which I believe hinges on consolidation of security functionality first and foremost.
The three options for reducing this footprint are as follows:
Option 1: Embed the security functionality in the network infrastructure itself.

Pros: Supposedly fewer boxes, better communication between components and good coverage, given that the security stuff is in the infrastructure. One vendor from which you get your infrastructure and your protection. Correlation across the network "fabric" will ultimately allow for near-time zoning and quarantine. A single management pane across the enterprise for availability and security. Did I mention the platform is already there?

Cons: You rely on a single vendor's version of the truth, and you get closer to a monoculture wherein the safeguards protecting the network put at risk the very assets they seek to protect, because there is no separation of "church and state." Also, the expertise and coverage, as well as the agility for product development based upon evolving threats, are hampered by the many moving parts in this machine. Utility vs. security? Utility wins. Good enough vs. best of breed? Probably somewhere in between.
Option 2: Consolidate security functions into a proprietary multi-function (UTM) appliance.

Pros: Reduced footprint, consolidated functionality and a single management pane across multiple security functions within the box. Usually excels in one specific area, like AV, and can add "good enough" functionality as the needs arise. Software moves up and down the scalability stack depending upon the performance needed.

Cons: You again rely on a single vendor's version of the truth. These boxes tend to want to replace switching infrastructure. Many of these platforms utilize ASICs to accelerate certain functions, with the bulk of functionality residing in pure software with limited application- or network-level intelligence. You pay the price in terms of performance and scale, given the architectures of these boxes, which do not easily allow for the addition of new classes of solutions to thwart new threats. Not really routers/switches.
Option 3: Consolidate best-of-breed security applications onto an open, blade-based security services switch.

Pros: The customer defines best of breed and can rapidly add new security functionality at a speed that keeps pace with the threats the customer needs to mitigate. Utilizing a scalable, high-performance switching architecture combined with all the benefits of an open, blade-based security application/appliance delivery mechanism gives the best of all worlds: self-healing, highly resilient, high-performance and highly available, while utilizing a hardened Linux OS across load-balanced, virtualized security applications running on optimized hardware.

Cons: Currently based upon proprietary (even though Intel reference design) hardware for the application processing, while also utilizing a proprietary network switching fabric and load balancing. Can only offer software as quickly as it can be adapted and tested on the platforms. No ASICs means small-packet performance at 64-byte zero loss isn't as high as that of ASIC-based packet-forwarding engines. No single pane of management.
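For a sense of why 64-byte zero-loss is the brutal case, here's a quick back-of-envelope calculation using standard Ethernet framing math (generic arithmetic, not a vendor benchmark):

```python
# Minimum-size frames maximize the packets-per-second a forwarding engine
# must sustain. Each 64-byte frame costs 20 extra bytes on the wire:
# an 8-byte preamble plus a 12-byte inter-frame gap.
MIN_FRAME = 64       # minimum Ethernet frame size, bytes
WIRE_OVERHEAD = 20   # preamble (8) + inter-frame gap (12), bytes

def line_rate_pps(gbps: float) -> float:
    """Packets/sec needed to saturate a link with minimum-size frames."""
    bits_per_frame = (MIN_FRAME + WIRE_OVERHEAD) * 8   # 672 bits on the wire
    return gbps * 1e9 / bits_per_frame

print(f"10 GbE:  {line_rate_pps(10):,.0f} pps")   # ~14.9 million pps per port
print(f"40 Gbps: {line_rate_pps(40):,.0f} pps")   # ~59.5 million pps
```

Sustaining tens of millions of forwarding decisions per second in general-purpose software is exactly where purpose-built ASIC pipelines still earn their keep.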
I think that option #3 is a damned good start toward solving the consolidation issues whilst balancing the need to overlay synergistically with the network infrastructure. You're not locked into a single vendor's version of the truth, and although the hardware may be "proprietary," the operating system and choice of software are not. You can choose from COTS, open source, or write your own, all on a scalable platform that is just as much a collapsed switching/routing platform as it is a consolidated blade server.
I think it has the best chance of evolving to solve more classes of problems than the other two, at a rate and a level of cost-effectiveness balanced with the higher efficacy that best of breed brings.
This, of course, depends upon how high the level of integration is between the apps -- or at least their dispositions. We're working very, very hard on that.
At any rate, Thomas ended with:
I like NAT. I think this is Paul Francis. The IETF has been hijacked by aliens, actually, and I'm getting a new tattoo: