Today I read James Urquhart's first blog post written under the Cisco banner, titled "The network: the final frontier for cloud computing," in which he describes the evolving role of "the network" in virtualized and cloud computing environments.
The gist of his post, which he backs up with examples from Greg Ness' series on Infrastructure 2.0, is that in order to harness the benefits of virtualization and cloud computing, we must automate; from the endpoint to the underlying platforms -- including the network -- manual processes need to be replaced by automated capabilities:
When was the last time you thought “network” when you heard
“cloud computing”? How often have you found yourself working out
exactly how you can best utilize network resources in your cloud
applications? Probably never, as to date the network hasn’t registered
on most peoples’ cloud radars.
This is understandable, of course, as the early cloud efforts try to
push the entire concept of the network into a simple “bandwidth”
bucket. However, is it right? Should the network just play dumb and
let all of the intelligence originate at the endpoints?
...
The writing is on the wall. The next frontier to get explored in
depth in the cloud world will be the network, and what the network can
do to make cloud computing and virtualization easier for you and your
organization
If you walked away from James' blog as I did initially, you might be left with the impression that this isn't really about "the network" gaining additional functionality or innovative capabilities, but rather just tarting up the ability to integrate with virtualization platforms and automate it all.
Doesn't really sound all that sexy, does it? Well, it's really not, which is why even today in non-virtualized environments we don't have very good automation and most processes still come down to Bob at the helpdesk. Virtualization and cloud are simply giving IT a swift kick in the ass to ensure we get a move on to extract as much efficiency and remove as much cost from IT as possible.
Don't be fooled by the simplicity of James' post, however, because there's a huge moose lurking under the table instead of on top of it, and it goes to the fundamental crux of the battle brewing between all those parties interested in becoming your next "datacenter OS" provider.
There exists one catalytic element that produces very divergent perspectives in IT around what gets automated, where, why, how, and by whom, and that's the very definition of "the network" in virtualized and cloud models.
How someone might describe "the network" as either just a "bandwidth bucket" of feeds and speeds or an "intelligent, aware, sentient platform for service delivery" depends upon whether you're really talking about "the network" as a subset or a superset of "the infrastructure" at large.
Greg argues that core network services such as IP address management, DNS, DHCP, etc. are part of the infrastructure, and I agree, but given what we see today, I would say they are decidedly NOT a component of "the network" -- they're generally separate and run atop the plumbing. There's interaction, for sure, but one generally relies upon these third-party service functions to deliver service. In fact, that's exactly the sort of thing that Greg's company, Infoblox, sells.
This contributes to the definitional quandary.
Now we have a new virtualization layer injected between the network and the rest of the infrastructure. It provides a true lever and a frictionless capability for some of this automation, but it further confuses the definition of "the network," since so much of the movement and delivery of information now happens at this layer, and that layer isn't integrated with the traditional hardware-based network.*
See what I mean in this post titled "The Network Is the Computer...(Is the Network, Is the Computer...)"
This is exactly why you see Cisco's investment in bringing technologies such as VN-Link and the Nexus 1000V virtual switch to virtualized environments; it homogenizes "the network." It claws back the access layer so that network teams can manage the network again (and "automate" it), while also getting Cisco's hooks deeper into the virtualization layer itself.
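To make the "claw back the access layer" idea concrete, here's a minimal sketch of the concept: the network team defines policy once as a port profile, and the profile is bound to a VM's virtual NIC rather than to a physical switch port, so the policy follows the VM wherever it moves. The names below (PortProfile, VirtualAccessLayer) are illustrative Python stand-ins, not a real Cisco or VMware API.

```python
# Hypothetical sketch of policy-follows-the-VM at the virtual access layer.
# These classes are illustrative stand-ins, not an actual vendor API.

from dataclasses import dataclass, field

@dataclass
class PortProfile:
    """Network policy owned by the network team (VLAN, QoS class, etc.)."""
    name: str
    vlan: int
    qos_class: str = "best-effort"

@dataclass
class VirtualAccessLayer:
    """Stands in for a distributed virtual switch living in the hypervisor."""
    profiles: dict = field(default_factory=dict)
    attachments: dict = field(default_factory=dict)  # vm name -> profile name

    def define_profile(self, profile: PortProfile) -> None:
        self.profiles[profile.name] = profile

    def attach_vm(self, vm_name: str, profile_name: str) -> None:
        # Policy is keyed to the VM, not to a physical port, so it travels
        # with the VM during live migration -- no helpdesk ticket required.
        self.attachments[vm_name] = profile_name

if __name__ == "__main__":
    dvs = VirtualAccessLayer()
    dvs.define_profile(PortProfile(name="web-tier", vlan=100, qos_class="gold"))
    dvs.attach_vm("web-01", "web-tier")
    print(dvs.attachments)  # {'web-01': 'web-tier'}
```

The point of the sketch is simply that once the access layer is expressed this way, it becomes something the network team can own and automate, instead of something buried inside the server/virtualization silo.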
And that's where this gets interesting to me because in order to truly automate virtualized and cloud computing environments, this means one of three things as it relates to where core/critical infrastructure services live:
- They will continue to be separate as stand-alone applications/appliances or bundled atop an OS
- They become absorbed by "the (traditional) network" and extend into the virtualization layer
- They get delivered as part of the virtualization layer
So if you're like most folks and run Microsoft-based "core network services" for things (at least internally) like DNS, DHCP, etc., what does this mean to you? Well, either you continue as-is via option #1, you transition to integrated services in "the network" via option #2, or you end up with option #3 by virtue of the fact that you'll upgrade to Windows Server 2008 and Hyper-V anyway.
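To make option #1 concrete, here's a rough, hypothetical sketch of the glue code you end up writing when IPAM/DNS/DHCP stay stand-alone and your automation has to stitch them to the virtualization layer itself. The IPAMClient, DNSClient, and Hypervisor classes are illustrative stand-ins, not an actual Infoblox, Microsoft, or VMware API.

```python
# Hypothetical sketch of option #1: core network services stay stand-alone and
# a provisioning workflow glues them to the virtualization layer.
# None of these classes represent a real vendor API.

import ipaddress

class IPAMClient:
    """Pretend IP address management service handing out free addresses."""
    def __init__(self, subnet: str = "10.0.100.0/24"):
        self._pool = iter(ipaddress.ip_network(subnet).hosts())

    def next_free_address(self) -> str:
        return str(next(self._pool))

class DNSClient:
    """Pretend DNS service where we register A records."""
    def __init__(self):
        self.zone = {}

    def add_a_record(self, fqdn: str, ip: str) -> None:
        self.zone[fqdn] = ip

class Hypervisor:
    """Pretend virtualization layer that actually creates the VM."""
    def create_vm(self, name: str, ip: str, vlan: int) -> None:
        print(f"creating {name} on VLAN {vlan} with address {ip}")

def provision(name: str, vlan: int, ipam: IPAMClient,
              dns: DNSClient, hv: Hypervisor) -> str:
    """One automated workflow instead of three tickets to Bob at the helpdesk."""
    ip = ipam.next_free_address()
    dns.add_a_record(f"{name}.example.internal", ip)
    hv.create_vm(name, ip, vlan)
    return ip

if __name__ == "__main__":
    provision("web-01", 100, IPAMClient(), DNSClient(), Hypervisor())
```

Options #2 and #3 essentially collapse the IPAM/DNS pieces into "the network" or the virtualization layer itself, so provisioning becomes a single integrated operation rather than orchestration across three separate silos.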
So, this means that the level of integration between, say, Cisco and Microsoft will have to become as strong as it is with VMware in order to support the delivery of these services as a "network" function; otherwise the network will continue -- in those environments at least -- to be a "bandwidth bucket" providing an environment that isn't really automated.
In order to hit the sweet spot here, Cisco (and other network providers) will then need to start offering core network services as part of "the network." This means wrestling them away from the integrated OS solutions or simply buying their way in by acquiring and then integrating these services ($10 says Cisco buys Infoblox...).
We also see emerging vendors such as Arista Networks entering the grid/utility/cloud computing network market with high-density, high-throughput, lower-cost "cloud networking" switches that are (at least initially) more about bandwidth bucketing and high-speed interconnects than about integrated and virtualized core services. We'll see how the extensibility of Arista's EOS affects this strategy in the long term.
There *is* another option, and that's where third-party automation, provisioning, and governance suites come in, hoping to tame this integration wild west by knitting together the patchwork of solutions.
What's old is new again.
/Hoff
*It should be noted, however, that not all things can or should be virtualized, so physical non-virtualized components pose another interesting challenge, because automating 99% of a complex process isn't a win if the last 1% is a gating function that requires human interaction... you haven't solved the problem, you've just reduced it to fewer steps that still require Bob at the helpdesk.