It seems like vendors are intentionally blurring the lines between virtualized environments (on- and off-premise), managed services, hosting, and anything else that vaguely resembles a vaporous environment. I’m not surprised; no one argues that Cloud Computing doesn’t bring a huge amount of value to the enterprise…but defining what it actually is remains confusing, very confusing. So I set out to build a relationship between all the different deployment topologies and then find out what the general consensus is for the name of each node.
In the diagram below, the blue boxes describe each element of the topology, and the red/green boxes show what the end result is typically called.
Your job? As you hear news (like Apple’s rumored move to Cloud streaming of video) or hear about a cloud service, see where it fits into the diagram. If it doesn’t, work out where the tree splits and send me the new nodes to publish. If you do the work, I’ll take the credit…OK?
My Conclusion? I think only the green nodes are real Cloud Computing, because I want to believe that the future of the Cloud is about elasticity, not just virtualization. I want my processes to consume zero resources when I’m not using them, yet have access to massive amounts of computing power on demand.
Note: This is an attempt at aggregating other people’s definitions into a single structure, not necessarily “my opinion”.
[Updated 15-Apr-2011: Got feedback on the private cloud and added it to the off-premise nodes, as the formal definition is a “cloud infrastructure that is operated solely for a single organization,” which may be managed by the organization or a third party and may exist on-premise or off-premise.]
Virtualization (cloud) is often compared to the electrical power grid, but there are important differences.
With dedicated resources, providers know their costs and customers know their SLA – both are essentially fixed over the short term.
The whole point of virtualized hosting is load sharing: not every customer is using their maximum resources at the same time, so the provider can build out to some “average,” which presumably costs less than dedicated capacity. The provider spends less money for the same number of customers, so it can charge less (sales advantage) and earn higher margins: win-win, who wouldn’t want that?
Now, virtualization does carry some cost premium for the same delivered resource, and the provider has to keep some headroom for customers peaking their loads at the same time, so the provider’s savings may be less than pure load averaging suggests, especially if the provider must commit to an SLA. The operator’s cost savings may not add up to as significant a difference as it first seems, and customers may see degraded service. (The sketch below puts illustrative numbers on this.)
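Here is a minimal simulation sketch of that load-averaging argument. All the figures – customer count, burst probability, headroom factor – are illustrative assumptions, not numbers from the post:

```python
import random

# Illustrative sketch of the load-averaging argument (all numbers are
# made-up assumptions). Dedicated hosting must provision every customer
# for their individual peak; a shared (virtualized) pool only needs
# capacity for the peak of the *aggregate* load, plus SLA headroom.

random.seed(42)
CUSTOMERS = 100
SAMPLES = 10_000          # simulated time intervals
PEAK = 10.0               # units of capacity each customer may burst to

dedicated_capacity = CUSTOMERS * PEAK   # everyone sized for their own peak

aggregate_loads = []
for _ in range(SAMPLES):
    # Each customer's instantaneous load: mostly near-idle, occasional bursts.
    total = sum(PEAK if random.random() < 0.2 else PEAK * 0.1
                for _ in range(CUSTOMERS))
    aggregate_loads.append(total)

# Provider provisions the shared pool for the worst aggregate seen,
# plus headroom for coincident peaks (the SLA cushion mentioned above).
HEADROOM = 1.15
shared_capacity = max(aggregate_loads) * HEADROOM

print(f"dedicated: {dedicated_capacity:.0f} units")
print(f"shared:    {shared_capacity:.0f} units "
      f"({shared_capacity / dedicated_capacity:.0%} of dedicated)")
```

Note how the headroom factor and the burst probability drive the result: as customers’ peaks become more correlated, the aggregate peak climbs toward the dedicated figure and the savings evaporate – which is exactly the caveat above.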
But here’s where the analogy to the electrical grid breaks down. Electricity production economics is based on both capex and opex (fuel, labor, maintenance, etc.), and the opex is very high. On summer afternoons in California, businesses still have lights and AC going full blast while workers head home and turn on their own lights, appliances, and AC – loads peak. What do utilities do? Brownout: not very popular. Standby capacity that can be switched on quickly: better. Standby has the same capex as baseline (actually usually more) but much lower average opex, so meeting peak demand is not so hard for electrical grids (as long as Enron is not involved).
But any concept of “standby” in computing is vague, since costs are almost entirely capex. What savings are there in “turning off” some vBlock? Not much. And, AFAIK, grid-style load sharing across service providers doesn’t exist, so a provider can’t do what California does and buy electricity from Nebraska.
So inevitably there is a collision between SLA and costs. In some time periods (month-end?) load will rise for all customers at once. And computing often has the nasty characteristic that pushing beyond some optimal load actually decreases performance (because congestion itself imposes a load): brownout time. So as a service provider, where do I set my peak capacity? And then what cost savings are left with which to offer better prices and still earn better margins? One approach is sketched below: derive the capacity point directly from the SLA.
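A hedged sketch of that idea, with made-up demand figures and a deliberately crude month-end spike model: simulate (or measure) aggregate demand and provision at the percentile the SLA promises.

```python
import random

# Hypothetical sketch: pick capacity from an SLA target instead of guessing.
# If the SLA promises full service 99.9% of the time, size the pool at the
# 99.9th percentile of simulated aggregate demand (all figures are invented).

random.seed(7)
SAMPLES = 100_000
SLA_TARGET = 0.999  # fraction of intervals that must be brownout-free

demand = []
for i in range(SAMPLES):
    load = random.gauss(280, 36)       # baseline aggregate demand
    if i % 30 == 0:                    # crude stand-in for correlated peaks
        load *= 1.5                    # month-end: everyone peaks together
    demand.append(load)

demand.sort()
capacity = demand[int(SLA_TARGET * SAMPLES)]   # SLA-driven capacity point

brownouts = sum(1 for d in demand if d > capacity)
print(f"capacity for {SLA_TARGET:.1%} SLA: {capacity:.0f} units")
print(f"expected brownout rate: {brownouts / SAMPLES:.2%}")
```

The correlated spike is what hurts: a tighter SLA forces capacity out to the tail of the month-end distribution, and that extra capex is exactly the margin squeeze described above.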
So I think the airlines are a better model. They overbook, hoping not everyone shows up; then they either bribe some customers not to fly or they delay everyone (a brownout). Airlines don’t usually have spare planes and crews they can turn on at peak. So does a service provider offer some second-class tier where the customer gets bumped when loads get too high? Maybe. As a customer, can I live with that to get a little cost savings? The overbooking arithmetic below shows what that trade looks like.
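The airline trade is easy to put numbers on. Here is a short sketch of the classic overbooking arithmetic (seat count, tickets sold, and show-up probability are all invented for illustration); the same binomial reasoning would tell a provider how often a second-class customer gets bumped:

```python
from math import comb

# Worked overbooking arithmetic (illustrative numbers, not airline data):
# sell more seats than the plane holds and compute the chance of bumping.

SEATS = 100        # physical capacity
SOLD = 107         # tickets sold (overbooked)
P_SHOW = 0.92      # probability each ticket holder shows up

def prob_show(k):
    """Probability exactly k of the SOLD passengers show up (binomial)."""
    return comb(SOLD, k) * P_SHOW**k * (1 - P_SHOW)**(SOLD - k)

# Someone gets bumped whenever more than SEATS passengers show up.
p_bump = sum(prob_show(k) for k in range(SEATS + 1, SOLD + 1))
expected_bumped = sum((k - SEATS) * prob_show(k)
                      for k in range(SEATS + 1, SOLD + 1))

print(f"chance anyone is bumped:    {p_bump:.1%}")
print(f"expected bumped per flight: {expected_bumped:.2f}")
```

The provider’s version of the question is the same: how much can I oversell the pool before the bump rate exceeds what the discount tier will tolerate?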
So the power utilities aren’t perfect, but frankly they do better load management than the airlines. Which model will virtualized server provision end up following?
Posted by: Douq Millar | 04/22/2011 at 12:19 PM
I want to trademark “Peak++” because IMHO this, not cost savings, is the real win of virtualization for service providers.
Instead of overbooking (airlines) or balanced booking (most businesses) of resources, providers deliberately underbook, keeping ample reserve capacity to offer Peak++ – something that is difficult to do with dedicated hosting. Economy customers get brownouts; first class gets all-they-can-eat.
Roughly the same average cost as dedicated, but now with a premium service offering as the provider’s competitive advantage. A rough sketch of the policy follows.
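Here is what such a two-tier admission policy might look like. The Pool class, the thresholds, and all the numbers are hypothetical illustrations of the Peak++ idea, not any real product:

```python
# Sketch of "Peak++" as a two-tier admission policy (illustrative values):
# the pool is deliberately underbooked, first-class requests are always
# admitted, and economy requests are shed once utilization crosses a
# threshold, so premium customers never see the brownout.

POOL_CAPACITY = 1000          # deliberately underbooked pool (units)
ECONOMY_CUTOFF = 0.80         # shed economy load above 80% utilization

class Pool:
    def __init__(self, capacity):
        self.capacity = capacity
        self.in_use = 0.0

    def admit(self, units, first_class):
        """Admit a request; economy is refused once the pool runs hot."""
        projected = (self.in_use + units) / self.capacity
        if not first_class and projected > ECONOMY_CUTOFF:
            return False                 # economy brownout
        if projected > 1.0:
            return False                 # hard limit, even for first class
        self.in_use += units
        return True

pool = Pool(POOL_CAPACITY)
print(pool.admit(700, first_class=False))   # True  - pool is cold
print(pool.admit(200, first_class=False))   # False - would exceed 80%
print(pool.admit(200, first_class=True))    # True  - Peak++ headroom
```

The design point is that the gap between the economy cutoff and full capacity is exactly the reserved Peak++ headroom: first class can always burst into it, and that reserve is what the premium tier is paying for.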
Today most providers are in this mode anyway, since they’ve built out anticipating demand and haven’t yet hit steady state (and are probably losing money betting on future growth). But five or ten years from now this startup headroom will be gone and steady state will apply. So what does virtual service look like a decade from now?
Posted by: Douq Millar | 04/22/2011 at 12:35 PM
This post shows the distinction between some of the popular hosting topologies and elucidates their practical application while drawing a comparison. I appreciated the way you have distinguished the hosting services.
Posted by: managed it support services | 01/21/2013 at 06:33 AM