This entry is part of a series of postings that consider the challenges of deploying enterprise software into the cloud. I added this topic to the list because it does not fit neatly into any of my original categories. It looks at where you might deploy each part of the stack, from the end user to the core servers, and how that placement might affect the performance of your systems.
One concern that people often raise is that moving any part of an enterprise deployment into the cloud will inevitably degrade performance because of increased latency. In reality this is not always the case. Read on...
The Non-cloud Model
Think about the deployment of an enterprise system with a thin (web-based) client to a user sitting in an office in the same building as the data center.
In the conventional (non-virtualized/non-cloud) deployment model the core system server and the web application server will be positioned very close to each other. Often they will be physically in the same data center with a fat umbilical cord of fiber linking them together. So very little latency there and no real bandwidth issues either. The end user connects to the local application server from her office and it serves up the client application into her browser.
The Perceived Cloud Model
The concern comes from taking the core server and web application server out to “the cloud”. Those servers are now in some undisclosed remote location. Consider what happens when the user navigates around the repository and finds a nice fat 120MB PowerPoint presentation to download. That file needs to get from the web application server to the user’s desktop over the ether, so she has latency and bandwidth challenges up the wazoo (technical term). The 120MB presentation is going to be downloaded lock, stock and barrel to her local desktop. Only once it has all made it down (there is no byte-streaming a PPT) can she open it.
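To put rough numbers on that, here is a minimal back-of-envelope sketch; the link speeds are illustrative assumptions, not measurements from any particular deployment.

# Back-of-envelope estimate of how long the 120 MB download takes.
# The link speeds below are illustrative assumptions, not measurements.

FILE_SIZE_MB = 120

def transfer_seconds(file_size_mb, link_mbps):
    # megabytes * 8 bits per byte, divided by megabits per second
    return file_size_mb * 8 / link_mbps

for label, mbps in [("Data-center LAN (1 Gbps)", 1000),
                    ("Office WAN link (50 Mbps)", 50),
                    ("Hotel Wi-Fi (5 Mbps)", 5)]:
    print(f"{label}: ~{transfer_seconds(FILE_SIZE_MB, mbps):.0f} s")

On the LAN the file arrives in about a second; over a modest WAN link it takes tens of seconds to minutes, and that is before any latency or contention is factored in.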
It is the addition of this WAN layer into the architecture that concerns users of enterprise systems.
The Reality of the Cloud Model
Take the same business requirement: the user needs to work on her 120MB PowerPoint presentation, which is on a server in the cloud. But now add something else that the wonderful world of virtualization gives us: virtualized desktops.
With a virtualized desktop the user runs a client application to connect to an instance of a desktop that is actually running on a server in the data center. The user sees that desktop and can interact with it from her own machine, but the OS and applications are actually running on the machine in the data center. Picture a user on an iPad whose virtual (remote) desktop is running Windows.
All the user gets on her local machine is a "screen painting" of what is happening on the virtual desktop. So when she opens the 120MB PowerPoint file, it is actually transferred to the virtual desktop (which is running on a server in the data center) and opened there. The user sees the PowerPoint file open in seconds and can edit it, email it...she can do everything that she would normally, without the file ever moving across the ether to her machine.
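Here is a minimal sketch of why this feels fast, assuming a ballpark data-center LAN speed and a ballpark compressed remote-display bitrate (both figures are assumptions, not vendor numbers).

# Why the 120 MB deck opens in seconds on a virtual desktop: the file only
# crosses the data-center LAN; the WAN carries compressed screen updates.
# Both figures below are assumed ballparks, not measurements.

FILE_SIZE_MB = 120
LAN_MBPS = 1000        # assumed LAN between the file server and the virtual desktop
DISPLAY_KBPS = 500     # assumed average remote-display stream while paging slides

open_seconds = FILE_SIZE_MB * 8 / LAN_MBPS
wan_mb_per_minute = DISPLAY_KBPS * 60 / 8 / 1000

print(f"File reaches the virtual desktop in ~{open_seconds:.1f} s over the LAN")
print(f"The WAN carries ~{wan_mb_per_minute:.1f} MB of screen updates per minute of viewing")

Note that the screen-update traffic scales with viewing time rather than file size, which is the trade-off picked up in the comments below.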
Summary
So, why don’t we just virtualize all desktops? In the world of “choice computing” this might be a reality sooner than you think, but don't get me wrong, there’s a litany of pros and cons to this approach. Beyond performance there are other positive implications for security, desktop management, virus control, etc. However, it might not work for everyone. I'm sitting on a United flight to Australia right now and there's no WiFi on the flight, so I can't access a virtual desktop at all. Also, if you need very high fidelity or need to interact with files on your local machine, today’s virtual desktops might not work for you.
Weighing virtual desktops vs. local thick clients vs. conventional thin clients is going to be a balancing act for a while, but the virtualization of the client can certainly make the cloud deployment model a reality for applications where it may not have made sense in the past.
Virtualized desktops have existed for years as the Remote Desktop feature of Windows (using a client to access a server) and in many web teleconferencing systems, so the performance weaknesses are well known. In your example, while access to the first slide of the PPT is quicker than downloading everything, sending the slide as screen pixels requires much more bandwidth. Admittedly there are compression tricks that help (especially with boilerplate slides), but after viewing even a fairly small fraction of the PPT, the full download would have required less bandwidth than paging through all those pixels. That is just the visual side; the stuttering of mouse response and mishits is yet another irritation. And if the server running the virtual desktop is just a tad slow, it is even more unresponsive to the user. The trouble is, most applications today are not the old "batchy" type but highly interactive, and even small delays frustrate that interaction.
Posted by: douq millar | 03/15/2012 at 01:15 PM
Douq,
I think that it is a balancing act as usual: the bandwidth needed to download content to a local client vs. "screen scraping". The VDI technology seems to have improved and many of the screen resolution/performance issues seem to have been lessened greatly. I think the key is that VDI allows you to use non-cloud-optimized clients without making a huge reengineering investment. Imagine some of the chatty thick clients that we have today running over the WAN vs. VDI.
Posted by: Andrew Chapman | 03/19/2012 at 09:24 AM