An article that recently sparked my interest was Harry Labana’s piece on Brian Madden’s blog, entitled “Searching for a new way to enable non-persistent VDI. Can we leapfrog traditional PC management?”. It suddenly took me back to early 2011, when I wrote an article entitled “Floating Pools are the way to go….”.
In his article, Labana tries to find a better way to achieve non-persistent VDI for a broader user base, one that leapfrogs current physical desktop management practices.
I think it’s great that we have multiple and distinctive ways to do image and application management. However, in my opinion this is very 2011. The truth is that this whole image management thing has been leapfrogged by other technologies.
As times and technology have changed, I think it’s appropriate to revise the approach to the challenge of enabling non-persistent desktops. What follows is my high-level viewpoint on this challenge in the current environment.
Everything started with Full Clones – In the beginning everything was a Full Clone. The more fortunate VDI early adopters were able to host desktops on enterprise SAN/NAS in the same way that many vSphere customers had been hosting server workloads. The SAN approach provided more performance and capacity than running VDI on direct-attached storage. The drawback with Full Clones was the cost of traditional storage architectures to deliver the capacity and performance required by VDI workloads.
Next came Linked Clones – Due to the high storage-cost adoption barrier, the industry started looking at different ways to reduce capacity-related storage costs and simplify image management. Linked Clones allowed organizations to define a single master image that would serve all virtual desktops, along with individual delta files that grow over time as they store guest OS write operations. After a defined period of time, which could be anything from hours to months, the desktop was refreshed to its pristine state, recovering the storage capacity used by the delta files.
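To put rough numbers on that capacity difference, here is a simple back-of-the-envelope sketch in Python. The pool size, master image size, delta growth rate and refresh interval are made-up figures purely for illustration, not measurements from any real deployment.

```python
# Hypothetical figures for illustration only.
pool_size = 1000                  # desktops in the pool
master_gb = 25                    # size of the master (replica) image
delta_growth_gb_per_day = 0.15    # delta file growth per desktop per day
refresh_interval_days = 7         # desktops are refreshed weekly

# Worst-case Linked Clone capacity just before a refresh:
# one replica plus all the accumulated delta files.
deltas_gb = pool_size * delta_growth_gb_per_day * refresh_interval_days
linked_clone_gb = master_gb + deltas_gb

# Fully provisioned (thick, no de-duplication) equivalent for comparison.
full_clone_gb = pool_size * master_gb

print(f"Linked Clones, pre-refresh: {linked_clone_gb:,.0f} GB")
print(f"Full Clones, thick and un-deduplicated: {full_clone_gb:,.0f} GB")
# A refresh resets the delta files, reclaiming the delta capacity above.
```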
Because Linked Clones were supposed to be refreshed constantly, either resetting desktops to their pristine state or replacing the base image, user profile management became even more critical for administrators implementing VDI.
At the same time, the industry started discussing ways to preserve user-installed applications across these refresh cycles.
Then came Advanced Tiering – Clearly Linked Clones solved much of the capacity issue, but performance remained a costly adoption barrier for VDI. The more performance was required, the more disk spindles and storage cache were needed, making traditional storage solutions very expensive.
With the rise of SSD and Flash technology came Advanced Tiering. Advanced Tiering allowed administrators to dedicate a single small SSD tier to the Linked Clone parent (replica) VMs, providing much-improved performance with lower latency.
At that point in time, managing Linked Clones with non-persistent assignments and image management made a lot of sense. However, this is not 2011 anymore, and things have evolved drastically, allowing administrators to take different approaches.
Storage technologies have evolved considerably in the last couple of years, and we now have available, at very reasonable cost, a range of flash-based arrays with data de-duplication, offline de-duplication and data compression. There are also in-memory solutions that offer similar benefits.
My claim in this article is that non-persistent desktops, Linked Clones and alternative image and app management solutions are no longer a requirement for the large majority of use cases, since we can now utilize fully provisioned persistent clones without the capacity and performance penalties. Look at Pure Storage, XtremIO and Atlantis.
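As a rough illustration of why capacity stops being the blocker for fully provisioned clones on these arrays, here is another simple sketch. The per-desktop footprint and the data-reduction ratio are assumptions for the sake of the example; real ratios vary by workload and vendor.

```python
# Hypothetical figures for illustration only.
pool_size = 1000
per_desktop_gb = 30     # OS, applications and data actually written per desktop
reduction_ratio = 8.0   # assumed combined de-duplication + compression ratio

logical_gb = pool_size * per_desktop_gb
physical_gb = logical_gb / reduction_ratio

print(f"Logical capacity for {pool_size} full clones: {logical_gb:,.0f} GB")
print(f"Physical capacity after {reduction_ratio:.0f}:1 data reduction: {physical_gb:,.0f} GB")
```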
In addition to the above, administrators can now use the same tools they are already familiar with to manage the Windows environment, without incurring additional learning and licensing costs, and without managing VDI in an entirely new and different way from physical devices.
If for any reason the use case translates into desktops being constantly refreshed, this can also be achieved while using fully provisioned clones. Additionally, despite what many think, a fully provisioned desktop doesn’t translate into a full-size desktop right after its creation. By default, all full clones are thin-provisioned and will grow over time, much like Linked Clones.
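If you want to check this for yourself, one way is to inspect the disk backing of a clone through the vSphere API. Below is a minimal sketch using pyVmomi; the vCenter address, credentials and VM name are placeholders, and it assumes you have a reachable vCenter with the pyVmomi package installed.

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Lab-only certificate handling; use proper certificate validation in production.
context = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="password",
                  sslContext=context)

content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True)

# "Win7-Desktop-042" is a placeholder name for one of the full clones.
vm = next(v for v in view.view if v.name == "Win7-Desktop-042")

# Walk the virtual disks and report whether each backing is thin-provisioned.
for device in vm.config.hardware.device:
    if isinstance(device, vim.vm.device.VirtualDisk):
        thin = getattr(device.backing, "thinProvisioned", False)
        print(f"{device.deviceInfo.label}: "
              f"{'thin' if thin else 'thick'} provisioned, "
              f"{device.capacityInKB // (1024 * 1024)} GB allocated")

Disconnect(si)
```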
Now that we have the capacity and performance very well handled with Full Clones, why not utilize the same application delivery mechanisms you use for your physical desktops to deploy applications to users on their fully provisioned persistent desktops? Given that everything runs inside the datacenter, there are no networking constraints preventing applications from being deployed in a quick and safe manner.
One of the only use cases I can see for non-persistent desktops with layering or app virtualization is when the desktop must be destroyed after use and the applications vary so much from user to user that creating a master image containing those applications is not justified.
You have to ask yourself: why do you need to obliterate the user’s desktop after use? Perhaps you have a security requirement or a government standard you must follow.
In his article, Labana addresses some of the key challenges with current technology options. Below, I offer my view on each of these challenges.
- Cost and performance of storage.
- [My view] Storage technologies have evolved over the last couple years drastically cutting down costs associated to capacity and performance, effectively enabling organizations to deploy fully provisioned persistent desktops.
- Management overhead of 200,000 local disks for users. (If this is used as an approach to address IOPS).
- [My view] No need for direct-attached storage, since newer storage technologies address the performance issues with Flash or RAM and, from a capacity perspective, also leverage block de-duplication.
- Compatibility of App-V with all apps and the packaging process costs.
- [My view] Fully provisioned persistent clones do not require namespace or application virtualization solutions to deliver apps. Applications are deployed using traditional tools, and storage de-duplication technologies take care of capacity and growth issues.
- Cost of replicating all 5000 apps globally when only 20 percent are common.
- [My view] I am not entirely sure what that means, but all the applications will reside inside the datacenter, so there is no need to distribute applications to secondary datacenters and branch offices.
- Building a scalable distribution infrastructure for layering solutions, while also expecting runtime provisioning of applications.
- [My view] No namespace or application virtualization required, removing management complexities and application compatibility issues.
- Want to avoid UEM personalization and policy configuration complexity and do not want to build and manage a distributed SQL server infrastructure.
- [My view] Sure, we don’t want that either.
I am not saying Labana is wrong; I actually think he is dead right for those cases where non-persistent desktops are used. My point is that in this new and modern world, where all these amazing new storage technologies are helping us simplify deployment and management, there is no need to create complex science experiments to solve something that has already been mastered and solved by a different discipline; in this case, storage.
I think it’s great that we now have such a range of technologies available to slice and dice VDI and Windows management, but I personally think they are not required for the large majority of use cases. For the use cases where there is a fit, Unidesk, Horizon Mirage and other players on the market will be able to help you.
The only piece that is not addressed by any of the proposed solutions is Disaster Recovery. Certainly, being able to dynamically pull user applications or layers into a non-persistent image can be very useful in certain scenarios, especially when recovering a datacenter into a remote location. To this end, I would recommend looking at Horizon Mirage, which enables you to store de-duplicated copies of the desktops in a remote datacenter for easy restore, although other technologies perform similar operations.
The idea that non-persistent desktops with layering and app delivery mechanisms reduce the OpEx and CapEx costs of desktops can be an illusion. Once you add license and training costs, not to mention complexity, you quickly realize that fully provisioned persistent desktops make sense. The only drawback with the majority of these new storage technologies is that they only start making CapEx sense from a certain number of users onwards. However, a few of these vendors have already started offering lease or rental models that will fit the bill for most deployments.
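To illustrate that break-even point, here is one more rough sketch; the dollar figures are entirely made up, so substitute your own array quotes and per-desktop licensing numbers.

```python
# Hypothetical figures for illustration only; not vendor pricing.
array_capex = 120_000       # assumed cost of an all-flash array sized for the pool
per_desktop_budget = 150    # assumed acceptable storage CapEx per desktop

for desktops in (200, 500, 1000, 2000):
    per_desktop = array_capex / desktops
    verdict = "makes sense" if per_desktop <= per_desktop_budget else "too expensive"
    print(f"{desktops:>5} desktops: ${per_desktop:,.0f} per desktop ({verdict})")
```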
Alternatively, for small deployments that may grow over time, hyper-converged scale-out solutions like Nutanix and VMware Distributed Storage can help relieve performance bottlenecks by using SSD for local caching. To my knowledge these solutions do not offer data de-duplication at this point in time, but they do have a secondary magnetic disk tier for capacity.
Remember that I am discussing VDI pool deployment models along with application and image management, not storage technologies. The storage technology in use is unimportant to me, as long as it provides the capacity and performance required by the VDI workload.
Finally, I would like to address Harry Labana directly. I very much admire the work you have done for End User Computing, especially at Citrix and AppSense. I just happen to think we have better and simpler ways to deliver VDI to end users using the technologies available to us today.
Disclaimer: Any views or opinions expressed in this article are my own, not my employer’s. The content published here is not read, reviewed, or approved by VMware and does not necessarily represent or reflect the views or opinions of VMware or any of its divisions, subsidiaries, or business partners.
This article was first published by Andre Leibovici (@andreleibovici) at myvirtualcloud.net.