Back in 2011, in the world of SAN, I wrote an article called “VDI USER EXPERIENCE and USER ACCEPTANCE”. The article walked through my experiences reviving VDI deployments that had failed due to poor technical design, mostly driven by storage performance complications.
In that article I explained that User Experience is the interaction between users and the desktop interface, in this case Windows and its applications. User Experience can only be measured by user perception against a known baseline; it is not a metric that can be collected and analyzed.
Metrics such as CPU utilization, storage latency and network contention do not provide a user experience index. As an example, network bandwidth utilization could be low while display performance is sub-optimal due to a misconfigured display protocol. There are tools that provide in-guest analytics, but they offer only a limited view of the wider user environment spectrum; still, I say they are good to have.
I then explained User Acceptance as a consequence of a good User Experience. If the User Experience isn’t great there will be no User Acceptance, and this is where I find the most dramatic consequences of a poorly designed solution. If users are somewhat disappointed in their very first interaction with the new VDI environment, it will cost time, effort and sometimes money to win them back; and just like in any market, happy users spread the word, and unhappy users spread the word too.
Yada, yada, yada… then I talked about the importance of never overlooking the VDI Pilot and Proof of Concept (POC) phases. All good so far, and we all agree that POC and Pilot are essential for successful VDI rollouts.
What has changed?
Since 2011, hyper-convergence has emerged and matured, removing many of the complexities involved in rolling out successful VDI solutions.
The absence of dual controllers, as found in centralized storage (SAN), completely removes the need to validate the storage architecture under full load. The distributed nature of hyper-converged solutions means that testing a single server under load is enough to characterize an entire VDI deployment, whether for 100 or 100,000 users. You will know that, for a given user profile, you can host n users per server.
That also means there are no calculations required for the number and types of disks, RAID groups, LUNs, or controller overload in failure scenarios. Hyper-convergence implements an independent virtual controller per server, and if a server component fails, only the desktops on that server are affected, not the entire fleet.
Finally, hyper-convergence excels when VDI deployments need to grow, requiring no complex math to calculate storage performance and capacity expansions. Remember, you already know how many users of a given profile each server will support; simply add servers to the existing cluster.
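The linear sizing math described above can be sketched in a few lines. This is a minimal illustration, not a vendor sizing tool; the per-server density and the N+1 spare figure below are hypothetical numbers you would obtain from your own single-server load test:

```python
import math

def servers_needed(total_users: int, users_per_server: int, spare_servers: int = 1) -> int:
    """Linear hyper-converged sizing: the density validated on one server
    scales directly with server count; add spare capacity (e.g. N+1) so a
    server failure only displaces that server's desktops."""
    return math.ceil(total_users / users_per_server) + spare_servers

# Hypothetical example: a single-server load test showed 120 desktops of a
# given user profile per server; plan for 1,000 users with N+1 resiliency.
print(servers_needed(1000, 120))  # -> 10 (9 to carry the users, plus 1 spare)
```

Growing the deployment later is the same arithmetic: recompute with the new user count and add the difference in servers to the cluster.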
Back to the POC and Pilot phases: they are still essential and you should never skip them. However, they are now much easier to carry out with a little help from hyper-convergence.
Last but not least, make sure you select the VDI solution and supporting technologies that cater to your users’ needs, and make sure the products have longevity and are the right fit for your organization. VDI can be very complex and hyper-convergence makes it simpler, but look for vendors that are willing to help you through the process to improve your chances of success.
This article was first published by Andre Leibovici (@andreleibovici) at myvirtualcloud.net