Back in 2011, in the SAN era, I wrote an article called “VDI USER EXPERIENCE and USER ACCEPTANCE”. The article recounted my experiences with many failed VDI deployments caused by poor technical design, mostly driven by storage performance complications.
Since 2011, hyperconvergence has emerged and matured, removing many of the complexities involved in rolling out successful VDI solutions. The absence of the dual-controller design found in centralized storage arrays (SANs) removed the need to extensively validate storage architectures under full load. This new architecture also eliminated the complex calculations previously required to properly size the number and types of disks, RAID groups and LUNs, and to account for controller overload under failure scenarios.
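To give a sense of the kind of back-of-the-envelope spindle math the legacy approach demanded, here is a minimal sketch. All figures (IOPS per desktop, write ratio, per-disk IOPS) are illustrative assumptions, not numbers from the article; the RAID write penalties themselves (2 for RAID 10, 4 for RAID 5) are standard values.

```python
import math

# Hypothetical back-of-the-envelope sizing for a legacy SAN-backed VDI
# deployment; all workload numbers below are illustrative assumptions.

def required_disks(desktops, iops_per_desktop, write_pct,
                   raid_write_penalty, disk_iops):
    """Estimate the spindle count for a workload on a RAID group.

    Front-end write IOPS are inflated by the RAID write penalty
    (e.g. 2 for RAID 10, 4 for RAID 5, 6 for RAID 6) before
    dividing by the per-disk IOPS rating.
    """
    frontend = desktops * iops_per_desktop
    reads = frontend * (1 - write_pct)
    writes = frontend * write_pct
    backend = reads + writes * raid_write_penalty
    return math.ceil(backend / disk_iops)

# 500 desktops at 10 steady-state IOPS each, 70% writes (a commonly
# cited VDI profile), RAID 5 (write penalty 4), 180 IOPS per 10K
# SAS spindle.
print(required_disks(500, 10, 0.7, 4, 180))  # → 87
```

And this sketch ignores boot/login storms, controller headroom and failure scenarios, which is exactly why such exercises were error-prone and why removing them matters.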
Hyperconvergence excels in VDI deployments that need to scale continuously, without requiring complex exercises to calculate performance and plan expansions.
At this point, anyone who chooses anything but hyperconvergence to roll out VDI with more than ~200 virtual desktops is just plain crazy, or is being misguided by legacy storage vendors.
My colleagues Briar Suhr and Sachin Chheda have created a fantastic free resource book for those deploying, or planning to deploy, VDI solutions. The book covers architecture principles such as scalability, performance, capacity, monitoring and the creation of building blocks. The authors provide critical insight into the differences between converged and hyperconverged architectures and between All-Flash and Hybrid, and also provide detailed information on how to correctly size compute and clusters. Finally, they cover the TCO and ROI of converged and hyperconverged solutions.
It’s a must-read! Just click here!
This article was first published by Andre Leibovici (@andreleibovici) at myvirtualcloud.net