Jun 25 2012

A Guide on How to Buy the Wrong Hardware for Your VDI

I have seen this countless times: servers being purchased before the solution is properly architected and sized. I know it seems silly, but it happens more often than you think. I decided to demonstrate how bad things can get with a simple example.

Not so long ago I helped an organization size their VMware View production environment. They opened the conversation by mentioning that servers had already been purchased and would arrive soon. Here is the configuration of each unit:

HP ProLiant BL685c G7 Server Blade
AMD Opteron™ 4 processors/24 cores/24 threads/2.6GHz
192GB RAM

At first glance it seems fantastic. What could go wrong with HP blades? Well, the problem is neither the blade nor the vendor, but the specs this organization chose for their VDI solution.

After evaluating their pre-existing (pilot) VDI environment, this organization decided on the following desktop specs:

5000 desktops
2 vCPU
200MHz AVG vCPU utilization
2 GB RAM

The natural instinct would be to calculate how many desktops we are able to run simultaneously using all those 24 cores available from each AMD Opteron™. Well, as it turns out, the amount of memory in each server only allows for a maximum of one desktop per CPU core, totaling only 96 VMs per host. That's a very low consolidation ratio and doesn't make sense from a cost perspective. They would need 53 hosts to build their vSphere cluster.

[Screenshot: VDI calculator output, 96 VMs per host on the blades as purchased]
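For those who want to check the arithmetic, here is a minimal Python sketch of the memory-bound math above. It deliberately ignores TPS savings and per-VM memory overhead, which the Flash Calculator does appear to model (see the 155GB / 30% TPS figure in the comments below), so treat it as a back-of-the-envelope check rather than the calculator itself:

```python
import math

# Per-blade and per-desktop figures from the specs above.
HOST_RAM_GB = 192
VM_RAM_GB = 2
TOTAL_DESKTOPS = 5000

# RAM is the bottleneck: how many 2GB desktops fit into 192GB?
vms_per_host = HOST_RAM_GB // VM_RAM_GB                   # 96
hosts_needed = math.ceil(TOTAL_DESKTOPS / vms_per_host)   # 53

print(f"{vms_per_host} VMs per host, {hosts_needed} hosts for {TOTAL_DESKTOPS} desktops")
```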

If RAM were not the bottleneck, we could perhaps host 1056 desktops, using 11 virtual desktops per core at a total of 4.8GHz per core. For that we would need 1.6TB of RAM. It's a no-go here: vSphere 5.0 supports only 500 VMs per host, and only 2.6GHz of clock is available per core (see the screenshot below).

[Screenshot: VDI calculator output for the RAM-unconstrained scenario of 1056 desktops]
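Taking the 11-desktops-per-core density above as a given, a quick sketch shows why it is a no-go. I am assuming the per-desktop CPU demand is simply 2 vCPU x 200MHz; the calculator's 4.8GHz per-core figure is a little higher, presumably because it adds some per-VM overhead, but the conclusion is the same:

```python
CORES_PER_HOST = 96       # 4 processors x 24 cores, per the specs above
CORE_CLOCK_GHZ = 2.6
VMS_PER_CORE = 11         # density taken from the scenario above
VM_CPU_GHZ = 2 * 0.200    # assumption: 2 vCPU x 200MHz average per desktop
VM_RAM_GB = 2

desktops = VMS_PER_CORE * CORES_PER_HOST       # 1056, well past the 500-VM host limit
per_core_demand = VMS_PER_CORE * VM_CPU_GHZ    # ~4.4GHz of demand on a 2.6GHz core
raw_ram_gb = desktops * VM_RAM_GB              # ~2.1TB before any memory savings

print(desktops, per_core_demand, raw_ram_gb)
```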

The first thing you should do when sizing hosts for VDI is to keep the limits in mind. In our case they are 500 VMs per host and a maximum of 2.6GHz of CPU clock per core. If you size within those limits using the VM specifications defined by this organization, you will find the following:

[Screenshot: VDI calculator output when sizing within the vSphere limits]

Now we are within the limits for GHz per core and for the number of VMs per host. However, we need 771GB of RAM per host, and only 192GB is available on those shiny new servers. A RAM upgrade is a possibility, but fitting the servers with 1TB of RAM would make the solution extremely expensive, even more expensive than buying a larger number of smaller servers with a smaller memory footprint.
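This limit-first approach can be sketched as a small function: take the smallest of the CPU-bound, RAM-bound and hypervisor-imposed ceilings. The TPS-saving and per-VM-overhead parameters are illustrative assumptions of mine, not the Flash Calculator's exact model; left at zero, the function reproduces the simple 96-VMs-per-blade figure from earlier:

```python
def vms_per_host(cores, core_ghz, ram_gb,
                 vm_vcpus=2, vcpu_mhz=200, vm_ram_gb=2,
                 tps_saving=0.0, vm_overhead_gb=0.0, max_vms_per_host=500):
    """Return the binding constraint and the resulting number of VMs per host."""
    host_mhz = round(cores * core_ghz * 1000)
    vm_mhz = vm_vcpus * vcpu_mhz
    cpu_bound = host_mhz // vm_mhz
    effective_vm_ram_gb = vm_ram_gb * (1 - tps_saving) + vm_overhead_gb
    ram_bound = int(ram_gb / effective_vm_ram_gb)
    ceilings = {"CPU": cpu_bound, "RAM": ram_bound,
                "vSphere 5.0 limit": max_vms_per_host}
    bottleneck = min(ceilings, key=ceilings.get)
    return bottleneck, ceilings[bottleneck]

# The blades as purchased: RAM binds long before CPU or the hypervisor limit.
print(vms_per_host(cores=96, core_ghz=2.6, ram_gb=192))    # ('RAM', 96)

# With enough RAM, the 500-VM-per-host limit binds instead.
print(vms_per_host(cores=96, core_ghz=2.6, ram_gb=1024))   # ('vSphere 5.0 limit', 500)
```

As a sanity check on the RAM requirement: 500 desktops at 2GB each with a 30% TPS saving works out to roughly 700GB before per-VM overhead, in the same ballpark as the 771GB shown above.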

This organization is between a rock and a hard place.

No matter what they do, their CPU power will be underutilized, either because of the lack of memory resources or because of the limits imposed by the hypervisor. I have dug into this kind of problem before in my article The Right Hardware for a 10K VDI solution.

I asked myself what I would have chosen for this scenario. Assuming I am not allowed to change the specifications for the virtual desktops, I would probably try to bump the CPU clock up to 3.0GHz when buying the servers, allowing me to run more VMs per CPU core. With more VMs per CPU core I would make better use of the CPU available; however, I would still need 719GB of RAM.

[Screenshot: VDI calculator output with hypothetical 3.0GHz CPUs]
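A rough check of the 3.0GHz option, assuming the per-host density ends up near the 500-VM vSphere limit and a 30% TPS saving (both assumptions on my part; the exact inputs are in the calculator screenshot above):

```python
# Hypothetical 3.0GHz cores raise the per-core ceiling, but RAM has to follow.
target_vms = 500                              # assumed density near the vSphere 5.0 limit
raw_ram_gb = target_vms * 2                   # 1000GB with no page sharing
tps_ram_gb = target_vms * 2 * (1 - 0.30)      # 700GB assuming 30% TPS savings

print(raw_ram_gb, tps_ram_gb)
# Either way the requirement is in the region of the 719GB reported above,
# far beyond the 192GB installed in the blades as purchased.
```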

This is a complex scenario because of the resources required by each virtual desktop, and there is no exact answer, because hardware vendors may price newer technology, such as the 24-core AMD Opteron™, cheaper than its previous edition. Most vendors, with very few exceptions, have no skilled pre-sales staff to properly size hardware based on your real requirements. Some of them just want to push the sale through at any cost, especially if it's the end of a financial quarter.

Another possible option would be to add the new blades to the existing server cluster, ignoring the recommendation to isolate desktop workloads from server workloads. When mixing both workloads, vSphere DRS should kick in and accommodate them. However, you should not do this unless you feel very comfortable reading and analyzing all the metrics (CPU, storage, RAM, etc.) in your cluster to make sure everything is under control. Server and desktop workloads are inherently different and may create adverse bottlenecks for each other.

My recommendation here is simple. If you are not entirely confident that you can successfully size your VDI server infrastructure, hire a consultant. They have the required field experience to properly size your solution and save you money.

Finally, the same reasoning applies to other components of a solution, such as the network and storage subsystems.

All calculations in this article were done with the VMware View VDI Flash Calculator, which can be found at http://myvirtualcloud.net/?page_id=1076.

This article was first published by Andre Leibovici (@andreleibovici) at myvirtualcloud.net.

4 comments

  1. Jeff O'Connor

    Hallelujah, Andre!

    I see this in just about every VDI deployment, and sometimes they get lucky with their hardware choice. But many get it wrong, and this becomes another cost VDI has to overcome.

    Thanks for raising the issue 🙂 Hopefully some people will listen!

  2. Andre Leibovici

    @Jeff O’Connor
    Yes, it is still a common pain point.
    BTW, hope you are doing well in Oz.

    Andre

  3. Toby Armfield

    Andre,

    I think you have a mistake in the text. You state “Well, as it turns out the amount of memory in each server only allow for a maximum of 1 (one) desktop per CPU core, totalizing only 192 VMs per host”, but the cores are 24 x 4 = 96, and the calculator seems to show 96 VMs per blade, not 192.

    As you say, it is RAM that is limiting the number of VMs per blade anyway; with 192GB available and a 2GB per-VM requirement you would be restricted to 96 x 2GB with no overcommit.

    I would be interested to get your thoughts on the maximum number of VDI VMs you should put on a blade anyway; I talk to customers who have concerns about failure domains and see 90 VMs per blade as too high due to the risk of blade failure.

    It would be good to get your thoughts/views.

    Toby

  4. Andre Leibovici

    @Toby Armfield
    Thanks for catching the mistake. I have corrected it in the article.

    With 1 VM per core there would be 96 VMs per host, and the host would require 155GB of RAM assuming 30% TPS savings.

    Andre
