
Expensive storage array not required for VDI?

Lately I have been asking myself whether VDI environments really need expensive enterprise storage arrays (SANs) to run. The points I raise here may not apply to every organization, but there are certainly aspects worth observing.

Assuming user profiles are hosted on a file server, why couldn’t virtual desktops run from the server’s local storage?

Perhaps the quick answer is the lack of high availability and live migration. One of the principles of VDI is that virtual desktops should be disposable and non-persistent (OK, at least in some use cases). Also, in a cluster, if you lose a host there will always be another host to take over the workload, and new virtual desktops can be quickly provisioned if necessary.

What about power users that need special applications?

Well, with the application virtualization tools available on the market, applications can be assigned to each user at logon time, based on group policy objects and AD authentication.

If CPU and memory are not a problem nowadays, IO certainly is. For the last 10 years the maximum achievable with fast-spinning 15K RPM drives has been about 175 IOPS. Combined into a RAID1, that number goes down to approximately 85 IOPS, because every write must land on both drives. (Here is a good article from Steve Chambers discussing IOPS.) Today with SSDs (Solid State Drives) it is possible to push these numbers up to 6000 IOPS (uncertain about RAID1).
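If you want to plug in your own numbers, here is a minimal sketch of that rule of thumb in Python. The write penalty of 2 for RAID1 is the standard assumption; the read/write mix is something you should measure in your own environment before trusting the output.

```python
# Rule-of-thumb RAID IOPS math. The per-spindle figures are the ones
# quoted above; the write penalty of 2 reflects RAID1 mirroring
# (every front-end write becomes two back-end writes).

def frontend_iops(raw_iops, write_ratio, write_penalty=2):
    """Front-end IOPS deliverable from raw back-end IOPS at a given write mix."""
    return raw_iops / ((1 - write_ratio) + write_penalty * write_ratio)

# One 15K spindle (~175 raw IOPS) under a 100%-write RAID1 workload:
print(round(frontend_iops(175, 1.0)))    # ~88 -- the "~85 IOPS" above
# The same penalty applied to an SSD rated at 6000 IOPS:
print(round(frontend_iops(6000, 1.0)))   # ~3000
```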

Real-life virtual desktops consume anywhere between 10 and 25 IOPS depending on usage patterns. (Again, here’s a good article from Chad Sakac about VDI workloads.) Averaging this number to 18 IOPS, a single SSD could host up to 333 virtual desktops (6000 ÷ 18 ≈ 333). Of course, no server hardware available today would support this number of virtual desktops on a single host.
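The sizing math is simple enough to put in a few lines. The 6000 and 18 figures are the averages from above; treat them as averages, not guarantees, and size for your own measured peaks.

```python
# How many desktops a given pool of storage IOPS supports on paper.
# 6000 IOPS (one SSD) and 18 IOPS per desktop are the averages quoted
# above -- plug in your own measurements.

def max_desktops(storage_iops, iops_per_desktop):
    return storage_iops // iops_per_desktop

print(max_desktops(6000, 18))   # -> 333 desktops on paper
```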

So, in summary, IOPS is not the problem today, and with SSD prices plummeting to as low as $2,000 for 500GB it is easy to consider not using expensive enterprise storage arrays for VDI solutions. Furthermore, if the VDI solution is combined with linked-clone technology it is possible to reduce the drive cost to as low as $500, since far less storage is required.
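To see why linked clones cut the capacity requirement so sharply, compare full clones against one shared replica plus a small per-desktop delta disk. The image and delta sizes below are illustrative assumptions, not measurements.

```python
# Capacity needed for n desktops: full clones vs. linked clones.
# All sizes are illustrative assumptions.

def full_clones_gb(n, image_gb):
    # every desktop carries a complete copy of the OS image
    return n * image_gb

def linked_clones_gb(n, replica_gb, delta_gb):
    # one shared read-only replica plus a small delta disk per desktop
    return replica_gb + n * delta_gb

n = 120
print(full_clones_gb(n, image_gb=20))                    # 2400 GB
print(linked_clones_gb(n, replica_gb=20, delta_gb=2))    # 260 GB
```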

In some extreme circumstances it is even possible to consider a single drive per host server.

In real-life VDI scenarios it is possible to run up to around 120 virtual desktops on a single host. At 18 IOPS each, that only produces an average of 2160 IOPS, easy work for a single SSD.

Some time ago I deployed a VDI solution for a call center where the virtual desktops were all identical (non-persistent) and some applications would only appear to certain users. With dual quad-core servers I was able to host approximately 60 virtual desktops per host, running the VMs from local storage on 15K RPM drives. This was a very specific circumstance: call-center agents do little more than keep their web applications open all day, so not much IO. However, it shows that it is possible NOT to think about shared storage.

So, does VDI really require shared enterprise storage arrays?

10 comments

3 pings


  1. Marcel Göertz

    But… you do know that in RAID configs, SSDs do not support the TRIM command, and without TRIMming, performance will degrade pretty quickly…?

  2. Andre Leibovici

    Thanks for your comment, Marcel.

    It looks like the TRIM issue was sorted out a while ago. Have a look at http://en.wikipedia.org/wiki/TRIM

    (The purpose of the instruction is to maintain the speed of the SSD throughout its lifespan, avoiding the slowdown that early models encountered once all of the cells had been written to once.)

    Also, today’s SSD drives support up to 10K write operations per cell. That should be enough to keep VDI going for a while before the drive has to be replaced.

    One thing is certain… one day the drive will have to be replaced when its cells are exhausted.
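    Here is a very rough lifetime estimate, assuming perfect wear levelling and a steady write load; every figure below is an assumption, so plug in your own numbers:

    ```python
    # Rough SSD lifetime under a steady VDI write load. Assumes perfect
    # wear levelling; capacity, IO size and write rate are all assumptions.

    capacity_gb = 256       # drive size
    cycles      = 10_000    # write cycles per cell, as above
    write_iops  = 2000      # sustained writes hitting the drive
    io_size_kb  = 4         # typical small-block VDI write

    total_write_gb = capacity_gb * cycles
    gb_per_day = write_iops * io_size_kb * 86_400 / (1024 * 1024)
    print(f"~{total_write_gb / gb_per_day / 365:.1f} years")   # ~10.6 years
    ```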

  3. Daniel Feller (Twitter @Djfeller)

    I’ve been having similar conversations. Based on the virtual desktop activity, I’ve seen IOPS be as low as 3 per desktop and as high as 26 (bootup). If virtual desktops are disposable, then most people think there is no need for the expensive enterprise storage. Just use local storage on your hypervisor of choice (XenServer, Hyper-V or vSphere).

    Unfortunately, I’ve seen too many people use local storage without calculating the IOPS needed. If users are active and the physical server uses RAID 1, those local disks should only be able to support about 20 users. And any hypervisor can do better than that with 8 cores.

    As always, people need to do their due diligence.

  4. Andre Leibovici

    Daniel, thanks for your thoughts

    I suppose you are assuming drives spinning at 15K RPM. According to the article published by Steve Chambers, these are the IOPS for each drive type:

    SSD – 6000 IOPS
    15K – 175 IOPS
    10K – 125 IOPS

    It is best practice to understand and evaluate usage patterns for each type of user before deploying VDI on local storage, or anywhere else.

    I am starting to see no reason to use SAN type storage for the majority of VDI deployments. Is this too harsh to say?

  5. Daniel Feller (Twitter @Djfeller)

    I think that is a little too harsh. I still don’t see many going down the SSD route. For many it is too new, and it will take a while before they move down this path. Numbers don’t lie, though: SSDs are fast and will help offload storage requirements from the SAN.

  6. Andre Leibovici

    Daniel,

    OK, maybe you are right. But I certainly see a number of cases where VDI with local disks is applicable.

    Any other thoughts?

  7. Andre Leibovici

    Good article on the same subject from @djfeller, focusing on Citrix XenDesktop. I just don’t quite agree with the number of IOPS specified for working time, as I have seen loads as high as 20 IOPS based on application usage alone.

    * Bootup: 26 IOPS
    * Logon: 12.5 IOPS
    * Working: 3.9 IOPS
    * Logoff: 10.7 IOPS

    Read the article at http://community.citrix.com/display/ocb/2010/01/13/Deciding+on+Local+or+Shared+Storage+for+your+Desktop+Virtualization+Solution
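    For sizing purposes it can help to blend those phases into one time-weighted number. A quick sketch, with phase durations that are my assumptions for a typical working day:

    ```python
    # Time-weighted blend of the per-phase IOPS above.
    # Minutes per day are assumptions for a typical 8-hour shift.

    phases = {             # name: (iops, minutes per day)
        "bootup":  (26.0, 2),
        "logon":   (12.5, 3),
        "working": (3.9, 470),
        "logoff":  (10.7, 3),
    }

    total_minutes = sum(m for _, m in phases.values())
    blended = sum(iops * m for iops, m in phases.values()) / total_minutes
    print(f"~{blended:.1f} IOPS per desktop, averaged over the day")   # ~4.1
    ```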

  8. Kristoffer Sheather

    You can always go the route of building your own SAN combining SSDs with SATA or SAS disks; this works out to be a great compromise between cost, IOPS, energy usage, and storage capacity. That way you still have mobility of your VDI fleet across your physical server farm to mitigate against a host-level failure.

    Cheers,
    Kris

  9. emplois Cameroun

    Thanks for this post about VDI, dude! It’s really useful!

  10. Paul Wilson

    I spent almost 8 months working for Citrix doing scalability testing for XenDesktop. During this time I gathered and analyzed a significant amount of data. If you would like to learn how I estimate VDI IOPS check out my blog at http://virtualizationjedi.com/2010/10/31/finding-a-better-way-to-estimate-iops-for-vdi/

  1. My first post – VDI storage « Virtualizing the D.C.

    […] like this, will be in response to things that I have heard or read.  In this case, this post on myvirtualcloud.net got me to thinking.  Quite a few people are asking about using local storage for VDI […]

  2. VIRTUMANIA Episode 3: High Availability For Virtual Machines | VM /ETC

    […] Expensive storage array not required for VDI? […]

  3. Virtualization Short Take #37 - blog.scottlowe.org - The weblog of an IT pro specializing in virtualization, storage, and servers

    […] VDI on local disks, anyone? It’s an interesting discussion point that has its pros and cons. I guess the value of this sort of design really depends upon the business objectives the VDI implementation is trying to fulfill. […]

