Should I use DRAM as storage for VDI deployments?

This is an interesting topic that I have come across a few times, and more recently it has been finding its way into organizations due to lower SLC SSD acquisition costs. Let’s put the discussion in context…

Architects should embrace different deployment approaches that properly fit the intended use case. That’s a foundational design principle – the use case should always drive the deployment model. The two most common pool management models for VDI are as follows.

Persistent pools are assigned when the user first logs into the virtual desktop; subsequently, the user always connects to the same virtual desktop, allowing them to personalize the Windows look and feel, install their own applications, and have constant access to the data and documents created in the virtual desktop.

Floating pools, also known as non-persistent pools, are assigned when the user first logs into a virtual desktop; every time the user reconnects they may or may not get the same virtual desktop. It’s also a common approach to refresh or delete the virtual desktop after use, ensuring users always get a clean and functional machine. Because virtual desktops are refreshed regularly, users will lose documents saved to the desktop.

In this case, Roaming Profiles and third-party profile management tools will safeguard the Windows look and feel, registry entries, and application configuration settings, while Folder Redirection will redirect My Documents, My Videos and other user data folders to a specific path on the network.

Floating pools may be deployed onto shared or local storage; however, another exciting deployment option is DRAM. When comparing price and performance, it might make more sense these days to add DRAM to hosts and use it for storage, because SLC SSD currently costs more than DRAM.

According to recently published research: “Although SLC NAND flash costs less to manufacture than DRAM, market forces have pushed DRAM prices down while pushing SLC NAND flash prices up.”

Driven by my curiosity, I decided to use my VDI Flash Calculator to size a server that uses DRAM only, both for virtual machine memory and for storing VMDKs and other vSphere files – boot would happen from an internal SD card or USB, with no physical HDD required.


The requirements for the configuration above are as follows:



The next step was to find a server that supports 1TB of DRAM (202GB + 770GB) and with enough aggregate GHz to support all 130 virtual desktops.

Please note that I am taking into consideration the number of cores, desktops per core, the amount of DRAM for virtual memory, the amount of DRAM for storage, virtual desktop overheads, etc.
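The sizing arithmetic behind this configuration can be sketched roughly as follows. The 202GB/770GB split and the 130-desktop count come from the article; the breakdown itself is my own illustrative reconstruction, not the calculator's actual formula:

```python
# Rough sketch of the DRAM-only sizing arithmetic used in this article.
# The totals (202GB for VM memory, 770GB for storage-in-DRAM, 130 desktops)
# are the article's numbers; this is not the VDI Flash Calculator itself.

desktops = 130
vm_memory_gb = 202   # DRAM for virtual machine memory, incl. desktop overheads
storage_gb = 770     # DRAM used as "storage" for VMDKs and other vSphere files

total_dram_gb = vm_memory_gb + storage_gb
print(f"Total DRAM required: {total_dram_gb}GB for {desktops} desktops")
# 972GB in total, which pushes the host into 1TB (four-socket) territory
```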

While shopping at Dell I came across a PowerEdge R910 Rack Server that would suit my needs at a US$35,124.00 price point. The price for this server increased drastically because of the requirement for four sockets to address the terabyte of DRAM.

The next step was to find a server with less memory (202GB) that would support an SSD NAND device with 700+ GB, such as a Fusion IO ioDrive Duo or a Virident FlashMAX II. The Fusion IO ioDrive Duo with 640GB has an average price of US$7,000.00.

For this host I selected the PowerEdge R520 Rack Server. It may be possible to find a cheaper server, but I wanted to make sure that the server would support a PCIe card. The final price for this host with a 640GB Fusion IO would be approximately US$18,000.00. This server required only two sockets to address 256GB of DRAM while still providing enough GHz to support 130 virtual desktops.

Server configuration, pricing and sizing used in this article are just an approximation of what a real-life deployment would look like. However, in this scenario the price difference was enormous, making the DRAM deployment not an option from a pricing perspective. From a performance standpoint DRAM would deliver better performance, more bandwidth and lower latency.
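Putting the two configurations side by side as a back-of-the-envelope calculation (using the prices quoted above; per-desktop cost is my own derived metric):

```python
# Back-of-the-envelope comparison of the two host configurations priced above.
dram_host = 35124.00   # PowerEdge R910: 1TB DRAM, four sockets
flash_host = 18000.00  # PowerEdge R520: 256GB DRAM + 640GB Fusion IO, two sockets
desktops = 130

print(f"DRAM host:  ${dram_host / desktops:.2f} per desktop")
print(f"Flash host: ${flash_host / desktops:.2f} per desktop")
print(f"Premium for the DRAM-only host: {dram_host / flash_host:.1f}x")
```

At roughly twice the acquisition cost per desktop, the DRAM-only option has to justify itself entirely on performance.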

I personally don’t think the price/performance at this point in time is favorable to DRAM for the majority of use cases. Perhaps VDI solutions with intensive disk IO operations such as image rendering could be a viable use case.

There are solutions on the market that make the use of DRAM as storage more affordable through data de-duplication techniques, requiring less DRAM for the same number of desktops. A good example here is the ILIO appliance from Atlantis Computing. Note that in my calculations I have not included the benefits of using such technology, nor its licensing costs. I am confident there are other solutions that sit in the IO path and provide block de-duplication, helping drive down the storage capacity required.

Reduce your storage footprint / Maximize your storage utilization

Increasing the VM memory reservation is one of the easiest ways to reduce the storage footprint. As an example, a VM with 2GB RAM will often use 2GB of disk space for the VM swap file. Setting the Parent VM with a 50% memory reservation will make the storage footprint drop to 1GB for every Linked Clone desktop, reducing the overall storage footprint. This little tweak could save 130GB of storage in a VDI solution with 130 virtual desktops per host.

Other powerful options are Windows profiling and customization. I recommend reading Mastering VDI Templates updated for Windows 7 and PCoIP and VDI Base Image: The Missing Step for additional information on how to profile your guest OS.

This article was first published by Andre Leibovici (@andreleibovici) at



    • Tim on 01/01/2013 at 8:26 am

    Just came across your post and I’m curious, what kind of work would you expect someone using a single-vCPU, 2GB RAM desktop to be performing? My first deployment of virtual desktops for staff use was not received very well. There were many reports of slowness, which, although I should know better at this point, I still blame on the users.

    Even so, I know from personal experience that doing a lot of web-based work, whether it’s using various sites for research or other applications, having a single CPU core and only 2GB of RAM is going to be slow. I’ll check in with other users and see them using Chrome or Firefox with dozens of tabs open.

    After several iterations, I’m now up to 2 vCPUs and 3GB of RAM, still on a 32-bit Windows 7 image. Sometime this year, I’m sure I’ll be bumping that up to 4 and 4 on 64-bit Windows 7.

  1. Tim,

    No doubt your users will enjoy the experience provided by a 2nd vCPU. However, most deployments with Windows XP and 7 are still using 1 vCPU and 2GB RAM. I have not come across many performance complaints for this configuration, unless you are having issues with your infrastructure.

    – Check your physical CPU performance and make sure it has some good GHz. Also check that you are not overcommitting your hosts from a CPU perspective. That will translate into desktops per physical CPU core.

    – Poor performance could also be dictated by slow storage performance. The inability to provide the amount of IOs required can definitely hinder performance. When you add more RAM to the desktop you are ensuring that there won’t be as much Windows paging to swap, thereby reducing the number of IOs required.

    If you have your infrastructure balanced you should not have issues for most common use cases.


    • AJ on 01/08/2013 at 3:26 pm

    Reduce your storage footprint / Maximize your storage utilization:

    Hi Andre,

    When you set the Parent VM with a 50% memory reservation it will make the storage footprint drop to 1GB for every Linked Clone desktop. When will the ESXi server claim the extra 50% reservation? If I’m correct, when a virtual machine accesses its full reservation, the ESX server will not reclaim that reserved memory, even if the virtual machine becomes idle and stops accessing memory.

    -1) Are there any pro’s or con’s by applying this method for the overall performance of the VDI desktops?
    -2) Can you please explain this setup (eventually overall behavior) in a bit more details?

  2. AJ,

    When the VM is created, the vswap file is also created and sized at 50% of the VM memory size (configured memory minus the reservation). Swap will naturally occur when there is memory pressure. When you apply a memory reservation you reduce the amount of memory that could possibly be shared via TPS (Transparent Page Sharing), and in doing so you potentially reduce the consolidation ratio for a given host. You can use memory reservation, but use it carefully.

    I hope that helps,
