Why use <Insert Vendor Here> storage for VDI?

The storage technologies and vendors available on the market today have evolved far beyond the simple spindle RAID groups and single-tier caching mechanisms of five years ago. The options are so numerous that it is not only difficult to understand and grasp all the different technologies, but almost impossible to compare them side by side.

Flash (SLC and MLC) is being used in abundance, and solutions with PCIe cards such as Fusion-io and Virident are also increasingly being adopted by storage vendors. Flash-only arrays such as Violin and XtremIO are on the list as well. Some solutions use host RAM for tiering, while others manipulate IOs to serialize or coalesce write streams.

How do we get our heads around all this new technology to make adequate business decisions? It appears that every month a new storage start-up creates a new solution and attempts to sell it as if it were the Holy Grail. It can be really challenging for any of us to keep up with this storage revolution.

I asked a few storage vendors to answer the question "Why use <place vendor here> storage for VDI?" from a technology perspective. All options have different price points, costs per user and business justifications; however, I preferred to focus on the technology alone.

Below you will find their responses (edited to remove business-related comments), along with a few of my own comments. To create a fair and equitable baseline, I have also removed any references to future technologies, benchmarks, or integrations that are not delivered with current product releases. Vendors are in alphabetical order.


Part two of this article can be found at http://myvirtualcloud.net/?p=4007



Atlantis ILIO

Atlantis ILIO is a purpose-built software storage optimization solution that delivers virtual desktops cheaper and better performing than a physical PC. Atlantis ILIO Diskless VDI allows customers to deploy their VDI environment without any storage array: desktops run on commodity servers, leveraging server RAM as an NFS or iSCSI datastore to provide high IO performance. Profile and user data are stored on a centralized datastore, and the virtual desktops run on servers with VMware vSphere and the Atlantis ILIO Diskless software; no SAN, NAS or SSD is needed to run the desktops. Atlantis says its solution eliminates deployment risk by simplifying storage sizing, design, deployment and management of large deployments, scaling with the server as the building block. According to Atlantis, the software is proven and has been deployed in large VDI environments (I decided to omit customer names).


Key Technology

  • Software only – Purpose-built storage layer to run virtual desktops with just CPU and RAM. Scale out VDI infrastructure with just servers and software; the server becomes the building block.
  • Inline de-duplication and compression – Reduces the total amount of memory required to store virtual desktop images.
  • IO characterization – A content-aware approach optimizes IO requests from Windows virtual desktops based on user activity and applications.
  • Advanced software to enable RAM-based storage – A unique architecture that adds no additional latency to RAM.

For persistent desktops Atlantis ILIO can complement shared/local storage to offload read and write IOs and reduce the storage capacity needed. Atlantis ILIO can be used in conjunction with any kind of storage device or vendor – SAN, NAS, SSDs or even Fusion-io-like devices.
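To illustrate how inline de-duplication and compression shrink the memory footprint of near-identical desktop images, here is a toy sketch in Python. This is my own illustration of the general technique, not Atlantis code; the block size, SHA-256 fingerprinting and zlib compression are all assumptions for the example.

```python
import hashlib
import zlib


class DedupStore:
    """Toy content-addressed block store illustrating inline
    de-duplication plus compression (hypothetical, not Atlantis code)."""

    def __init__(self, block_size=4096):
        self.block_size = block_size
        self.blocks = {}        # fingerprint -> compressed unique block
        self.logical_bytes = 0  # bytes written by clients

    def write(self, data):
        """Split data into blocks; store each unique block only once."""
        fingerprints = []
        for i in range(0, len(data), self.block_size):
            block = data[i:i + self.block_size]
            fp = hashlib.sha256(block).hexdigest()
            if fp not in self.blocks:              # de-duplication
                self.blocks[fp] = zlib.compress(block)  # compression
            fingerprints.append(fp)
            self.logical_bytes += len(block)
        return fingerprints

    def read(self, fingerprints):
        """Reassemble data from the stored unique blocks."""
        return b"".join(zlib.decompress(self.blocks[fp]) for fp in fingerprints)

    def physical_bytes(self):
        return sum(len(b) for b in self.blocks.values())
```

Writing twenty identical 4 KB blocks through this store keeps a single compressed copy, which is why many near-identical Windows images can fit in a comparatively small amount of RAM.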

My Comment:

(Disclaimer) Atlantis is a sponsor of this blog. I had the opportunity to run some good tests with Atlantis ILIO in a lab environment. The results obtained from the IO characterization and inline de-duplication were impressive. Since then, Atlantis has published a new reference architecture for diskless VDI where only RAM is utilized for non-persistent desktops. I have not tested the diskless solution, but I can imagine how fast it will be for deployments that do not require application persistence.

I also wrote an article about Atlantis features and performance that can be found here (Offloading Virtual Desktop IOs with Atlantis ILIO: Deep Dive).



EMC VNX Series

The VNX storage array is probably one of the most vetted, documented and tested virtual desktop storage solutions on the market today. EMC has published many Reference Architectures and Practical Applications Guides that point VDI administrators in the right direction while reducing the frustration, time and effort it takes to design scalable VDI environments.

In addition to the documentation, the VNX array brings a "Swiss Army knife" approach to virtual desktop architecture. The "teeth" behind the VNX is its extended caching feature, FAST Cache, which utilizes a set number of EFDs (Enterprise Flash Drives) as a read and write IO caching pool that can be as large as 2 TB. FAST Cache has been shown repeatedly to reduce the number of drives needed to meet performance requirements by 70% to 80% with just a small number of flash drives, which helps reduce the impact of events like boot storms and anti-virus storms.
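To put that drive-reduction claim in perspective, here is a back-of-the-envelope calculation. All figures below (desktop count, per-desktop IOPS, per-spindle IOPS, cache hit rate) are my own assumptions for illustration, not EMC numbers:

```python
# Hypothetical sizing illustration for a flash cache in front of SAS spindles.
desktops = 2000
iops_per_desktop = 10          # assumed steady-state IOPS per virtual desktop
sas_drive_iops = 180           # assumed random small-block IOPS per 15K SAS drive
cache_hit_rate = 0.75          # assumed fraction of IOs absorbed by the cache

total_iops = desktops * iops_per_desktop                      # 20,000 IOPS
drives_without_cache = -(-total_iops // sas_drive_iops)       # ceil division
backend_iops = int(total_iops * (1 - cache_hit_rate))         # 5,000 IOPS
drives_with_cache = -(-backend_iops // sas_drive_iops)        # ceil division

print(drives_without_cache, drives_with_cache)  # 112 vs 28 spindles
```

With these assumed numbers, a cache absorbing 75% of the IO load cuts the spindle count from 112 to 28, which is consistent with the 70-80% reduction range quoted above.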

Other key features of the VNX include Fully Automated Storage Tiering for Virtual Pools (FAST VP), which optimizes for the highest system performance and lowest storage cost by moving blocks of data up and down the storage tiers, placing them in the most appropriate tier based on their performance requirements at the time.
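The general idea behind heat-based tiering can be sketched in a few lines. This is a toy illustration of the concept only, not EMC's FAST VP algorithm; the tier names, capacities and access counts are made up:

```python
# Toy heat-based tiering: rank blocks by recent access count and fill
# the fastest tier first (an illustration of the concept, not FAST VP).
def place_blocks(access_counts, tier_capacities):
    """access_counts: {block_id: accesses}.
    tier_capacities: [(tier_name, max_blocks)], fastest tier first."""
    placement = {}
    ranked = sorted(access_counts, key=access_counts.get, reverse=True)
    start = 0
    for tier, capacity in tier_capacities:
        for block in ranked[start:start + capacity]:
            placement[block] = tier
        start += capacity
    return placement


tiers = [("EFD", 2), ("SAS", 3), ("NL-SAS", 100)]
heat = {"a": 90, "b": 75, "c": 40, "d": 10, "e": 3, "f": 1}
print(place_blocks(heat, tiers))  # hottest blocks land on EFD, coldest on NL-SAS
```

Rerunning the placement periodically with fresh access counts is what moves blocks "up and down the tiers" over time.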



The VNX supports a 6 Gb/s SAS back end with all the latest drive technologies, as well as UltraFlex I/O connectivity that allows customers to add or change between Fibre Channel, iSCSI, CIFS, NFS (Parallel NFS), Multi-Path File System (MPFS) and Fibre Channel over Ethernet (FCoE) for converged networking over Ethernet. This approach gives VDI architects confidence that whatever direction they take in their design will be supported on the VNX. It also allows customers to future-proof their designs by adding or removing technology as it rolls out.

Finally, and what can sometimes be lost when making decisions, is the integration between the VNX and the VMware vSphere environment. The VNX is designed to be as invisible as possible from an administrator's point of view. The free VMware plugins offered by EMC allow the VMware administrator to provision, deprovision, snapshot, dedupe and expand LUNs and datastores from inside vCenter. This integration cuts down on the number of user interfaces required to manage the overall VDI infrastructure, as well as on the small mistakes that can happen when toggling between those interfaces.

My Comment:

(Disclaimer) I used to work for EMC. EMC has a solid, field-proven storage technology that is very stable and will fit any deployment size. I have seen very large VDI deployments running on VNX Series arrays, and their FAST technology works well, not only promoting hot blocks from SATA to SSD but also serving as a write cache. I have previously blogged about it.




Nexenta

NexentaStor is a proprietary derivative operating system built by the developers of the open-source Nexenta OpenSolaris distribution, optimized for use in virtualized server environments for NAS, iSCSI and Fibre Channel applications, and built around the ZFS file system. It features iSCSI support, unlimited incremental backups or 'snapshots', snapshot mirroring (replication), block-level mirroring (CDP), integrated search within ZFS snapshots and a custom API. Through its focus on ZFS, it carries potential benefits for virtualized server farms in terms of performance and thin provisioning. (Wikipedia)

NexentaVSA for View (NV4V) is a software solution that comprises Nexenta's 3rd-generation NAS/SAN storage stack and a VDI management appliance, the latter designed to interoperate with vCenter and View 5.x servers.

The product effectively bundles two virtual machines (OVF images): NV4V and NexentaStor. Users deploy the wizard-driven NV4V, designed from the ground up to absorb and hide the complexity of deploying and managing virtual desktop datacenters. Ease of use and the ability to run on the vendor's selected hardware are the two primary motivations for using the product (the very first codename of which was "easy button").


NexentaVSA for View version 1.0 is available for download at http://nexenta.com/vdi and integrates NexentaStor with VMware View 5.



NV4V supports both local and external storage; the latter can be either Nexenta's own storage appliance or a 3rd-party NAS. For local storage, the product automatically provisions and deploys Nexenta's storage appliances as VSAs: one Virtual Storage Appliance per VMware ESXi host per pool of virtual desktops.

With the VSA storage local to the desktops, users immediately get several benefits, the most important of which is sometimes referred to as a shared-nothing architecture: a "distributed computing architecture in which each node is independent and self-sufficient, and there is no single point of contention across the system" (http://en.wikipedia.org/wiki/Shared_nothing_architecture). Indeed, the NV4V VSA option makes each host in the ESX cluster independent and self-sufficient, with no single point of contention.

The other benefits apply to both bare metal and VSA deployments: NexentaStor detects (via vCenter or directly) and natively uses SSDs for write logging and IO caching, which transparently helps with boot storms and login storms, for instance. The Nexenta appliance of course provides ZFS storage for virtual desktops, and therefore unlimited RAID, unlimited snapshots, end-to-end data integrity and self-healing.
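For reference, on a plain ZFS system those SSD roles map to the log (ZIL/SLOG) and cache (L2ARC) vdev types. A minimal sketch, assuming a pool named tank and placeholder device names (NexentaStor detects and configures this automatically, so this is only to show what is happening under the hood):

```shell
# Add a mirrored SSD pair as the ZFS intent log (accelerates sync writes)
zpool add tank log mirror /dev/sdb /dev/sdc

# Add a single SSD as an L2ARC read cache device
zpool add tank cache /dev/sdd

# Verify the log and cache vdevs are attached to the pool
zpool status tank
```

The log device absorbs synchronous write bursts (login storms) while the cache device soaks up repeated reads (boot storms), which is exactly the split described above.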

My Comment:

Nexenta is the solution I use in my lab environment with two servers. The ability to utilize available host RAM as a caching mechanism, combined with my lack of a good, fast shared storage array, makes it a winner for my lab. Their integration with VMware View is also very interesting, although I have never used it. In my lab environment I am looking forward to adding an SSD as an L2ARC cache device to improve read IO performance, and possibly a separate ZFS log device for writes. I have never seen or used Nexenta in production environments, but I know of people and organizations using it for VDI.




Nimble Storage

Nimble Storage is a flash-enabled hybrid storage solution that combines MLC flash with high-capacity disk. CASL (Cache Accelerated Sequential Layout), the architecture behind Nimble arrays, is a log-structured file system.

In other words, CASL coalesces random writes coming into the system and writes them out to disk as a full RAID stripe. An efficient garbage collection mechanism at the back end ensures that full stripes are available to write to at all times.
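The write-coalescing idea can be sketched as follows. This is a toy illustration of log-structured, full-stripe writes in general, not Nimble's CASL implementation; the stripe size and data layout are invented for the example:

```python
# Toy log-structured writer: random writes accumulate in a buffer and are
# flushed as one sequential full-stripe write (a sketch of the general
# technique, not Nimble's CASL).
class StripeWriter:
    def __init__(self, stripe_size):
        self.stripe_size = stripe_size
        self.buffer = []            # pending (block_id, data) pairs
        self.buffered = 0           # bytes currently buffered
        self.stripes_written = []   # each entry is one full sequential stripe

    def write(self, block_id, data):
        self.buffer.append((block_id, data))
        self.buffered += len(data)
        if self.buffered >= self.stripe_size:
            self._flush()

    def _flush(self):
        # One large sequential write replaces many small random ones.
        self.stripes_written.append(list(self.buffer))
        self.buffer, self.buffered = [], 0


w = StripeWriter(stripe_size=16)
for i in range(8):          # eight small "random" 4-byte writes...
    w.write(i, b"xxxx")
print(len(w.stripes_written))  # ...become two full-stripe sequential writes
```

Turning many small random writes into a few large sequential ones is what lets spinning disk keep up with VDI's random write load.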

Nimble uses flash as a read cache (no writes are staged there, although some writes go to cache based on the IO profile), which allows Nimble to eliminate the overhead of RAID-protecting flash. Metadata is always stored in cache, so even on a cache miss the access is accelerated.

Nimble also uses compression, and it is turned on at all times. The compression algorithm is an LZ variant and natively supports variable-sized blocks. Read storms are addressed using the flash cache, and write peaks using the coalescing techniques outlined above. In addition, efficient redirect-on-write snapshots plus WAN-efficient replication allow instant backup and recovery of persistent virtual desktops.



My Comment:

I have never used Nimble, therefore I am not able to comment on it. Nimble seems to be gaining some traction in the VDI space and I would love to be able to learn more and test the solution. Maybe Nimble can solve that!




Nutanix

Nutanix is a converged (compute plus storage) scale-out infrastructure building block that eliminates the need for external storage arrays in virtualized datacenters. Delivered as a turn-key, stackable 2U rack-mounted enclosure, Nutanix unifies NFS and iSCSI services, highly available distributed physical storage and vSphere hosts on a single tier of commodity hardware. This building-block approach allows organizations to start small VDI projects and stack additional blocks together as user populations and capacity requirements grow.

Distributed software and virtualized storage controllers stitch together to form very large and highly available storage pools, exposing standard NFS and iSCSI services to every Nutanix vSphere host. A 3,000-user View cluster of 64 servers and 350 TB of storage can be stood up ready for deployment in 2-3 hours, without the headaches of storage zoning, multipathing, LUN/datastore provisioning, bandwidth congestion, inter-row or inter-rack cabling and network routing complexity.

Nutanix uses a blend of server-attached PCIe flash and large-capacity SATA drives. Nutanix brings an Apple-like approach to managing cluster state, including: (a) Bonjour auto-discovery to detect and configure new nodes in the cluster, and (b) a unique user experience that brings consumer-grade graphics to the enterprise datacenter and dramatically reduces the number of steps for common storage configuration and management tasks.



The emergence of wireless mobile devices as a dominant force in end-user computing has placed new pressures on desktop IT. Secure mobile desktops are increasingly becoming an essential service for the modern enterprise, but they rely on virtual infrastructure designed for servers, which does not always translate well into a suitable platform for virtual desktops. This makes it difficult for desktop IT to deliver on VDI initiatives in reasonable timeframes, jeopardizing project completion times or, worse yet, causing project failure.

Nutanix storage controllers are delivered as scale-out systems running directly on ESXi, collapsing application and storage servers into a single tier of machines that leverage server-side flash. A 2U cluster of 4 nodes replaces 20U worth of equipment, and Nutanix clusters can effortlessly scale beyond 3,000 users.

Nutanix supports VMware features such as the VAAI and VCAI plugins, vMotion, DRS, HA and FT, using PCIe server-attached flash such as Fusion-io. An innovative blend of PCIe flash and SATA tiers maximizes both IOPS and storage capacity at a low price.

My Comment:

(Disclaimer) Nutanix is a sponsor of this blog. I have never used Nutanix, therefore I am not able to comment on it. Nutanix is an entirely new concept, implementing converged infrastructure and unifying storage and compute. That makes a lot of sense for VDI, while also being able to scale out linearly. Nutanix also seems to be gaining traction in the VDI space, and I would love to learn more and test the solution. Maybe Nutanix can solve that!




WHIPTAIL

VDI's intense write demand, with close to 100% random-access IOPS per desktop, really stresses spinning disk. WHIPTAIL is a family of 100% NAND flash silicon storage arrays that eliminates that stress. Because the WHIPTAIL array is all silicon and does not use a caching tier, write performance is never compromised.

As most enterprise VDI deployments continue to grow, the WHIPTAIL family of products ensures that performance does not degrade and disk creep does not occur. Starting with the ACCELA, you begin with a pool of 250,000 write IOPS. As you add users, simply deduct the IOPS each one needs until you run out. Once you approach total consumption of the array's IOPS, you simply migrate the ACCELA as a node into an INVICTA, which will allow you to support 15,000 to 20,000 individual desktops. WHIPTAIL claims to have shown a VDI boot storm of 600 virtual desktops launching in less than 15 seconds.
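The "pool of IOPS" sizing model described above is simple arithmetic. The per-user write IOPS figure below is my own assumption for illustration, not a WHIPTAIL number:

```python
# Back-of-the-envelope sizing for the "deduct IOPS per user" model.
array_write_iops = 250_000   # ACCELA write IOPS pool, per the vendor
iops_per_user = 20           # assumed steady-state write IOPS per user

max_users = array_write_iops // iops_per_user
print(max_users)             # users supportable before the pool is consumed
```

With a heavier assumed profile (say 50 write IOPS per user) the same pool supports proportionally fewer desktops, which is why the per-user profile matters more than the headline IOPS figure.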

WHIPTAIL does its own over-provisioning, never submitting short writes to the media. According to WHIPTAIL, the two biggest problems for NAND devices are short writes and writes not aligned to the erase block; these speed up wear and fill the drive with garbage. WHIPTAIL aligns to the erase block every time: the write command is put into a buffer, and it does not time out until the block is full and fully aligned. The drives never see a write smaller than 2 MB, giving WHIPTAIL full linear writes and thus ensuring consistent endurance and write performance throughout the drive's life.
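The erase-block-aligned buffering WHIPTAIL describes can be sketched like this. It is a generic illustration of the technique, not WHIPTAIL's implementation; the 2 MB figure comes from the paragraph above, everything else is invented for the example:

```python
# Sketch of erase-block-aligned write buffering: incoming writes are held
# until a full, aligned 2 MB chunk can be issued to the flash media.
ERASE_BLOCK = 2 * 1024 * 1024   # 2 MB, per the vendor's description


class AlignedWriter:
    def __init__(self):
        self.pending = bytearray()  # writes waiting for the block to fill
        self.flushed = []           # each entry is one full 2 MB media write

    def write(self, data):
        self.pending += data
        # Only ever issue full, aligned erase-block-sized writes.
        while len(self.pending) >= ERASE_BLOCK:
            self.flushed.append(bytes(self.pending[:ERASE_BLOCK]))
            del self.pending[:ERASE_BLOCK]


w = AlignedWriter()
w.write(b"a" * (3 * 1024 * 1024))  # a 3 MB incoming write
# One full 2 MB chunk goes to the media; 1 MB stays buffered until the
# next writes fill the erase block.
```

Because the media never sees a partial (short) write, no erase block is left half-filled with garbage, which is the wear and performance benefit claimed above.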

My Comment:

I have never used Whiptail, therefore I am not able to comment on it. Whiptail seems to be gaining some traction in the VDI space and I would love to be able to learn more and test the solution. Maybe Whiptail can solve that!



At the end of the day most storage vendors, especially start-ups, are trying to play in the VDI space. There are other commonly used storage solutions for VDI that I have not covered in this article; I will try to include them in a follow-up post.

  • FalconStor
  • FusionIO
  • NetApp
  • Pivot3
  • Violin Memory
  • Virident
  • XtremIO
  • etc…






    • Dave on 08/09/2012 at 10:45 am

    I would also include Pure Storage in your followup

  1. Great comprehensive writeup here. I’d be interested to hear what your thoughts are regarding Tintri in a similar solution, but it’s nice to see an agnostic review.

    Thanks, Andre.

    • Tim on 08/09/2012 at 2:25 pm

    Thanks for another outstanding article. I’ll also go on record as seconding Matt’s request for a Tintri review.

    • John Nicholson on 08/11/2012 at 10:28 am

    Looking at the different vendor options, a huge theme is auto-tiering to flash. I'm curious about your thoughts on using this feature vs. View's built-in tiering functionality. Is there a huge gain in efficiency here? My linked clones are generally pretty small (200 MB per user), and the cost to just put them into flash, or to rely on beefy array DRAM cache to handle the replica load, makes me wonder whether the associated overhead of block auto-tiering (data movement, shorter life on flash, fragmentation and the generally worse performance of thin-provisioned LUNs with most vendors) is worth it vs. just tiering out linked clones/replicas using View.

  2. @Dave
    Sure, I will also include Pure.
    It seems to me that Pure goes into the same category as XtremIO.
    They are All-Flash scale-out arrays with inline de-duplication.


  3. @Tim
    I will talk to Tintri during VMworld and try to understand what is their play in the VDI space. Their hardware technology seems similar to every other vendor using NAND devices to deliver a hybrid flash and disk architecture.


  4. @John Nicholson
    The tiering functionality in VMware allows you to select in which datastores each part of a virtual desktop is going to be located. The most common design is to place the replica disks in solid state drives for better read performance.

    However, intelligent arrays with automatic tiering that make use of a hybrid flash and disk architecture have, in my opinion, leapfrogged the architecture design proposed by VMware. If you have a solution that makes good, intelligent use of tiering based on block access, you don't need the tiering capabilities in VMware View.

    In regard to automated tiering: for persistent desktops, data movement across multiple tiers (SSD, SAS, SATA) may be beneficial. For non-persistent desktops, only data movement between the flash (SSD) and second (SAS) tiers will be beneficial.

    That is actually a good topic to write about.


  5. Why worry about tiering et al. for VDI linked clones? Just use server memory as storage and boost performance, especially given that server memory is going up in capacity and halving in cost every year.

  6. Hi Andre, This is a great article but to provide a complete picture of flash storage solutions for VDI you may want to include GreenBytes in your review.
    GreenBytes’ IOOE with its patented deduplication makes full-clone VDI deployments on flash cost effective. (and is just as relevant for linked-clone deployments)
    Disclaimer: I’m with GreenBytes 😉

