One great aspect of working for a cutting-edge storage vendor such as Nutanix is that, for all of the competitive sniping in the marketplace, the technology really does speak for itself.
A question asked of me recently by an individual new to SDS had me thinking about the traits that make solutions different from each other. Naturally, different solutions have distinctive architectures, features, and capabilities, but occasionally these are not enough to make a clear distinction.
I was posed a question about the difference between Nutanix and other SDS vendors. As this person was totally new to the technology, I could not talk about the fundamental differences between MapReduce and other metadata management methodologies, or the differences between replication factor based on 4MB extents versus large block chunks, or the benefits of using commodity x86 hardware versus application-specific integrated circuits (ASICs), or even between hypervisor-embedded and user-space approaches.
In this specific case, where I really needed to simplify things, I used a metaphor comparing SDS solutions to cars: there are different types of cars, each with its own features and functions, and you pick the one that suits you best.
One thing that struck me when I recently came across the new VMware Virtual SAN for Horizon View paper was the vast difference in requirements between VSAN and Nutanix. These have a huge impact from a hardware and budgetary perspective. To assist from a customer perspective, I want to comment on a few important differences between the two products.
Before I move forward with this article I would like to state that I consider VMware vSphere the best overall hypervisor on the market today, but that doesn’t mean every feature is the best of breed.
HDD Linked Clones – For Linked Clones Virtual SAN recommends the use of 4 x 15,000 RPM SAS disks. Nutanix runs the exact same workload with better user experience and latencies using significantly cheaper and lower performance SATA drives (4 x 7.2K SATA).
Another option for Nutanix is to limit Linked Clones to the SSD tier while using native VAAI-NAS cloning, thus reducing the number of IOPS required.
HDD Full Clones – For Full Clones Virtual SAN recommends the use of 12 x 900GB 10K SAS drives. Nutanix runs the exact same workload with better user experience and latencies using the same 4 x 7.2K SATA disks.
These benefits relate not only to CAPEX, but also to OPEX, thanks to the reduced power consumption that comes with the smaller footprint. My colleague Martijn Bosschaart wrote an excellent article demonstrating how Nutanix OPEX compares to VBlock and FlexPod in terms of power and cooling. I highly recommend reading it here.
HDD Capacity – The VMware Virtual SAN paper recommends 1.2TB of raw capacity for Linked Clones, which is roughly what Nutanix would require, since de-duplication and management are handled in the virtualization layer via View Composer Linked Clones. When it comes to full clones, however, the requirement is 10.8TB per node, while Nutanix, thanks to its performance- and capacity-tier de-duplication features, uses only 1.26TB per node.
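To put those two full-clone capacity figures in perspective, the arithmetic below works out the effective reduction ratio implied by the numbers quoted from the reference architectures (a simple illustration of the quoted figures, not an independent measurement):

```python
# Full-clone capacity per node, as quoted from the two reference architectures
vsan_tb = 10.8       # VSAN raw capacity requirement per node
nutanix_tb = 1.26    # Nutanix capacity per node with de-duplication

ratio = vsan_tb / nutanix_tb
print(round(ratio, 1))  # -> 8.6, i.e. roughly an 8.6x capacity reduction
```

In other words, de-duplication lets the same full-clone workload fit in less than one eighth of the raw capacity.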
Network Adapter – The Horizon View RA with VMware Virtual SAN recommends the use of jumbo frames. Jumbo frames decrease the CPU utilization of the network stack, in turn increasing the potential consolidation ratio on hosts where CPU utilization becomes a bottleneck. There are benefits to using jumbo frames with Nutanix too, but they are not a requirement.
[Update] Another document from VMware mentions that Jumbo Frames are not a requirement.
IOPS – VMware Virtual SAN relies on CBRC to offload read IOPS from the cluster and network when using Linked Clones. CBRC is a 100% host-based, RAM-based caching solution that helps reduce read I/Os issued to the storage subsystem, improving its scalability while being completely transparent to the guest OS.
While CBRC does offload read I/Os, it uses a mechanism that bursts CPU and IOPS during the data-hashing process. This process runs every time a desktop pool is created or recomposed, and VMware recommends that administrators only execute these operations during maintenance periods. Nutanix provides the same CBRC benefit using in-line performance-tier de-duplication on SSD and in memory, providing a similar microsecond-latency user experience.
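The paper does not detail either vendor's fingerprinting internals, but content-based de-duplication in general works along the lines of this minimal sketch: each block is hashed, and only blocks with previously unseen fingerprints consume physical capacity (the 4KB block size and SHA-1 fingerprint here are illustrative assumptions, not either product's actual design):

```python
import hashlib

def dedup_write(blocks, store):
    """Write blocks to a content-addressed store, keeping only unique data.

    'store' maps fingerprints to block contents; a repeated block costs
    only a fingerprint lookup and a reference, not extra capacity.
    """
    refs = []
    for block in blocks:
        fp = hashlib.sha1(block).hexdigest()
        if fp not in store:      # new data: store it once
            store[fp] = block
        refs.append(fp)          # duplicates just reference the stored copy
    return refs

store = {}
# Ten identical 4KB "clone" blocks plus one unique block
blocks = [b"A" * 4096] * 10 + [b"B" * 4096]
refs = dedup_write(blocks, store)
print(len(refs), len(store))  # 11 logical blocks, only 2 physical blocks
```

This is why cloned desktops, which share most of their blocks, de-duplicate so effectively.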
Memory – The reference architecture mentions that each server commits 70% of the total available memory, probably for cluster HA purposes. The total amount of memory per host used for 100 desktops is 165GB, and CBRC uses 2GB. With VMware's 256GB per-host memory requirement, 70% committed is roughly 179GB; subtracting the desktops and CBRC leaves approximately 12.2GB of memory allocated to Virtual SAN.
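The 12.2GB figure follows directly from the numbers in the reference architecture; the short calculation below reproduces it (a restatement of the paper's own figures, not additional data):

```python
# Per-host memory arithmetic from the VSAN reference architecture figures
total_ram_gb = 256                       # per-host memory requirement
committed_gb = 0.70 * total_ram_gb       # 70% commitment (HA headroom) -> 179.2GB
desktops_gb = 165                        # memory used by 100 desktops
cbrc_gb = 2                              # CBRC cache

vsan_gb = committed_gb - desktops_gb - cbrc_gb
print(round(vsan_gb, 1))  # -> 12.2, the memory left for Virtual SAN itself
```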
To support Linked Clones, Nutanix requires 16GB of memory, but no additional CBRC allocation (2GB) is required, since Nutanix performs block de-duplication.
To enable block de-duplication at the performance tier, Nutanix requires 24GB of RAM, while 32GB of RAM is required to enable MapReduce de-duplication for the capacity tier. Please note that full-clone VMs do not require MapReduce de-duplication when cloned using VAAI.
I would have given Virtual SAN a slight advantage on the memory argument over Nutanix if it were not for the following sentence in the VMware vSphere documentation center: "During prolonged periods of high random write I/O load from multiple virtual machines, the system might experience high memory over commitment. This can make a Virtual SAN cluster unresponsive. Once the load is decreased, the cluster becomes responsive again" (link)
Nutanix also allows administrators to assign additional RAM to caching for further performance improvements. In the Nutanix reference architecture, a total of 32GB was assigned to the Nutanix Controller VMs, since the vSphere hosts were not overcommitted.
Datastore – It's a documented fact that the current VMware Virtual SAN release supports a maximum of 2,048 desktops while maintaining data protection. You can still have 3,200 VMs in a VSAN cluster, but only 2,048 will be protected. Virtual SAN also has a soft limit of 100 virtual desktops per host.
Nutanix imposes no limit on the number of VMs protected per datastore, and because Nutanix supports multiple datastores per cluster, there is no limit to the number of VMs that can be protected, even taking VMware's own limits into consideration.
CPU – Nutanix uses 8 vCPUs to run all features, including data replication, de-duplication, compression, backups, snapshots, data tiering, etc. Virtual SAN is said to use a maximum of 10% of the total host CPU.
However, a quick look at the VSAN reference architecture demonstrates that it can easily utilize close to 40% of the available host CPU cycles to deliver the required IOPS.
[Update 1] I was corrected by Wade Holmes: in the VSAN reference architecture, each host has a total of 46.5GHz available, and 2.5GHz is the average used. This is approximately 10% of the total GHz available.
[Update 2] Prompted by a reader, I re-analyzed the results in the reference architecture. According to the reference architecture, at peak VSAN uses approximately 7GHz per ESXi server during the Heavy workload. That is 15.1% of total CPU, not the 10% advertised. That would all be fine if VMware were not advertising that VSAN never uses more than 10% CPU. Read more in my comment below.
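The peak figure is easy to check against the advertised ceiling using the reference architecture's own numbers (this simply re-derives the percentage from the figures quoted above):

```python
# Check the peak CPU figure against the advertised 10% ceiling
host_ghz = 46.5        # total GHz available per host (from the RA)
peak_vsan_ghz = 7.0    # approximate peak VSAN usage under the Heavy workload

pct = peak_vsan_ghz / host_ghz * 100
print(round(pct, 1))   # -> 15.1, well above the advertised 10%
```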
Windows RDS – VMware Horizon 6 enables application remoting via Windows RDS. On Nutanix, each Windows RDS server can be cloned in approximately 6 seconds and is natively de-duplicated; as a matter of fact, Nutanix operates only on metadata, avoiding data duplication during the cloning process. For this reason, creating 1 or 100 RDS servers will not impact performance or capacity.
The Horizon View with VSAN reference architecture does not include RDS application servers, even though they are an inherent part of the solution. The lack of data de-duplication services in VSAN penalizes this type of deployment with large storage capacity requirements and heavy SSD cache staging/de-staging operations.
So, does Nutanix run Horizon 6 more efficiently than VSAN?
As you can observe from this simple comparison, SDS solutions are not created equal, and they perform differently under similar conditions with a comparable amount of assigned resources. In the referenced reference architecture, Nutanix provides better resource utilization with lower hardware requirements and an additional 10 virtual desktops per host.
Add to that the linear scale-out approach, the ease of management, the performance delivered by in-memory and SSD caching, automated tiering, data locality, and shadow cloning, and Nutanix is the clear platform leader for Horizon View Linked Clones or Full Clones, for persistent or floating desktops.
I’ll leave you with this great pictorial.
One of our Nutanix customers was so impressed with the reduced power consumption and heat output of Nutanix (left) versus a competitor's rack (right) that they decided to take infrared thermal pictures of their datacenters. Looking forward to what's next, it only gets better for the customer.
I wrote this article to the best of my knowledge, comparing numbers with the published Horizon View with VSAN Reference Architecture. If you feel my article or numbers are incorrect, or do not portray how either product works or behaves, please feel free to advise and I will update the article accordingly.
This article was first published by Andre Leibovici (@andreleibovici) at myvirtualcloud.net.