Nutanix continues to innovate at an incredible pace, with new features released every few months. All of that is possible only because the data and management fabrics are completely independent from the hypervisor, not in-kernel. This enforced detachment has also allowed Nutanix to implement features such as non-disruptive upgrades for the entire stack, including drivers, firmware, and the hypervisors themselves. Automated BIOS upgrades, anyone?!
Anyhow, most hyperconverged solutions utilize a combination of HDDs and SSDs to deliver both capacity and performance for workloads at an acceptable cost. However, Nutanix was also the first vendor to deliver All Flash hyperconverged clusters, providing sub-millisecond latency across entire datasets for latency-sensitive workloads.
I would actually argue that the large majority of workloads don’t require All Flash clusters, because applications’ active working sets are often small and fit inside the SSDs available in hybrid clusters. Nutanix provides a simple way to identify application working set sizes and the read source tier – RAM, SSD or HDD.
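To make the sizing argument concrete, here is a minimal, purely illustrative Python sketch of the reasoning: given per-application working set sizes, check whether they all fit inside a hybrid cluster's SSD tier. The function name, the capacity figures, and the reserve fraction are all hypothetical; this is not output from, or a model of, any Nutanix tool.

```python
# Illustrative only: all names and numbers below are hypothetical,
# not derived from any Nutanix sizing tool.

def fits_in_hot_tier(working_sets_gb, ssd_capacity_gb, reserve_fraction=0.2):
    """Check whether the combined active working sets fit in the SSD tier.

    Keeps a fraction of SSD capacity in reserve (for bursts, metadata,
    and write buffering) rather than allowing the tier to fill completely.
    Returns (fits, total_working_set_gb, usable_ssd_gb).
    """
    usable = ssd_capacity_gb * (1 - reserve_fraction)
    total = sum(working_sets_gb.values())
    return total <= usable, total, usable

# Hypothetical per-VM active working sets, in GB.
working_sets = {"sql-prod": 300, "exchange": 150, "vdi-pool": 400}
fits, total, usable = fits_in_hot_tier(working_sets, ssd_capacity_gb=1600)
print(f"total={total} GB, usable SSD={usable:.0f} GB, fits={fits}")
```

With these made-up numbers, an 850 GB combined working set fits comfortably in a 1.6 TB hybrid SSD tier even after a 20% reserve, which is exactly the situation where All Flash buys little.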
Moreover, by using Data Reduction and Data Avoidance techniques, such as in-line deduplication and VAAI offloads, Nutanix requires even less SSD capacity to host multiple VM datasets on the hot tier. My colleague Josh Odgers has a very good post on Advanced Storage Performance Monitoring with Nutanix.
However, SSD is a more expensive resource, and keeping all VMs in SSD at all times is often not economically viable.
All Flash Only When Required
As you have probably seen, Nutanix recently announced a feature that will allow flash pinning even on hybrid nodes. It is not yet released, but when it is, it will allow VMs or virtual disks to be pinned to the flash tier on a discrete basis.
On a cluster running a SQL database workload alongside other workloads, a large working set may be too big to fit into the hot tier, and data could spill to the cold tier. For extremely latency-sensitive workloads this could seriously affect read/write performance.
The new VM Pinning feature will let administrators tell a Nutanix cluster that a particular disk or VM belongs to a latency-sensitive, mission-critical application, so that its data blocks are never down-migrated from SSD to the cold tier to free up space in the SSD tier.
The pinning process is non-binding: it does not require the full VM or disk to be pinned to SSD, allowing administrators to choose how much of each disk should be pinned to the SSD tier.
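The idea of partial, budgeted pinning can be sketched in a few lines of Python. This is my own toy illustration under invented assumptions (the request list, the priority ordering, and the greedy trimming are all hypothetical), not how Nutanix actually implements the feature:

```python
# Hypothetical sketch of partial SSD pinning under a fixed budget.
# Names, sizes, and the greedy policy are invented for illustration;
# this is NOT the Nutanix implementation.

def allocate_pins(requests, ssd_budget_gb):
    """Grant pin requests (ordered most-latency-sensitive first).

    Each request is a (vdisk_name, gb_to_pin) pair. Requests are granted
    in order until the SSD pin budget runs out; the request that crosses
    the budget is trimmed to whatever capacity remains.
    """
    granted = {}
    remaining = ssd_budget_gb
    for name, gb in requests:
        pin = min(gb, remaining)
        if pin > 0:
            granted[name] = pin
            remaining -= pin
    return granted

requests = [("sql-datafile", 200), ("sql-log", 50), ("reporting-db", 300)]
print(allocate_pins(requests, ssd_budget_gb=400))
```

With a 400 GB budget, the first two requests are granted in full and the lowest-priority disk is only partially pinned, which captures why a non-binding, per-disk knob matters: not every byte of every VM needs the hot tier.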
[Update] In the 4.5 GA release, SSD Pinning was removed from the Prism user interface to maintain simplicity and aesthetics. SSD Pinning is still available via the CLI.
Could this be the end of All Flash?
No, not at all… there are some ginormous workloads out there with immense active working sets and very low latency requirements that definitely require the All Flash treatment; and we have many of them running on Nutanix today, especially in the healthcare and banking verticals. But for the large majority of workloads in mainstream enterprises, the current hybrid approach plus VM pinning will be more than enough to guarantee the required performance and SLAs. I would venture to say that’s it for All Flash solutions when dealing with such workloads.
There’s a lot more to be said about this new feature, and about how the reservation process interacts with overall cluster activity. I will soon publish a video demonstration of the feature.
This article was first published by Andre Leibovici (@andreleibovici) at myvirtualcloud.net