Nutanix 3450: up to half a million IOPS with a single-block deployment

One of Nutanix's partners, DH Technologies, created the video below demonstrating a Nutanix block with 4 nodes pushing almost half a million IOPS (483,492). That is over 120,000 IOPS per node, potentially achieving an astonishing 3.8 million IOPS in a 32-node cluster without the help of expensive PCIe flash devices like Fusion-io or LSI Nytro WarpDrive, and potentially reaching tens of millions of IOPS, since there is no theoretical limit to Nutanix cluster size.
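As a sanity check on the arithmetic, the per-node and 32-node figures can be reproduced with a naive linear projection. The measured number comes from the demo; the perfect-linear-scaling assumption is mine, and real clusters rarely scale perfectly:

```python
# Figures quoted in the article; the linear extrapolation is an assumption.
BLOCK_IOPS = 483_492      # measured on one 4-node block
NODES_PER_BLOCK = 4

per_node = BLOCK_IOPS / NODES_PER_BLOCK

def projected_iops(nodes, per_node_iops=per_node, efficiency=1.0):
    """Naive linear projection; efficiency < 1.0 would model scaling loss."""
    return nodes * per_node_iops * efficiency

print(f"per node: {per_node:,.0f} IOPS")            # 120,873
print(f"32 nodes: {projected_iops(32):,.0f} IOPS")  # 3,867,936 (~3.8M)
```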

This is more IOPS per node than an EMC VNX-7500 will push for an entire array, and with only two Nutanix blocks, more than an EMC VMAX platform will be able to push. It is also more than any other SAN will be able to provide, because the limited storage controller architecture creates bottlenecks. Nutanix storage controllers scale out along with the cluster, so they do not run into those bottlenecks.

It all sounds like music to my ears, except for one small but very important caveat: the workload was crafted as 100% read IO with 4 KB blocks, and the working set is only large enough to fit into the Nutanix caching tier.
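To see why this caveat matters so much, consider a rough service-time model: effective IOPS are dominated by where reads are served from. The latency figures below are generic orders of magnitude I'm assuming for illustration, not Nutanix measurements:

```python
# Approximate per-IO service times in microseconds; illustrative assumptions.
LATENCY_US = {"ram": 5, "ssd": 100, "hdd": 8000}

def iops_per_stream(ram_hits, ssd_hits):
    """IOPS for one outstanding-IO stream; benchmarks run many in parallel."""
    hdd_hits = 1 - ram_hits - ssd_hits
    avg_us = (ram_hits * LATENCY_US["ram"]
              + ssd_hits * LATENCY_US["ssd"]
              + hdd_hits * LATENCY_US["hdd"])
    return 1_000_000 / avg_us

print(round(iops_per_stream(1.0, 0.0)))  # cache-resident reads: 200000
print(round(iops_per_stream(0.2, 0.6)))  # a more mixed profile: 602
```

A workload that fits entirely in cache is two to three orders of magnitude faster per stream than one that regularly touches spinning disk, which is exactly why cache-resident benchmarks produce such spectacular headlines.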

Most architecture documents, white papers, and benchmarks published by vendors are purposely crafted to demonstrate performance in the way that best suits their platforms, always achieving numbers that will blow you away!

The very important point I want to make in this article is this: the next time you see a vendor claiming they can achieve 500,000, 2 million, or 5 million IOPS, do not trust them! At least, not until you see and understand exactly how those numbers were generated. The same applies to other scale-out solutions!

Getting the right storage workload requirements can be very difficult, mostly because of the lack of information about your real production workload; and I guarantee it is not 100% read with 4 KB blocks.

If you do not have information about your workload, or if it is a new working set, it is possible to use some pre-trended baseline numbers, but these are only an indication of what the workload will likely be; in many cases the real workload will be an outlier.

One of the most important items, beyond the raw number of IOPS, is the read/write IO pattern. The read/write ratio determines how many spindles would be required to support the workload in a given RAID configuration. Nutanix does not have the concept of RAID groups: read IOs will frequently come from RAM or local SSD thanks to data locality, while write IOs always go to the local SSD and to n additional hosts, depending on the replication factor (I will write more about that in the future).
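For a traditional array, the read/write ratio feeds into spindle sizing through the RAID write penalty. Here is a hypothetical back-of-the-envelope calculation; the workload numbers and the 180 IOPS-per-disk figure are my assumptions, while the write penalties are the commonly cited values:

```python
import math

# Commonly cited back-end write penalties per front-end write.
RAID_WRITE_PENALTY = {"RAID10": 2, "RAID5": 4, "RAID6": 6}

def backend_iops(frontend_iops, read_ratio, raid_level):
    """Translate front-end IOPS into back-end disk IOPS."""
    reads = frontend_iops * read_ratio
    writes = frontend_iops * (1 - read_ratio)
    return reads + writes * RAID_WRITE_PENALTY[raid_level]

def spindles_needed(frontend_iops, read_ratio, raid_level, iops_per_disk=180):
    """180 IOPS/disk is a typical figure for a 15K RPM spindle."""
    return math.ceil(backend_iops(frontend_iops, read_ratio, raid_level)
                     / iops_per_disk)

# The same 10,000 front-end IOPS at a 70/30 read/write ratio:
print(spindles_needed(10_000, 0.7, "RAID5"))   # 106 disks
print(spindles_needed(10_000, 0.7, "RAID10"))  # 73 disks
```

Shift that same workload to 100% read and the spindle count drops to 56, which is one more reason a 100%-read benchmark flatters any platform.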


Here are two good articles by Josh Odgers describing Nutanix data locality concepts:



Ultimately, this video is very cool to watch! But again, be aware that this is only a workload demonstration that is unlikely to bear any similarity to real workloads. It is also interesting how easy it is to mislead and draw attention to an article just by using titles à la The Register.


Thanks to DH Technologies for providing this!

This article was first published by Andre Leibovici (@andreleibovici) at


    • michael on 04/06/2014 at 3:30 pm

    Which 10 gigabit switches did you use? Cisco, Arista, or other?
    Could you please run an iperf test from cvm1 to cvm2? Do you get 9 Gbit/s?

    • michael on 04/07/2014 at 1:16 am

    Can you show high iperf results like in this blog? 18.3 Gbit/s

    • Vladimir Deneko on 04/18/2014 at 6:11 am

    A rather strange test, because:
    1. The Nutanix recommendation says:
    Never use the C: drive for testing. It is NTFS formatted, data will be cached,
    and IOMeter can potentially interfere with Windows and vice versa. Ideally, in a
    pure test scenario, the C: drives would not be on the system under test. Since
    IOMeter should be our only workload, this should not be an issue.
    In the test we see the C: drive being used.
    2. How much memory was set per CVM?
    3. How much content cache per vdisk was set with gflags in the test?

  1. @Vladimir Deneko, this is not a realistic test, as I mentioned in my article, and a number of recommendations for performance tests were not followed. It is here only to demonstrate that you should not blindly believe the numbers published by the majority of storage and converged solutions out there.

    • Michael on 04/19/2014 at 1:08 am

    What happens if you put 8 SSDs in one node? Call it an all-flash node.
    One Intel SSD delivers 75,000 IOPS, so 8 SSDs deliver 600,000 IOPS. One VM with a 30 GB hard disk is shown. To get 480,000 IOPS, 8 SSDs must be used for a 4K read workload served from SSD, not RAM.
    Another design question: what happens when the workload is spread over all 4 nodes and the 8 SSDs without replication? Is this possible?
    What happens if, instead of 24 GB of RAM, one or several Diablo Technologies 400 GB DIMMs are used?
    So this video shows the potential of the Nutanix software.

  2. @Michael, today Nutanix does not have an all-flash node. Most working sets will fit nicely into the performance tier (SSD and RAM). We do realize, however, that there is a small subset of applications that would benefit from an all-flash node or cluster, and we are obviously looking into it.

    The demo you see was recorded with a standard Nutanix 3060 setup, with 2 SSDs, in a cluster with 4 nodes. The trick was to add additional RAM to the CVM and make sure the IOMeter working set would fit into the CVM RAM. In this case, all reads were being served from RAM; but as I said, this is not a real workload.

    Diablo-like technologies will definitely change the game here. I cannot comment on that yet, but stay tuned 🙂

    • Michael on 04/21/2014 at 5:19 am

    @Andre: like NVMe.
    Let's see what the test with the Dell 720 brings in terms of performance.

    In the end, a deep look at the filesystem is very helpful. 🙂

  3. @Michael, NVMe will likely change the game, and in a good way for Nutanix. NVMe has the capacity to fully saturate network bandwidth if the solution does not have something like data locality.
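    A quick back-of-the-envelope check of that point, with illustrative device figures I'm assuming rather than measurements: a single fast flash device can already outrun a 10GbE link, so without data locality the network becomes the cap on remote reads.

```python
# Illustrative figures, not measurements.
nic_bytes_per_s = 10e9 / 8     # 10GbE: at best ~1.25 GB/s of payload
nvme_bytes_per_s = 3.0e9       # a modern NVMe device: ~3 GB/s sequential reads

# Without data locality every read crosses the wire, so remote-read
# throughput is capped by the slower of the two paths:
cap = min(nic_bytes_per_s, nvme_bytes_per_s)
print(cap / 1e9)  # 1.25 -> the NIC, not the flash device, is the limit
```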

    • Michael on 04/22/2014 at 9:15 am

    Did you do any performance tests with Storage vMotion on the same Nutanix node, from one NFS container to the other? There is a 200 MB/s limit with the Intel SSD during Storage vMotion.

  1. […] such benefit because it relies solely on a single flash device per disk group per server. (read this test that generated half million IOPs with single […]

  2. […] Check out his blog at […]
