Nutanix Traffic Routing: Setting the Story Straight

There are many great benefits to hyper-converged technologies; the downside is that there can be some misunderstandings in the community about how the product and architecture actually work.

Specifically, there has been some discussion of late about the I/O path through the hypervisor, and there appears to be a knowledge gap about how this actually works.

Let me take this opportunity to educate and clarify….

There was some misinformation recently posted stating that Nutanix extends the I/O path, forcing VM I/O to go through the hypervisor twice: once when it leaves the guest VM, and again when it leaves the Nutanix Controller VM (CVM).

While it is true that VM traffic goes through a VSA, it is incorrect to say that VM traffic goes via the hypervisor twice to access disk devices. Nutanix uses VMDirectPath I/O pass-through to give the CVM direct access to the disk controllers and all disk devices, removing the extra hypervisor hop. The hypervisor is actually oblivious to the SSD/HDD devices. This allows the native controller drivers to be used, avoiding any bugs that might be introduced by non-standard or custom-written drivers for the hypervisor. The picture below demonstrates the simplicity of the Nutanix data path with VMDirectPath I/O.
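To make the distinction concrete, here is a toy sketch (not Nutanix code; the hop names are simplified assumptions) modeling the components a guest read request traverses with and without pass-through:

```python
# Illustrative model only: the real storage stacks have more layers,
# and the hop names here are assumptions made for the example.

def io_path(passthrough: bool) -> list[str]:
    """Return the ordered components a guest VM read traverses."""
    path = ["guest VM", "hypervisor vSCSI"]  # guest I/O enters the hypervisor once
    path.append("Nutanix CVM")               # the VSA that serves the request
    if passthrough:
        # VMDirectPath I/O: the CVM owns the disk controller directly,
        # so the request does NOT re-enter the hypervisor storage stack.
        path += ["disk controller (native driver in CVM)", "SSD/HDD"]
    else:
        # Without pass-through, the CVM's own disk access would be
        # virtualized, adding a second trip through the hypervisor.
        path += ["hypervisor storage stack", "disk controller", "SSD/HDD"]
    return path

direct = io_path(passthrough=True)
emulated = io_path(passthrough=False)

# The pass-through path skips the second hypervisor traversal entirely.
assert "hypervisor storage stack" not in direct
assert len(direct) == len(emulated) - 1
```

The point of the sketch is simply that the hypervisor appears once in the pass-through path, not twice.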




The picture above also demonstrates that all I/O is served via the CVM, where most read I/Os are actually served from RAM, providing the microsecond read latency that Nutanix customers love. VSAN, unfortunately, is not in the enviable position of providing such a benefit because it relies solely on a single flash device per disk group per server. (Read this test that generated half a million IOPS with a single block.)

The claims against the VSA approach are all the more unfortunate because, when you look deeper at how VSAN offers basic table-stakes enterprise data services, you quickly notice that the story is flawed. To deliver simple day-to-day services like compression, de-duplication, and encryption, VSAN needs to rely on third-party external VSAs that are not maintained by VMware and therefore do not undergo any type of code inspection to ensure security and reliability.

These services also run in hypervisor user space and require additional resources, such as RAM and CPU, for the applications and the multiple virtual machines. Furthermore, they dismantle the 'single throat to choke' support story.




I’m happy to be able to set the story straight on the VSA approach. I believe that the benefits of integrated and unified data services aid in providing a robust, secure, and future-proof platform that, at the end of the day, benefits customers.


This article was first published by Andre Leibovici (@andreleibovici) at


1 ping


    • forbsy on 03/29/2015 at 1:05 pm

    Hi Andre. To be fair, I think you have to give the entire story as to why some believe that a Controller Virtual Appliance presents performance issues compared to an integrated hypervisor approach. Yes, it’s incorrect to state that VM traffic goes through the hypervisor twice, but that’s hardly the only issue. Frank Denneman provides a nice writeup of the kernel mode vs. virtual appliance approach (Basic elements of the flash virtualization platform – Part 1):

  1. Great post Andre, it seems timely that I have published the following article, which addresses Forbsy’s comment.

    • forbsy on 03/30/2015 at 8:47 am

    @andre – Don’t CVMs also require additional CPU and RAM resources? If those resources are accounted for when sizing for Nutanix, why wouldn’t the same hold when sizing for security and compression/dedupe appliances for VSAN? I’m not sure you can slam one approach when CVMs fall into the same boat (as far as resources).

  2. @forbsy, Ultimately it doesn’t matter; the point is that some people have been emphatic against the VSA approach when their own story is actually flawed and requires VSAs (in the data path) for data services that every storage platform has by default nowadays. These VSAs also introduce a SPOF into a rather distributed storage architecture.

    • forbsy on 03/30/2015 at 9:26 am

    @andre. You make good points about solutions like VSAN that might be talking out of both sides of their mouth. I personally prefer a solution like PernixData because it allows those data services to remain on the enterprise SAN/NAS, while using the benefits of kernel integration to deliver the most efficient I/O. I suppose I’m referring to FVP when discussing the benefits of kernel integration vs. VSAs.

  3. @forbsy, PernixData FVP is a good solution to amplify SAN performance or extend the life of existing arrays. The Nutanix distributed architecture is a different model for the datacenter altogether. We need another article to discuss datacenter architectures.

    • forbsy on 03/30/2015 at 10:02 am

    @andre. I’m very familiar with datacenter architectures :). I kind of disagree with your perspective but that’s ok :). FVP decouples storage capacity from performance, and allows for linear scaling of datacenter resources – without a rip and replace architecture. It also employs a distributed architecture. I would argue that what FVP and Nutanix provides is more similar than you think.

  4. @forbsy I just would like to say I very much appreciate your readership, contributions, and commentary over the years. Your first comment on my blog dates back to 2013.

    • forbsy on 03/31/2015 at 11:14 am

    @andre. I enjoy reading your blogs. Always very informative! I might not always agree 100% with what you might say, but what fun would things be if we all agreed, all the time :-)? Looking forward to your next blog.

  1. […] refers to “hypervisor”, not to “ultra”). The image below from Andre Leibovici’s post, Nutanix Traffic Routing: Setting the Story Straight, shows the much more elegant and efficient access to data enabled by […]
