
A picture is worth a thousand words, a video even more… (EVO:RAIL vs Nutanix)

A Nutanix prospect published this INCREDIBLE video of a bake-off (POC) they ran between EVO:RAIL and Nutanix. Before people say it is not an apples-to-apples comparison, I should point out that although the hardware models behind the two solutions may differ, the Nutanix list price is only approximately $5K higher, while delivering much better performance and a much lower $/IO and $/GB thanks to Nutanix compression, de-duplication and VAAI. Reach out to one of the many Nutanix partners to run your own tests and comparisons.

Here is what the customer had to say about the performance tests.

My company is currently evaluating the Nutanix and EVO:RAIL hyper-converged platforms. Wanted to share the outcome of a test I ran.

Very basic test … how quickly can each of these platforms deploy 50 guestVMs from a template. The template consists of a single 60GB ThickEagerZeroed vmdk. The guestVMs are being deployed via a PowerCLI script. Nutanix took just over 1 minute 35 seconds to deploy all 50 guestVMs. EVO:RAIL finally finished in just over 1 hour and 45 minutes. Crazy performance difference!
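The exact script was not shared, but a minimal PowerCLI sketch of this kind of test might look like the following (the vCenter address, template, host and datastore names are hypothetical):

```powershell
# Minimal sketch: clone 50 VMs from a single template and time the whole run.
Connect-VIServer -Server "vcenter.lab.local"

$template  = Get-Template  -Name "Win2012-Template"
$vmhost    = Get-VMHost    -Name "esxi01.lab.local"
$datastore = Get-Datastore -Name "datastore01"

$timer = [System.Diagnostics.Stopwatch]::StartNew()

# Kick off all 50 clone tasks asynchronously, then wait for them to finish.
$tasks = 1..50 | ForEach-Object {
    New-VM -Name ("clone{0:D2}" -f $_) -Template $template `
           -VMHost $vmhost -Datastore $datastore -RunAsync
}
Wait-Task -Task $tasks

$timer.Stop()
Write-Host ("Deployed 50 clones in {0}" -f $timer.Elapsed)
```

Whether the copy is offloaded to the storage layer or performed by the host data mover is decided underneath a script like this, which is exactly what the timings above expose.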

[Watch in Full Screen]

To the now Nutanix customer, thanks for sharing this video!

This article was first published by Andre Leibovici (@andreleibovici) at myvirtualcloud.net.

18 comments


  1. kjolivier

    :popcorn:

  2. Dan D

    “Andre Leibovici – I am currently working for Nutanix as Director of Technology Alliances.”

    Compelling marketing video, but not quite scientifically rigorous enough to be of actual value.

  3. Chris AB

    This is funny!
    The script seems to request “full clones” through the vSphere API.
    3 TB (50 x 60 GB) written over 1 hr 45 min (at approx 0.5 GB/sec) in the case of VSAN.
    Nutanix instead does “thin clones” through the VAAI call. So, it writes 60 GB, plus 50-odd small deltas in 1 min 35 sec (guess what — that’s approx 0.5 GB/sec).

    So, let’s get serious. Such posts undermine the credibility of an otherwise good product (Nutanix).
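    A quick arithmetic check of the figures quoted in this comment (a sketch; the timings are those shown in the video):

    ```powershell
    # Rough data-movement rates implied by the quoted timings.
    $fullCloneGB = 50 * 60     # 50 full clones of a 60 GB template = 3000 GB written on VSAN
    $evoSeconds  = 105 * 60    # ~1 hr 45 min
    $thinCloneGB = 60          # roughly one template's worth actually written with VAAI thin clones
    $ntnxSeconds = 95          # ~1 min 35 sec
    "{0:N2} GB/s on EVO:RAIL/VSAN" -f ($fullCloneGB / $evoSeconds)   # ~0.48 GB/s
    "{0:N2} GB/s on Nutanix"       -f ($thinCloneGB / $ntnxSeconds)  # ~0.63 GB/s
    ```

    Both rates land in the same ballpark of roughly 0.5 GB/sec, which is the point being made above.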

  4. Andre Leibovici

    @Dan D, I did not record or publish this video, nor upload it to YouTube. As a matter of fact, I don’t even know who did it, and I don’t know the company either. I was later told it was a prospect that is now a Nutanix customer.

  5. Andre Leibovici

    @Chris AB, Does it matter if a racer has longer legs than other competitors? No, it doesn’t.

    Nutanix has implemented the VAAI primitives and performs metadata operations to avoid unnecessary data copies and to speed up these operations. vSAN could do the same, but it doesn’t.

    The same could be said about space allocation: Nutanix implements de-duplication and compression to save storage capacity, and vSAN doesn’t.

    I don’t see how this is an unfair comparison.

  6. BIG M

    Nice, thanks for the video.

  7. Michael

    The test should be repeated for both products on a Dell 730. It would also be interesting to test it on vSphere 6.0.

  8. John Nicholson.

    Why is it that I always feel like clone offload is a party trick that no one actually uses in production unless they are doing something dumb/wrong? VMware has a native Linked Clone API that works great and avoids sending unnecessary requests in the first place. Those of us who actually clone out 50 VMs at a time a lot (VDI) use Linked Clones to avoid spamming unnecessary clone operations. Combined with Sparse Disks for reclaim, I’m kind of skeptical of the long-term value of block-based data reduction for VDI.
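    John is referring to linked clones created natively through the vSphere API. As a point of reference only, here is a minimal PowerCLI sketch of that call (hypothetical VM, snapshot and host names; it assumes a snapshot named "base" already exists on the parent VM):

    ```powershell
    # Sketch: spin up 50 linked clones from an existing snapshot, so only deltas get written.
    $parent = Get-VM -Name "GoldImage"
    $snap   = Get-Snapshot -VM $parent -Name "base"
    $vmhost = Get-VMHost -Name "esxi01.lab.local"

    1..50 | ForEach-Object {
        New-VM -Name ("ldclone{0:D2}" -f $_) -VM $parent `
               -LinkedClone -ReferenceSnapshot $snap `
               -VMHost $vmhost -RunAsync
    }
    ```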

  9. Chuck Hollis

    Chuck from VMware here, I work on VSAN.

    A colleague sent me this link. I felt obliged to comment.

    What seems to be happening is that writes associated with the cloning operation are being deferred on the Nutanix box, but not on the VSAN system; in other words, thin clones.

    Why do I say that? Because doing full, logical clones of 50 60GB templates (3 TB) in 1 min 35 sec would imply an aggregate disk copying speed of around ~31GB per second — way, way beyond current disk or flash technology.

    As you know, the downside of deferring writes is that, depending on workload, those 50 VMs and their VMDKs may be awfully slow on startup. No free lunch when it comes to cloning, I’m afraid.

    Your comment about “longer legs” misses the point; it’s about taking the subway to the marathon finish line and claiming you won the race. It appears that the VSAN-based system did the requested work (i.e. full clones), while the Nutanix system saved that joy for later. Both approaches have their pros and cons, which it would have been nice of you to discuss.

    I wanted to verify my suspicions, but I hit a dead end. Seems like “DY” who posted this used a scratch account, and didn’t want to be contacted. I can see why — it’s the storage equivalent of a party trick.

    I see from the comments that others have taken you to task for promoting this on your blog. I’d agree with them — this doesn’t reflect well on you or your company. You post it, you endorse it — and so does your company. Personally, I’d never do something like this.

    Let’s be clear: I have no problems with head-to-head comparisons that reflect what customers want to know about competing products. But this sort of nonsense just muddies the water.

    Hopefully we can do better in the future?

  10. Chris AB

    Andre, the comparison should be against Linked Clones on Evo/vSAN.
    Your argument would hold water if VMware did not have thin/linked clones. But they do. Comparing against Full Clones makes no sense.
    You’ve been at VMware. I am sure you know about the differences.

    As a prospective customer, I would like to know how Nutanix’s thin clones compare to VMware’s equivalent — linked clones.

    I am surprised you still insist on the legitimacy of this comparison!

  11. Andre Leibovici

    @John Nicholson, linked cloning (View Composer) is a technology developed for Horizon View and can ONLY be used with Horizon View or XenDesktop (MCS). The clones are created and managed at the application layer, with an external database, and are completely independent from the storage layer from a management perspective. I am not sure how you would bring Linked Clones into this comparison when they are only used with Horizon View; that’s a completely different technology.

    ——

    @Chuck Hollis, you say “As you know, the downside of deferring writes is that, depending on workload, those 50 VMs and their VMDKs may be awfully slow on startup”. May I suggest you create a video cloning and booting 50 VMs on EVO:RAIL, and I will do the same on Nutanix (you pick the image); we will then jointly publish the results. Would that be fair?
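    A sketch of what the boot half of such a joint test could look like in PowerCLI (hypothetical clone-name pattern; “booted” here means VMware Tools reports running in the guest):

    ```powershell
    # Power on all 50 clones and time how long until every guest reports Tools running.
    $clones = Get-VM -Name "clone*"
    $timer  = [System.Diagnostics.Stopwatch]::StartNew()

    $bootTasks = Start-VM -VM $clones -RunAsync
    Wait-Task -Task $bootTasks

    do {
        Start-Sleep -Seconds 5
        $up = (Get-VM -Name "clone*" | Where-Object {
            $_.ExtensionData.Guest.ToolsRunningStatus -eq "guestToolsRunning"
        }).Count
    } until ($up -eq $clones.Count)

    $timer.Stop()
    Write-Host ("All {0} clones booted in {1}" -f $clones.Count, $timer.Elapsed)
    ```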

    I am surprised to hear you call metadata cloning “the storage equivalent of a party trick”, since it is the same type of technology used by XtremIO, the product you supported and evangelized in your previous role. This is table-stakes technology now.

    I endorse the video because it is a purely “technical” performance comparison. Getting personal with me isn’t necessary and won’t change my opinion. What would change my opinion is concrete evidence that EVO:RAIL can match this cloning and boot performance.

    There’s no FUD here; the performance results are what they are.

    ——

    @Chris AB, “Comparing against Full Clones makes no sense.” – Why? Are you saying that VSAN is only good when cloning Linked Clones for VDI? Remember that Linked Clones are only available with Horizon View.

  12. Andre Leibovici

    @Michael, substituting the server for a Dell 730 as you suggest would not make a difference in the time it takes to execute the cloning operations. The performance benefit comes from the Nutanix VAAI implementation, which executes metadata operations to clone virtual machines instead of performing full data copies. EVO:RAIL/VSAN does not support VAAI today, so it has to copy all of the VM data when doing full clones, exhausting SSD and HDD performance.
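    For readers who want to check this on their own gear, a sketch of how to see from the vSphere side whether clone offload is available at all (hypothetical host name; on NFS datastores such as Nutanix presents, the offload comes from the vendor’s VAAI-NAS plugin rather than the block primitives):

    ```powershell
    # Sketch: is hardware-accelerated cloning even in play on this host?
    $vmhost = Get-VMHost -Name "esxi01.lab.local"

    # Block-storage VAAI data-mover primitives (XCOPY / Write Same) are governed by these settings.
    Get-AdvancedSetting -Entity $vmhost -Name "DataMover.HardwareAcceleratedMove","DataMover.HardwareAcceleratedInit"

    # NFS clone offload requires the storage vendor's VAAI-NAS plugin, installed as a VIB on the host.
    (Get-EsxCli -VMHost $vmhost).software.vib.list() |
        Where-Object { $_.Name -match "vaai|nas" } |
        Select-Object Name, Version
    ```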

  13. Lukas Lundell

    @Chris –

    You are absolutely right. VMware in this case is copying all the data around, while Nutanix is using VMware’s API to let the filesystem do smart things with metadata to make the copies instead of unnecessarily moving GBs of data.

    But this is a pretty normal thing for a customer to want to do, so I don’t think it’s an unfair test. This video came from a prospect doing a bake-off, and they wanted to determine how fast they could simply clone a bunch of VMs from a template in vCenter.

    It obviously helps the customer if they can get their work done in a minute versus almost 2 hours. They can move on to building out servers or installing applications immediately.

    Linked clones are only a View Composer concept as Andre mentioned.

    @Chuck –

    We didn’t set up this test.

    This is the exact same operation in vCenter that the prospect performs, using the exact same image. They are doing this operation because it’s part of their normal job to clone VMs from a template. The customer wants to be able to clone their VMs without it taking hours or copying all of the VM data around. VAAI is a great VMware API that solves a real customer problem using software and filesystem intelligence.

    “As you know, the downside of deferring writes is that, depending on workload, those 50 VMs and their VMDKs may be awfully slow on startup. No free lunch when it comes to cloning, I’m afraid.”

    This doesn’t make any sense. We aren’t deferring writes; we will never do them. I am sure they could run a boot test as well and see that these VMs start up very fast. In fact, because we use snapshots/metadata to make these clones, we:
    1) Conserve storage space by not duplicating data unnecessarily (a different type of storage conservation than that achieved by thin provisioning).
    2) Reduce CPU utilization on the Controller VM (or, in your case, the ESXi kernel) by not writing a bunch of unnecessary data.
    3) Can cache the parent blocks for all of these VMs in memory and in the SSD cache, greatly improving read performance and cache efficiency, compared to the EVO:RAIL case where you have to cache unique blocks for every VM even though they hold the exact same data, cloned 50 times.

    So with 50 VMs and no data churn, the storage cache could potentially be 50x more efficient on startup. Why cache the same Windows startup files/executables/data 50 times when you can cache them once and all 50 VMs can benefit from that data for reads, since their metadata maps point to the same blocks? Metadata-based cloning improves overall read performance, CPU utilization, and cache efficiency. That is why we implemented it instead of doing full copies.

    Ever done a test of booting 50 VMware View Linked Clones vs. 50 VMware View Full Clones for VDI? I can save you the trouble: Full Clones take much longer to boot and put more load on the storage system. With Linked Clones we can cache the parent image (and even do something really cool and unique to our system called shadow clones, which increases boot performance even more). With Linked Clones, boot-up is faster and there is less load on the storage. Same concept. It’s even slightly better with VCAI, using array cloning for the Linked Clones. So stating that our use of metadata will somehow cause slow performance on boot-up because we are “deferring” the writes is not accurate. The opposite is true.

    Lukas Lundell
    Nutanix, Global Director of Solutions and Performance

  14. Josh Sinclair

    VMware and Nutanix customer here. We do desktop and server virt. We have a test/dev environment and clone servers all the time. I can’t think of one use case where we would ever want to have full clones. It doesn’t make any sense. Also the race analogy above doesn’t make any sense either. As a customer I have requirements that I need my vendors to meet. If my requirement is get to the finish line, why would I care how the vendor does it? I want them to do it in the quickest most efficient manner possible because that reduces cost for me. I need them to think outside the box. That’s called innovation and as a customer that is what I’m looking for to make my day easier. If I need to clone an environment to test a change and one vendor can do it in a few seconds and another takes an hour to do it… guess which vendor wins our $$?

  15. Dwayne Lessner (@dlink7)

    Pick a different VMware product: Horizon Air / DaaS. Air doesn’t even have View Composer.

    If people are serious about testing, I would encourage everyone looking at hyper-converged products to POC the minimum allowed configuration, build out your environment, run performance tests, try to destroy your environment, and then run the performance tests again. Everything is gravy when it’s brand new, but things fail, and then how does it perform?

    Yet another Nutanix Employee.

  16. kjolivier

    This reminds me of the time Jimmy Johnson got chastised for running up the score. From some of these comments you would think Nutanix wrote VAAI and kept it all to ourselves!

    Proud to work for Nutanix!

  17. Josh Odgers (VCDX#90)

    Sitting back enjoying the popcorn.

    @Chuck – I have saved every blog you have written about VAAI, so please, please say anything negative about it and I will quote your blog where you do nothing but talk up how great EMC is for supporting it.

  18. forbsy

    I’d have to agree that it’s always better to compare apples to apples. I honestly don’t know many customers that look to deploy a mass amount of server VMs. It’s pretty much always desktop virtualization deployments that demand that, and therefore linked clones are what they use. There are also storage vendors that provide VAAI plugins so you can get the SAN/NAS capability of deploying many VMs, with the space-utilization bonus that the storage array provides (e.g. deploying VMs on NetApp NFS via the NFS plugin).
    So, comparing linked clone deployment to the Nutanix VAAI deployment would have been more fair in my eyes. In any case, VVOLs now make this all a moot point: if the array supports VAAI, you’ll be able to deploy space-efficient clones without needing View or other plugins.
