Oct 09 2014

Nutanix 4.1 Features Overview (Beyond Marketing) – Part 3


A couple of weeks ago I published the first and second parts of a multi-month announcement for NOS 4.1; this is the third part. If you missed the first two parts you can read them at Nutanix 4.1 Features Overview (Beyond Marketing) – Part 1 and Nutanix 4.1 Features Overview (Beyond Marketing) – Part 2.

NOS 4.1 delivers important features and improvements in the areas of resiliency, security, disaster recovery, analytics, supportability and management. The first article covered the new Cloud Connect feature for cloud backup; the NX-8150 platform for heavy OLTP and OLAP/DSS workloads such as Exchange and Oracle databases; Data-at-Rest Encryption for secure environments that require compliance, which I complemented with New Nutanix Data-at-Rest Encryption (How it works), an article describing how it works; and finally the One-Click Hypervisor and Firmware Upgrade.

In the second part I focused on smaller improvements that arrived in the NOS releases between 4.0 and 4.1 (4.0.1 and 4.0.2), improving performance, system usability and user experience. That included Volume Shadow Copy Service (VSS) support for Hyper-V hosts, multi-cluster management and a simplified drive replacement procedure, amongst others.

This third part focuses on what I consider perhaps the biggest milestones of this release: the Nutanix NX-9240 All-Flash appliance and the Metro Availability feature.



  • NX-9240 (All Flash)

The Nutanix scale-out solution already handles the large majority of workloads using existing NX models. For Tier 1 business-critical workloads with very large active datasets or write-intensive IO that require additional performance, Nutanix introduced the NX-8150 as part of the NOS 4.1 release; it can be mixed into existing Nutanix clusters.

The new NX-9240 appliance is built to run applications with very large working sets, such as databases supporting online transaction processing (OLTP), that not only require exceptionally fast storage performance but also demand the predictable and consistent I/O latency that flash can deliver. The NX-9240 is 100% all-flash storage, offering ~20TB raw capacity per 2U.


Flash capacity is optimized using Nutanix’s scale-out compression and de-duplication technologies that leverage unused compute resources across all nodes in the cluster, avoiding performance bottlenecks.

Unlike other solutions, this is a true scale-out all-flash storage platform where storage capacity and performance are augmented simply by adding nodes, one at a time, non-disruptively, for linear scalability with no maximum limit.


In this first release (NOS 4.1) the NX-9240 All-Flash nodes cannot be mixed with other node types because, being all-flash, they have no concept of automated tiering. Therefore a new cluster must be created with NX-9240 nodes only; however, all other NOS capabilities such as disaster recovery, backup and even the new Metro Availability can be used between different clusters. A future release of NOS will allow mixing all-flash and hybrid nodes.



  • Metro Availability

Over the last couple of years Nutanix has introduced many availability and resiliency features to the platform. Today Nutanix has built-in self-healing capabilities, node resilience and tunable redundancy features, virtual machine-centric backup and replication, automated backup to the cloud, and many other features vital for running enterprise workloads.

However, business-critical applications demand continuous data availability. This means that access to applications and data must be preserved even during a datacenter outage or a planned maintenance event. Many IT teams use metro area networks to maintain connectivity between datacenters, so that if one site goes down the other location can run all applications and services with minimal disruption. Keeping the applications running, however, requires immediate access to all data.


The new Nutanix Metro Availability feature stretches datastores and containers for virtual machine clusters across two or more sites located up to 400km apart. The synchronous data replication is natively integrated into Nutanix, requiring no hardware changes. During replication Nutanix uses advanced compression technologies for efficient network communication between datacenters, saving bandwidth and speeding data management.

For existing Nutanix customers it is good to know that Metro Availability builds on the same data protection group concepts that already exist in PRISM for backup and replication across Nutanix clusters, now adding a synchronous replication option. Administrators are also able to monitor and manage cluster peering, promote containers and break peering.


By default, the container on one site is the primary point of service, while the container on the other site is the secondary and synchronously receives a copy of every data block written at the primary site. Since this is done at the container level, it is possible to have multiple containers and datastores, and the direction of replication can be defined per container.
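The zero-RPO property described above follows directly from how synchronous replication works: a write is acknowledged to the virtual machine only after both sites have persisted it, so a promoted secondary already holds every acknowledged block. Here is a minimal toy sketch of that idea (illustrative only — the `Site` and `Container` classes are my own invention, not Nutanix internals):

```python
# Toy model of per-container synchronous replication. Not NOS code --
# just the core write-path idea behind a zero recovery point objective.

class Site:
    """A site holding a copy of a container's data blocks."""
    def __init__(self, name):
        self.name = name
        self.blocks = {}          # block offset -> data

    def persist(self, offset, data):
        self.blocks[offset] = data
        return True               # ack once durably written

class Container:
    """A replicated container with a primary and a secondary site."""
    def __init__(self, primary, secondary):
        self.primary = primary
        self.secondary = secondary

    def write(self, offset, data):
        # The write completes only when BOTH sites confirm it -- this is
        # what makes the replication synchronous and the RPO zero.
        ok_local = self.primary.persist(offset, data)
        ok_remote = self.secondary.persist(offset, data)
        return ok_local and ok_remote

    def promote_secondary(self):
        # On a site failure the secondary is promoted; it already holds
        # every acknowledged write, so no data is lost.
        self.primary, self.secondary = self.secondary, self.primary

ctr = Container(Site("site-A"), Site("site-B"))
ctr.write(0, b"block-0")
ctr.promote_secondary()
assert ctr.primary.blocks[0] == b"block-0"   # promoted site has the data
```

Because replication direction is a per-container property, running two containers replicating in opposite directions gives each site an active role.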

Nutanix Metro Availability supports heterogeneous deployments and does not require identical platforms and hardware configurations at each site. Virtualization teams can now non-disruptively migrate virtual machines between sites during planned maintenance events, providing continuous data protection with a zero recovery point objective (RPO) and a near-zero recovery time objective (RTO).

The requirements to enable Metro Availability are simple: enough bandwidth to handle the data change rate and a round-trip time of <=5ms. A redundant network link is also highly recommended.


  • Network
    • <=5ms RTT
    • Bandwidth depends on ‘data change rate’
    • Recommended: redundant physical networks between sites
  • Infrastructure
    • 2 Nutanix clusters, one on each site
    • Mixing hardware models allowed
  • Hypervisor
    • ESXi (other hypervisors soon)

What I like most about the Nutanix platform is that, using One-Click NOS, Hypervisor and Firmware Upgrade, customers will be able to start using the new feature as soon as it is available. This is the power of the true software-defined datacenter.

Over the next few articles I will take a deeper dive into the stretched cluster technology, discussing deployment and failure scenarios, site consistency options, recovery options, VM availability, etc.






  • System Center Operations Manager and System Center Virtual Machine Manager

Nutanix now offers single-pane-of-glass management in Microsoft environments with full support for SNIA's Storage Management Initiative Specification (SMI-S).

SMI-S defines a method for the interoperable management of a heterogeneous Storage Area Network (SAN). It describes the information available to a WBEM client from an SMI-S compliant CIM server, along with an object-oriented, XML-based, messaging-based interface designed to support the specific requirements of managing devices in and through SANs. Here is a good introduction to SMI-S by the Microsoft team.

The integration allows Microsoft administrators to monitor the performance and health of Nutanix software objects such as clusters, storage containers and Controller VMs via SCVMM, and to monitor Nutanix hardware objects such as server nodes, fans and power supplies via SCOM.

Here are some screenshot examples; over time I will write more about each individual management pack.


Nutanix Cluster – Containers View (Click to Enlarge)

Nutanix Cluster – Performance >> Clusters (Click to Enlarge)

Nutanix Cluster – Disk View (Click to Enlarge)


With this post I am almost done with this series of articles on the new capabilities in NOS 4.1. Well, almost… there are still a couple of things that I will discuss later, but for now I will start to deep-dive a little more into all the new goodness and innovation Nutanix is delivering with this release.


This article was first published by Andre Leibovici (@andreleibovici) at myvirtualcloud.net.

Permanent link to this article: http://myvirtualcloud.net/?p=6665

Oct 02 2014

How to Fingerprint existing VMDK/VHD/RAW for De-duplication on Nutanix

In my article How to enable Nutanix De-duplication per VMDK/VHD/RAW I explained how to enable or disable de-duplication and fingerprinting-on-write for individual VMDK/VHD or RAW vdisks via nCLI.

Once de-duplication and fingerprinting-on-write are both enabled for a given vdisk, NOS processes every new write with the US Secure Hash Algorithm 1 (SHA-1), using the native SHA-1 optimizations available on Intel processors. The resulting hashes are then used for post-process de-duplication executed by the Curator background process. However, I omitted an important piece of information about this process.

Because only new writes are fingerprinted, previously written data blocks in an existing vdisk carry no fingerprints and therefore will not be considered for post-process de-duplication.

In order to fingerprint existing vdisk data it is necessary to use a new command-line utility called vdisk_manipulator. It lets you fingerprint entire disks with existing data, for example after a Nutanix NOS 4.x upgrade or after enabling de-duplication for the first time on a container.


To fingerprint an entire vdisk:

% vdisk_manipulator --vdisk_name=NFS:90967668 --operation=add_fingerprints

(click to enlarge)

Note: The process may take a while depending on the disk size, because NOS will need to map and hash all vdisk data blocks.


To fingerprint a portion of a vdisk, for example the first 10GB:

% vdisk_manipulator --vdisk_name=NFS:90967668 --operation=add_fingerprints --end_offset_mb=10240


Fingerprinting only a portion of the disk is useful in cases where a vdisk contains the system OS (Windows, Linux, etc.) and at the same time holds a large amount of non-dedupable data, such as videos, application-level de-duplicated data (Exchange, Zip files) or transactional databases. Manually fingerprinting data generates a large amount of metadata that over time may demand extra RAM for the Cassandra distributed database, so be thoughtful about fingerprinting vdisks unnecessarily. In cases where de-duplication is not a good fit, it is suggested to enable compression on the container instead.
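The mechanics above — fingerprint-on-write, back-filling fingerprints for pre-existing data, and limiting the back-fill to an offset range — can be sketched in a small toy model. This is illustrative only (the `VDisk` class and `dedup_candidates` helper are my own simplifications, not NOS internals), but it shows why a duplicate block is invisible to the post-process pass until it has a fingerprint:

```python
# Toy model of fingerprint-on-write and post-process de-duplication.
import hashlib

BLOCK = 4096  # assume a fixed fingerprinting granularity for illustration

class VDisk:
    def __init__(self):
        self.blocks = {}        # offset -> data
        self.fingerprints = {}  # offset -> SHA-1 digest
        self.fingerprint_on_write = False

    def write(self, offset, data):
        self.blocks[offset] = data
        if self.fingerprint_on_write:
            self.fingerprints[offset] = hashlib.sha1(data).hexdigest()

    def add_fingerprints(self, end_offset=None):
        """Back-fill fingerprints for existing data, optionally only up to
        end_offset -- the role vdisk_manipulator plays (with --end_offset_mb)."""
        for off, data in self.blocks.items():
            if end_offset is None or off < end_offset:
                self.fingerprints[off] = hashlib.sha1(data).hexdigest()

def dedup_candidates(vdisks):
    """Post-process pass: group block locations whose fingerprints match."""
    seen = {}
    for vd in vdisks:
        for off, fp in vd.fingerprints.items():
            seen.setdefault(fp, []).append((vd, off))
    return [locs for locs in seen.values() if len(locs) > 1]

vd = VDisk()
vd.write(0, b"A" * BLOCK)            # written BEFORE enabling: no fingerprint
vd.fingerprint_on_write = True
vd.write(BLOCK, b"A" * BLOCK)        # identical data, but only this write is hashed
assert dedup_candidates([vd]) == []  # the duplicate is invisible to the pass
vd.add_fingerprints()                # back-fill, as vdisk_manipulator would
assert len(dedup_candidates([vd])) == 1
```

Note how the back-fill is what makes the old block eligible for de-duplication — and also why each back-filled block adds metadata, which is the RAM consideration mentioned above.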

The vdisk_manipulator also has additional options to delete fingerprints and to compress/decompress vdisks.

Please note that if you enabled post-process de-duplication at the container level when you first created the container, all data in every vdisk is automatically fingerprinted-on-write and de-duplicated.

As you can see, the functional implementation of fingerprint-on-write and de-duplication is per vdisk, but in Nutanix PRISM it has been exposed as a container-level feature for ease of use and simplicity.


If you want to learn more about on-disk de-duplication I suggest Nutanix 4.0 Hybrid On-Disk De-Duplication Explained or the Nutanix Bible.


This article was first published by Andre Leibovici (@andreleibovici) at myvirtualcloud.net

Permanent link to this article: http://myvirtualcloud.net/?p=6659

Sep 30 2014

Web-Scale IT: Are Enterprises Ready? Free webinar tomorrow.

I am honored to be part of a debate on Web-Scale IT promoted by GigaOm Research, along with Mike Karp, Vice President & Principal Analyst, Ptak, Noel & Associates, and David S. Linthicum, SVP, Cloud Technology Partners. The panel is happening tomorrow, Wednesday, October 1, 2014 (sorry for the late notice) at 10:00 a.m. PDT.

By some estimates, within the next five years web-scale IT will be present in more than half of the world’s enterprises. Web-scale IT has enabled companies like Google, Facebook and Amazon to achieve unprecedented scalability, agility and simplicity. Can enterprises learn from the way some of the largest and most successful companies buy, deploy and operate their infrastructure?

This webinar will help enterprises understand the fundamentals of web-scale IT and how other companies are using web-scale principles to build private cloud-like environments.


What Will Be Discussed:

  • What is web-scale IT, where did it come from, and what does it offer?
  • What are the shortcomings of existing IT environments and how can web-scale IT address them?
  • What are the design goals, principles and characteristics of web-scale IT?
  • How does web-scale IT relate to other trends, like software-defined data centers, scale-out computing, hybrid cloud, DevOps and hyperconvergence?
  • How can IT infrastructure incorporate web-scale IT to meet the demands of cloud-like operation and delivery?

Who should attend:

  • CIOs
  • IT decision makers
  • Business strategists and decision makers
  • Cloud platform providers
  • Service provider executives
  • Data managers/developers
  • Enterprise software and technology vendors


Click here for registration and join us!


This article was first published by Andre Leibovici (@andreleibovici) at myvirtualcloud.net

Permanent link to this article: http://myvirtualcloud.net/?p=6656
