
Nutanix Metro Availability Operations – Part 1

This is my first operations article on Nutanix Metro Availability. Whilst the feature is not yet GA and only arrives with NOS 4.1, I would like to demonstrate how some of the basic operations are expected to work in production.

If you missed the announcement of NOS 4.1, please refer to All Nutanix 4.1 Features Overview in One Page (Beyond Marketing).

When using Nutanix Metro Availability with VMware stretched clusters, Nutanix handles the container stretch and the replication. By default, the container on one side (site) is the primary point of service, and the other side (site) is the secondary, synchronously receiving a copy of every data block written at the primary site. Since this is done at the container level, it is possible to have multiple containers and datastores, and the direction of replication can simply be defined per container.
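
To make the per-container model concrete, here is a small, purely illustrative Python sketch (not a Nutanix API; the container and site names are hypothetical) showing two stretched containers replicating in opposite directions:

```python
# Purely illustrative (not a Nutanix API): replication is configured per
# container, so each stretched container carries its own primary site and
# replication direction. Container and site names are hypothetical.
from dataclasses import dataclass

@dataclass
class StretchContainer:
    name: str            # container / datastore name
    primary_site: str    # site currently serving IO for this container
    secondary_site: str  # site synchronously receiving every written block

containers = [
    StretchContainer("blue-ctr", primary_site="Site 1", secondary_site="Site 2"),
    StretchContainer("green-ctr", primary_site="Site 2", secondary_site="Site 1"),
]

for c in containers:
    print(f"{c.name}: served at {c.primary_site}, mirrored synchronously to {c.secondary_site}")
```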

Nutanix Metro Availability supports heterogeneous deployments and does not require identical platforms and hardware configurations at each site.

The typical Metro Availability setup involves two clusters, one at each site. The clusters communicate over the network with a maximum round-trip (RTT) latency of 5ms. Metro Availability is enabled and subsequently managed from within Prism or nCLI on a per-container basis.
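
Before enabling Metro Availability it is worth verifying that the inter-site link actually meets the 5ms guideline. The sketch below is not part of the product; it simply shells out to the operating system's ping command (Linux-style output assumed) against a hypothetical remote CVM address and compares the average RTT against the limit:

```python
# Rough pre-deployment RTT sanity check between the two sites.
# Assumes Linux-style ping output; the peer address is hypothetical.
import re
import subprocess

MAX_RTT_MS = 5.0

def average_rtt_ms(host: str, count: int = 10) -> float:
    out = subprocess.run(
        ["ping", "-c", str(count), host],
        capture_output=True, text=True, check=True
    ).stdout
    # Linux ping summary: rtt min/avg/max/mdev = 0.334/0.492/0.845/0.142 ms
    match = re.search(r"= [\d.]+/([\d.]+)/", out)
    if not match:
        raise RuntimeError("could not parse ping output")
    return float(match.group(1))

if __name__ == "__main__":
    rtt = average_rtt_ms("cvm.site2.example.com")   # hypothetical remote CVM address
    status = "OK" if rtt <= MAX_RTT_MS else "too high for Metro Availability"
    print(f"average RTT {rtt:.2f} ms -> {status}")
```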

Ample bandwidth and redundant physical networks are recommended to minimize the probability of transient network glitches. There are no Metro Availability imposed limits, so administrators must consider the available network bandwidth between the two sites. Depending on the duration of a glitch, IO will stall until either the glitch is resolved or Metro Availability is disabled on the container. Also, depending on the mode of operation, disabling Metro Availability can be manual or automatic.
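
Because replication is synchronous, a reasonable starting point for sizing the link is the peak sustained write throughput of all stretched containers plus some headroom. The numbers below are illustrative assumptions, not Nutanix guidance:

```python
# Back-of-the-envelope link sizing: the inter-site link must sustain at least
# the peak write throughput of all stretched containers, plus headroom.
peak_write_mbps = 180          # assumed peak sustained writes across stretched containers, in MB/s
headroom = 1.3                 # assumed 30% headroom for bursts and retransmits

required_mbit_per_s = peak_write_mbps * 8 * headroom
print(f"Minimum inter-site bandwidth: ~{required_mbit_per_s:.0f} Mbit/s")
# -> Minimum inter-site bandwidth: ~1872 Mbit/s (roughly a 2 Gbit/s link in this example)
```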

Metro Availability works in conjunction with all NOS data management features such as compression, de-duplication, shadow clones, automated tiering and others. Metro Availability also offers compression over the wire.

The active and standby clusters can have different storage policies per site for the same stretched container, and once setup is complete, the rest of the workflow can be completely automated if the administrator desires. This includes detection of failure, automatic failover to the secondary site and failing back to the original site. Regardless of the automation, Prism provides the ability to fail over and recover with just a few clicks.

 

Datacenter Failure Recovery – Operation

In the example below I have two sites, each replicating a distinct container to the other. When a Nutanix remote cluster is unreachable for a defined period of time (user-defined via gflags), a datacenter failure is flagged (1). In this case the shipping of IOs from Site 2 to Site 1 is immediately paused given the unavailability, and Metro Availability is automatically disabled. Depending on the configured time-out window, there may be no impact to the virtual machines on the surviving cluster while IOs are paused.

At this point the Metro Availability peering is automatically broken (2) so that each cluster can operate independently. Once the peering is broken, IOs for the green container (Site 2) resume and the virtual machines in Site 2 respond normally again, while IOs on the impacted site will time out even if the cluster at the failed site is still operational.

The next step is to promote the blue container (3-4), which is the primary container from Site 1. The container promotion tells the Nutanix cluster in Site 2 that you would like to run the virtual machines from Site 1 on the cluster at Site 2. Once that is done, the virtual machines automatically restart on the surviving cluster (5) and operations resume. The promotion step is a one-click manual procedure, but it can also be fully automated with some basic scripting or run-book automation tools.

 

[Figures (1)–(5): datacenter failure flagged, Metro Availability peering broken, container promotion on the surviving site, and virtual machines restarting, as described in the steps above.]
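
The promotion and restart workflow above lends itself to the kind of basic scripting mentioned earlier. The skeleton below is only a sketch: the three helper functions are hypothetical placeholders to be wired to whatever interface you actually use (Prism, nCLI, vCenter APIs), not real Nutanix calls, and the time-out value is an assumption rather than a Nutanix default:

```python
# Run-book skeleton for the failover described above. All helpers are
# HYPOTHETICAL placeholders; none of these are real Nutanix nCLI/REST calls.
import time

UNREACHABLE_TIMEOUT_S = 30   # assumed failure-detection window (the article's gflag-defined period)

def site_is_reachable(site: str) -> bool:
    """Placeholder: e.g. a ping or API health check against the remote cluster."""
    raise NotImplementedError

def promote_container(container: str, on_site: str) -> None:
    """Placeholder for steps (3)-(4): promote the standby copy on the surviving site."""
    raise NotImplementedError

def restart_vms(container: str, on_site: str) -> None:
    """Placeholder for step (5): power on / let HA restart the affected VMs."""
    raise NotImplementedError

def failover_if_site_down(failed_site: str, surviving_site: str, container: str) -> None:
    deadline = time.time() + UNREACHABLE_TIMEOUT_S
    while time.time() < deadline:
        if site_is_reachable(failed_site):
            return                      # transient glitch, nothing to do
        time.sleep(5)
    # By now steps (1)-(2) have occurred on the surviving cluster: the failure
    # is flagged and Metro Availability is automatically disabled/peering broken.
    promote_container(container, on_site=surviving_site)    # steps (3)-(4)
    restart_vms(container, on_site=surviving_site)           # step (5)

# Example invocation mirroring the article's scenario:
# failover_if_site_down("Site 1", "Site 2", "blue-ctr")
```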

 

In Part 2 I will explain the operational procedure to fail back to Site 1 and resume normal operations.

* Acknowledgements to Tim Issacs, Nutanix Product Manager, for putting much of this content together, including the pictures.

 

Part 2 of this article can be found here.

 

This article was first published by Andre Leibovici (@andreleibovici) at myvirtualcloud.net.
