Nutanix will soon make the NOS 4.1.3 release available. This release brings major enhancements in storage optimization, disaster recovery and management. Let's go straight into the new features and improvements.
- Erasure Coding
Erasure coding offers significantly higher reliability than data replication at much lower storage overhead. Erasure coding has traditionally been implemented using RAID groups on disks; however, those are commonly bottlenecked by a single disk, constrained by disk geometry, and generally waste space on hot spares. Nutanix EC is a next-generation storage optimization applied across nodes instead of disks, improving availability with faster rebuilds and utilizing the entire cluster through MapReduce processes to compute block parities. Read more in my article New Nutanix Erasure Coding & How it works?
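To illustrate the idea behind parity, here is a minimal single-parity XOR sketch (not Nutanix code, which uses more general parity math): the parity block is the byte-wise XOR of the data blocks, so any one lost block can be rebuilt from the survivors.

```python
from functools import reduce

def xor_parity(blocks):
    # Parity block = byte-wise XOR of all equal-sized data blocks
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

def rebuild(survivors, parity):
    # XOR-ing the parity with the surviving blocks recovers the lost one
    return xor_parity(survivors + [parity])

# Three data blocks, as if each lived on a different node
data = [b"node-A..", b"node-B..", b"node-C.."]
parity = xor_parity(data)

# Simulate losing node B and rebuilding its block from the survivors
recovered = rebuild([data[0], data[2]], parity)
assert recovered == data[1]
```

The point of doing this across nodes rather than inside a RAID group is that every node can participate in the rebuild, instead of a single disk becoming the bottleneck.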
- Sync Replication
Sync Replication is used for planned and unplanned events that require no data loss (RPO = 0), as little downtime as possible (~0 RTO) and non-disruptive site migrations. These may include site outages, natural disasters, hardware maintenance, workload migration and disaster avoidance scenarios.
The major difference between Metro Availability and Sync Replication is that organizations do not require a stretched vSphere cluster between two or more sites, and therefore no layer-2 extensions or overlays.
With Synchronous DR the migration workflow can be scripted and customizations may be added to the flow; with Metro Availability, by contrast, we would expect heavy automation with very little workflow customization, such as changing IP addresses.
Finally, Sync Replication will work with vSphere and Hyper-V, followed by Acropolis in a subsequent release. Metro Availability is only supported with vSphere today.
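Conceptually, RPO = 0 comes from acknowledging a write only after both sites have committed it. A minimal sketch with hypothetical names (not the NOS implementation):

```python
def write_sync(local_site, remote_site, key, value):
    # Synchronous replication: commit locally AND remotely before acking,
    # so any acknowledged write survives the loss of either site (RPO = 0)
    local_site[key] = value
    remote_site[key] = value  # remote commit happens inside the write path
    return "ack"

local, remote = {}, {}
write_sync(local, remote, "vdisk-block-42", b"payload")
assert local == remote  # every acknowledged write exists at both sites
```

The trade-off is write latency: every write pays the inter-site round trip, which is why synchronous replication is generally limited to metro-class distances.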
I’ll soon write more about Sync Replication.
- In-guest iSCSI Support
Nutanix now offers the ability to natively mount Nutanix vDisks into Windows guests via the Windows iSCSI adapter. In VMware terms, this would be like having the ability to mount a VMDK directly via the Windows iSCSI adapter.
The use cases center on applications that don't support NFS, such as Microsoft Exchange or Windows Guest Failover Clustering.
- ToR Visibility
Host networking is already part of the Nutanix architecture and is managed by Acropolis (the new control plane); however, Nutanix is now extending networking management and monitoring to the entire network stack in your datacenter. ToR integration with Nutanix offers auto-discovery using LLDP and SNMP, and focuses on providing visibility into the networking stack via the Prism interface. Furthermore, Nutanix announced OpenDaylight integration over the next couple of releases. Read more in my article New Nutanix ToR Switch and OpenDaylight Integration.
- Acropolis Image Service and HA
Pre-4.1.3, users of the Acropolis hypervisor had no single command to download and convert disk and ISO images to a Nutanix cluster, nor a way to manage a library of existing images. NOS 4.1.3 introduces the ability to upload disks and ISOs from external HTTP/NFS servers while providing an integrated method to dynamically convert disk images from various formats to raw flat disks on NDFS.
Pre-4.1.3, VMs running on the Acropolis hypervisor did not have automatic HA to protect against host failures. Nutanix HA automatically restarts VMs on another host when a host fails, and may be configured with a "Reserved Capacity" or "Best Effort" policy. The protection is set on a per-VM basis, allowing the administrator to choose how best to implement it.
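The difference between the two policies is essentially admission control. A toy model (hypothetical, not the Acropolis scheduler): with "Reserved Capacity" the cluster keeps enough headroom set aside that every protected VM is guaranteed to restart, while "Best Effort" restarts VMs only as long as free capacity remains.

```python
def restart_failed_vms(failed_vms, hosts, policy):
    # hosts: {host_name: free_capacity}; each VM needs vm["mem"] units
    placed, skipped = [], []
    for vm in failed_vms:
        host = max(hosts, key=hosts.get)  # surviving host with most headroom
        if hosts[host] >= vm["mem"]:
            hosts[host] -= vm["mem"]
            placed.append((vm["name"], host))
        elif policy == "best_effort":
            skipped.append(vm["name"])  # no room left: best effort gives up
        else:
            # with an honored reservation this branch should be unreachable
            raise RuntimeError("reserved capacity exhausted")
    return placed, skipped

hosts = {"host-1": 8, "host-2": 4}
vms = [{"name": "vm1", "mem": 4},
       {"name": "vm2", "mem": 4},
       {"name": "vm3", "mem": 8}]
placed, skipped = restart_failed_vms(vms, hosts, "best_effort")
assert skipped == ["vm3"]  # no surviving host has 8 units free
```

Reserving capacity up front trades usable capacity during normal operation for a restart guarantee after a failure; best effort makes the opposite trade.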
This article was first published by Andre Leibovici (@andreleibovici) at myvirtualcloud.net.