Oct 06 2015

Prism Central and NOS 4.5, NCC 2.1, Foundation 3.0 and MORE Tech Preview Features – What’s New!

This week Nutanix is making the Prism Central 4.5, Acropolis base software 4.5, and NCC 2.1 releases available. These are major releases with improvements in data protection, performance, security, and reliability, as well as some new Tech Preview features. This release is the first step toward the vision Nutanix laid out at the .NEXT conference back in June.

Prism is responsible for delivering convergence of storage, compute, and virtualization resources into a unified system that provides an end-to-end view of all workflows – something difficult to achieve with legacy three-tier architectures. Acropolis builds on the core capabilities of the Nutanix hyperconverged platform and includes all the low-level operations in the Distributed Storage Fabric, as well as the App Mobility Fabric, which allows workload mobility between hypervisors and clouds.

Some important enhancements to the Acropolis Hypervisor are also included in this release. Read more about the Acropolis Hypervisor here. Also part of this release is the Tech Preview of File Level Restore from backups/snapshots.



In typical Nutanix fashion, these updates arrive only weeks after the last release, delivered via the non-disruptive upgrade process that customers are already used to and love. Let’s go straight into the features.


What’s New in 4.5



  1. VM High Availability in AHV
  2. Cross Hypervisor VM Mobility
  3. Default Container and Storage Pool Upon Cluster Creation
  4. CommVault IntelliSnap Integration
  5. Cloud Connect for Azure
  6. Hyper-V Configuration through Nutanix Web Console
  7. Network Mapping for DR
  8. Bandwidth Limit on Replication Schedule
  9. VM Flash Mode
  10. Support for Minor Release Upgrades for ESXi Hosts
  11. Block Fault Tolerance Enhancement
  12. Erasure Coding
  13. Windows Guest VM Failover Cluster
  14. Acropolis Volume Management
  15. Self-Service File Level Restore [Tech Preview]
  16. Foundation 3.0
  17. Prism Central for Acropolis Hypervisor


Acropolis Base Software

  • VM High Availability in AHV
    Virtualization management ensures that at most one instance of a VM is running at any point during a failover. VM high availability may implement admission control to ensure that, in case of a node failure, the rest of the cluster has enough resources to accommodate the VMs. There have been some changes to Acropolis HA since the Tech Preview, and configuration complexity is now completely abstracted and hidden from users. HA auto-restarts VMs in a priority-defined order after a host failure to reduce downtime for critical applications. High availability is on in best-effort mode by default, and Acropolis will enforce N+1 to reserve resources.
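
To make the admission-control idea concrete, below is a minimal, hypothetical sketch of an N+1 check: it verifies that the VMs of any single failed host would fit into the spare memory of the surviving hosts. This only illustrates the concept; it is not Nutanix's actual scheduler logic, and all names and numbers are invented.

```python
# Illustrative N+1 admission-control check (hypothetical, not Nutanix code).
# The cluster tolerates one host failure if, for every host, the memory of
# its VMs can be packed into the spare capacity of the remaining hosts.

def tolerates_single_host_failure(hosts):
    """hosts: list of dicts with 'capacity_gb' and 'vms_gb' (per-VM memory)."""
    for failed in hosts:
        survivors = [h for h in hosts if h is not failed]
        spare = [h["capacity_gb"] - sum(h["vms_gb"]) for h in survivors]
        # Place the biggest VMs first, each into the roomiest surviving host.
        for vm in sorted(failed["vms_gb"], reverse=True):
            spare.sort(reverse=True)
            if not spare or spare[0] < vm:
                return False
            spare[0] -= vm
    return True

cluster = [
    {"capacity_gb": 256, "vms_gb": [64, 32, 32]},
    {"capacity_gb": 256, "vms_gb": [64, 64]},
    {"capacity_gb": 256, "vms_gb": [32, 32, 16]},
]
print(tolerates_single_host_failure(cluster))  # True: any one host may fail
```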




  • Cross Hypervisor VM Mobility
    Cross hypervisor VM mobility provides an option to automatically export existing VMs from vSphere clusters to an Acropolis Hypervisor (AHV) cluster. For a VM to be “cross-hypervisor” DR capable, administrators need to install the Nutanix Guest Tools, which “prep” the source VM by installing the drivers required at the destination.


  • Default Container and Storage Pool Upon Cluster Creation
    When using the Acropolis hypervisor, the administrator shouldn’t have to care about storage: they should go from creating the cluster to creating VMs with no intermediate steps. As of this release, when administrators create a new cluster, the Acropolis base software automatically creates a container and a storage pool.


  • CommVault IntelliSnap Integration
    Cluster-level API integration with CommVault allows the backup tool to directly manage the storage tier for snapshots and backups. This gives Nutanix customers the ability to have the entire datacenter’s backups managed by a single tool, across both Nutanix and non-Nutanix deployments using CommVault.


  • Cloud Connect for Azure
    The cloud connect feature for Azure enables you to back up and restore copies of virtual machines and files between an on-premises cluster and a Nutanix Controller VM located in the Microsoft Azure cloud, in addition to AWS. Once configured through the web console, the remote site cluster is managed and monitored through the Data Protection dashboard like any other remote site you have created and configured.



  • Hyper-V Configuration through Nutanix Web Console
    After running Foundation on a cluster with Hyper-V, administrators can use Nutanix PRISM to join the hosts to the domain, create the Hyper-V failover cluster, and enable Kerberos. In AOS (previously named NOS) 4.5, we’ve significantly improved the Hyper-V setup process. This video goes over the post-installation Hyper-V configuration steps in 4.5. Thanks to Chris Brown for recording it.


  • Network Mapping for DR
    Network mapping allows users to control the network configuration for VMs when they are started on the remote site. With this feature you specify mappings between networks on the source cluster and the destination cluster. The remote site wizard provides an option to create one or more network mappings and lets you select the source and destination networks from drop-down lists. You can also modify or remove network mappings when modifying a remote site.
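
Conceptually, a network mapping is just a translation table consulted at failover time: each NIC of a recovered VM is rewired from its source network to the mapped destination network. A tiny illustration (all network names below are invented):

```python
# Illustrative source -> destination network mapping for DR (hypothetical names).
network_map = {
    "VLAN-Prod-10": "DR-VLAN-Prod-110",
    "VLAN-Dev-20": "DR-VLAN-Dev-120",
}

def remap_nics(vm_nics):
    """Return the VM's NICs rewired to the remote site's networks."""
    return [dict(nic, network=network_map[nic["network"]]) for nic in vm_nics]

nics = [{"mac": "50:6b:8d:11:22:33", "network": "VLAN-Prod-10"}]
print(remap_nics(nics))
# [{'mac': '50:6b:8d:11:22:33', 'network': 'DR-VLAN-Prod-110'}]
```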



  • Bandwidth Limit on Replication Schedule
    The bandwidth throttling policy provides administrators with an option to set a maximum limit on the network bandwidth consumed by backups and replication. The policy can be tailored to the usage patterns of your network.
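
The effect of such a policy can be sketched as a time-windowed cap on replication traffic; the windows and limits below are invented for illustration:

```python
# Illustrative bandwidth-throttling policy (hypothetical values):
# cap replication during business hours, open it up at night.
from datetime import time

POLICY = [
    (time(8, 0), time(18, 0), 50),     # business hours: 50 MB/s
    (time(18, 0), time(23, 59), 200),  # evening: 200 MB/s
]
UNTHROTTLED = None

def current_limit(now):
    """Return the MB/s cap in force at time 'now', or None for no cap."""
    for start, end, mbps in POLICY:
        if start <= now <= end:
            return mbps
    return UNTHROTTLED

print(current_limit(time(9, 30)))  # 50
print(current_limit(time(2, 0)))   # None (unthrottled overnight)
```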



  • VM Flash Mode
    VM Flash Mode enables administrators to set a storage tier preference for a particular virtual disk of a virtual machine that runs latency-sensitive, mission-critical applications. This new feature delivers SSD performance in a hybrid system. I wrote about VM Flash Mode here if you are interested in more details.
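
A simple way to picture the feature: a pinned virtual disk always resolves to the SSD tier, while unpinned disks can be down-migrated to HDD when their data goes cold. This is a hypothetical model of the behavior, not the actual DSF implementation:

```python
# Illustrative per-vdisk tier preference (hypothetical model).
vdisks = [
    {"name": "db-redo.vmdk", "flash_mode": True},   # latency-sensitive
    {"name": "archive.vmdk", "flash_mode": False},
]

def placement_tier(vdisk, is_hot):
    """Pinned disks stay on SSD; others follow normal hot/cold tiering."""
    if vdisk["flash_mode"]:
        return "SSD"
    return "SSD" if is_hot else "HDD"

for d in vdisks:
    print(d["name"], "->", placement_tier(d, is_hot=False))
# db-redo.vmdk -> SSD
# archive.vmdk -> HDD
```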




  • Support for Minor Release Upgrades for ESXi Hosts
    Acropolis base software 4.5 enables administrators to patch ESXi hosts with minor release versions of the ESXi host software through the Controller VM cluster command. Nutanix qualifies specific VMware updates and provides a related JSON metadata upgrade file for one-click upgrade, but customers can now also patch hosts using the offline bundle and md5sum checksum available from VMware, via the Controller VM cluster command. Note: Nutanix supports patch upgrades of ESXi hosts to minor versions that are greater than, or released after, the Nutanix-qualified version, but Nutanix might not have qualified those minor releases.
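
Since the upgrade relies on the offline bundle and its md5sum from VMware, it is worth verifying the download before kicking off the upgrade. A minimal check (the file name and checksum below are placeholders):

```python
# Verify a downloaded ESXi offline bundle against VMware's published md5sum.
import hashlib

def md5_of(path, chunk_size=1 << 20):
    digest = hashlib.md5()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(chunk_size), b""):
            digest.update(block)
    return digest.hexdigest()

bundle = "ESXi-offline-bundle.zip"        # placeholder file name
expected = "<md5sum from VMware's site>"  # placeholder checksum
if md5_of(bundle) != expected:
    raise SystemExit("Checksum mismatch - do not upgrade with this bundle")
```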


  • Block Fault Tolerance
    Referred to as block awareness in previous releases, block fault tolerance is the Nutanix cluster’s ability to make redundant copies of data and place those copies on nodes that are not in the same block (see the placement sketch after the list below). When certain conditions are met, Nutanix clusters become block fault tolerant. Block fault tolerance is applied automatically when:

    • Every storage tier in the cluster contains at least one drive on each block.
    • Every container in the cluster has replication factor of at least 2.
    • There are a minimum of three blocks in the cluster.
    • There is enough free space in each storage tier in at least “replication factor” many blocks. For example, if the replication factor of the containers in the cluster is 2, then at least two blocks must have free space.
    • Erasure coding is not enabled on any container.
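
The placement rule can be illustrated with a toy scheduler that never puts two replicas of the same piece of data in the same block. This is a hypothetical sketch, not the actual data-path code:

```python
# Illustrative block-aware replica placement (hypothetical, not Nutanix code).
import itertools

nodes = [
    {"id": "A1", "block": "block-A"},
    {"id": "A2", "block": "block-A"},
    {"id": "B1", "block": "block-B"},
    {"id": "C1", "block": "block-C"},
]

def place_replicas(nodes, rf):
    """Pick rf nodes that all sit in different blocks, or fail."""
    for combo in itertools.combinations(nodes, rf):
        if len({n["block"] for n in combo}) == rf:
            return [n["id"] for n in combo]
    raise RuntimeError(f"not enough distinct blocks for RF={rf}")

print(place_replicas(nodes, rf=2))  # e.g. ['A1', 'B1'] - never A1 + A2
```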


  • Erasure Coding [Removed from Tech Preview]
    Erasure coding (EC) is a method of data protection in which data is broken into fragments, expanded and encoded with redundant pieces, and stored across a set of different locations or storage media. Erasure coding is used extensively in data centers because it offers significantly higher reliability than simple replication at much lower storage overhead. Erasure coding is broadly applicable, but is especially relevant in large clusters with mission-critical data that would otherwise opt for RF3 resiliency. Read more here.
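
The storage-overhead argument is simple arithmetic: with RF2 every usable byte costs two stored bytes (100% overhead), while a 4+1 erasure-coded strip stores four data extents plus one parity extent (25% overhead). The toy example below shows single-parity XOR encoding and the rebuild of a lost fragment; actual Nutanix strip sizes and encoding depend on cluster configuration:

```python
# Toy single-parity erasure coding: parity = XOR of the data fragments.
data = [b"\x01\x02", b"\x10\x20", b"\x03\x30", b"\x04\x40"]  # 4 fragments

def xor(fragments):
    out = bytearray(len(fragments[0]))
    for frag in fragments:
        for i, byte in enumerate(frag):
            out[i] ^= byte
    return bytes(out)

parity = xor(data)
# Lose fragment 2, then rebuild it from the survivors plus the parity:
rebuilt = xor([data[0], data[1], data[3], parity])
assert rebuilt == data[2]
print("RF2 overhead: 100%; 4+1 EC strip overhead: 25%")
```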



  • Windows Guest VM Failover Cluster [Removed from Tech Preview]
    Support for configuring Windows guest VMs as a failover cluster. This clustering type enables applications on a failed VM to fail over to, and run on, another guest VM on the same or a different host. This NOS release supports the feature on Hyper-V hosts with in-guest VM iSCSI and SCSI-3 Persistent Reservations (PR).


  • Acropolis Volume Management [Removed from Tech Preview]
    The Acropolis Volumes API exposes back-end NDFS storage to guest operating systems, physical hosts, and containers through iSCSI. This allows any operating system to access the Nutanix DSF (Distributed Storage Fabric) and leverage its storage capabilities. Read more here.


  • Self-Service File Level Restore [Tech Preview]
    The file level restore feature allows virtual machine users to restore a file within a virtual machine from a Nutanix-protected snapshot without Nutanix administrator intervention. The feature works by mounting a given snapshot as a local drive inside the virtual machine, allowing users to copy or restore any given file. I will provide more details on this new feature in a future post.


  • Foundation 3.0
    Nutanix Foundation is the tool that allows administrators to completely bootstrap, deploy, and configure a bare-metal Nutanix cluster from start to end, with minimal interaction, in a matter of minutes. This release of Foundation is integrated into PRISM and allows administrators to drive operations such as add-node entirely from within PRISM.


Prism Central

  • Prism Central for Acropolis Hypervisor (AHV)
    Nutanix has introduced a Prism Central VM that is compatible with AHV (Acropolis Hypervisor) to enable multicluster management in that environment. Prism Central already supported managing all three hypervisors, but now it can also run on all three: AHV, Hyper-V, and ESXi.


  • Prism Central Scalability
    The Prism Central VM requires the resources indicated in the table below to support the corresponding numbers of clusters and VMs. With the default memory configuration, the Prism Central VM can support 50 clusters and 5,000 VMs (assuming every VM has an average of two virtual disks).

[Table: Prism Central VM resource requirements by number of clusters and VMs]

To support 100 clusters and 10,000 VMs, the recommended memory capacity for the Prism Central VM is 16 GB (again assuming an average of two virtual disks per VM), and the virtual disk needs to be expanded to 250 GB. Memory and virtual disk requirements can vary depending on the number of virtual disks per VM.
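
The sizing guidance above reduces to a small lookup; here is a sketch using only the numbers from this post (and assuming roughly two virtual disks per VM):

```python
# Prism Central VM sizing per this post (~2 virtual disks per VM assumed).
SIZING = [
    # (max clusters, max VMs, memory, vdisk size)
    (50, 5000, "default memory", "default vdisk"),
    (100, 10000, "16 GB", "250 GB"),
]

def recommended(clusters, vms):
    for max_clusters, max_vms, memory, vdisk in SIZING:
        if clusters <= max_clusters and vms <= max_vms:
            return memory, vdisk
    raise ValueError("beyond the limits documented for this release")

print(recommended(80, 9000))  # ('16 GB', '250 GB')
```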

Nutanix Cluster Check

Acropolis base software 4.5 includes Nutanix Cluster Check (NCC) 2.1, which adds a number of new checks and functions. I will later post an article dedicated to NCC 2.1.



…and more… we still have another big release coming very soon!!!



This article was first published by Andre Leibovici (@andreleibovici) at myvirtualcloud.net
