Apr 27 2015

Tech Reasons why Nutanix leads the VDI industry

It has been a long time since I last explored Nutanix technology through a VDI lens. Features have been added to the platform since my last article on the subject, but Nutanix is still unquestionably my infrastructure of choice for VDI; not because I work for Nutanix, but because there are very good reasons for it.

Since this is a high-level article, to some extent even considered marketing by some of my readers, I am associating each topic with a technical post that provides a deeper understanding of the technology, features and architecture.

 

Performance

 

  • Data Locality

Nutanix uses a shared-nothing distributed architecture that ensures desktop data is always replicated across SSD, HDD, servers and racks for high availability. A desktop can access data from anywhere on a Nutanix cluster, but Nutanix ensures that the active data belonging to a desktop is always hosted on the server where the desktop is running. This process is transparent and occurs in the background using free CPU cycles.

Data Locality is a key performance enabler for VDI, ensuring that important desktop and user data are located as close as possible to memory and CPU, avoiding multiple network hops.
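Conceptually, the data locality read path can be sketched as below. This is a simplified, hypothetical model for illustration only (class and method names are mine, not Nutanix's): one replica always lands on the server running the desktop, a read after a VM migration is served remotely once, and the extent is then localized in the background.

```python
# Simplified illustration of data locality: reads are served from the local
# node when a replica exists there; a remote read triggers background
# localization of the extent. Hypothetical model, not Nutanix's actual code.

class Cluster:
    def __init__(self, num_nodes, replication_factor=2):
        self.num_nodes = num_nodes
        self.rf = replication_factor
        self.replicas = {}  # extent_id -> set of node ids holding a copy

    def write(self, extent_id, local_node):
        # One replica always lands on the node running the desktop; the
        # rest are spread across other nodes for availability.
        others = [n for n in range(self.num_nodes) if n != local_node]
        self.replicas[extent_id] = {local_node, *others[: self.rf - 1]}

    def read(self, extent_id, local_node):
        nodes = self.replicas[extent_id]
        if local_node in nodes:
            return "local"        # no network hop
        nodes.add(local_node)     # localize the extent in the background
        return "remote"           # first read after a desktop moves

cluster = Cluster(num_nodes=4)
cluster.write("ext1", local_node=0)
print(cluster.read("ext1", 0))  # local
print(cluster.read("ext1", 2))  # remote (desktop moved); extent localized
print(cluster.read("ext1", 2))  # local on subsequent reads
```

The key property the sketch shows is that remote reads are a one-time cost after a migration; steady-state reads never cross the network.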

 

  • Shadow Cloning

Shadow Cloning intelligently analyzes the desktop I/O pattern at the storage layer and identifies disks shared in read-only mode (Linked Clone replica or MCS private image). When a read-only disk is discovered, Nutanix marks the disk as immutable and automatically creates copies of it on each server in the cluster, ensuring all read I/O is local to the server where the desktop is running.

Shadow Cloning is a key performance enabler for application virtualization solutions such as VMware AppVolumes.

 

Some of the Shadow Cloning benefits are:

  • Nutanix does not require VMware’s CBRC (Content Based Read Cache) and is not limited to CBRC’s 2 GB RAM cap.
  • Reduced storage network overhead, as read I/O is serviced locally; this minimizes network congestion and latency and delivers the best performance.
  • During boot storms, login storms and antivirus scans all data is serviced locally and NO read I/O is forced through a single storage controller or server. This not only improves read performance but also leaves more I/O available for write operations, which generally make up 65% or more of VDI I/O.
  • The solution scales while maintaining linear application performance; performance does not taper off at scale.
  • When the base image is updated, Nutanix automatically detects the change and restarts the shadow cloning process.
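The decision logic described above can be sketched roughly as follows. This is a hypothetical model of my own (not Nutanix source code): a vDisk read by multiple nodes and never written is treated as a shared base image, marked immutable, and cached locally on each reading node; any write invalidates the shadow copies, which matches the base-image-update behavior in the last bullet.

```python
# Hypothetical sketch of the shadow-clone decision: a vDisk read by multiple
# remote nodes and never written is marked immutable, and each reading node
# keeps a local shadow copy so base-image reads never cross the network.

class VDisk:
    def __init__(self):
        self.readers = set()
        self.writes = 0
        self.immutable = False
        self.shadow_copies = set()  # nodes holding a local shadow clone

    def access(self, node, write=False):
        if write:
            self.writes += 1
            self.immutable = False       # any write invalidates shadows,
            self.shadow_copies.clear()   # restarting the shadow process
            return "write"
        self.readers.add(node)
        # Multi-reader, read-only pattern => shared base image.
        if self.writes == 0 and len(self.readers) > 1:
            self.immutable = True
            self.shadow_copies.add(node)  # localize a copy on this node
        return "local" if node in self.shadow_copies else "remote"

replica = VDisk()
replica.access(node=0)          # first reader: served by the owning node
replica.access(node=1)          # second reader triggers shadow cloning
print(replica.immutable)        # True
print(replica.access(node=1))   # local
```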

 

  • Performance Deduplication

The Nutanix deduplication engine performs inline deduplication in the performance tiers (RAM and SSD), and post-process deduplication of high-yield candidates in the capacity tier, optimizing across both performance and capacity tiers without impacting foreground operations.

Nutanix uses a tiered storage architecture, with RAM and SSD as the performance tiers. This combination provides access to frequently accessed data in microseconds, instead of the milliseconds seen when only SSD is used, and directly enhances the end-user experience.

The deduplication engine is designed for scale-out, providing near-instantaneous application response times. Because Nutanix is hypervisor agnostic, this feature is also available on any hypervisor you choose to work with.

With data being deduplicated in RAM and SSD, similar desktops do not have to compete for data placement in cache, because most desktops in a VDI deployment are essentially the same and contain similar data. In the VDI context this means that desktops can be deployed without the capacity or performance penalties common with traditional storage or other hyper-converged solutions.
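The caching effect described above can be illustrated with a minimal sketch of content-based deduplication. This is illustrative only (the class and sizing are mine): extents are fingerprinted and the cache stores a single copy per fingerprint, so many near-identical desktops share one cached copy of common data.

```python
# Minimal sketch of content-based deduplication in a performance tier:
# extents are fingerprinted (here with SHA-1) and the cache stores one copy
# per fingerprint, so similar desktops share cached data. Illustrative only.

import hashlib

class DedupCache:
    def __init__(self):
        self.store = {}    # fingerprint -> data
        self.logical = 0   # bytes written by desktops
        self.physical = 0  # bytes actually held in cache

    def write(self, data: bytes):
        fp = hashlib.sha1(data).hexdigest()
        self.logical += len(data)
        if fp not in self.store:  # only unique content consumes cache
            self.store[fp] = data
            self.physical += len(data)
        return fp

cache = DedupCache()
common = b"windows-system-dll" * 1024   # data shared by every desktop
for _ in range(100):                    # 100 similar desktops boot
    cache.write(common)
print(cache.logical // cache.physical)  # 100x cache efficiency on shared data
```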

 

  • View Composer Array Integration (VCAI) and Offloaded Data Transfers (ODX)

VCAI is part of the vSphere vStorage APIs for Array Integration (VAAI) stack and allows administrators to take advantage of the native Nutanix snapshot and cloning features within the usual administrative workflow of Horizon View with View Composer; ODX provides the equivalent offload for XenDesktop with Hyper-V.

These features help reduce the time it takes to provision desktops. When desktops are created, the operation is offloaded to the Nutanix controllers, which handle operations such as snapshot and clone creation, drastically cutting down provisioning times and capacity requirements.

VAAI and ODX also facilitate Nutanix intelligent cloning, preventing the storage controllers from processing duplicate data in the first place. Desktops that are intelligently cloned therefore never need to be deduplicated, because duplicate data is never written or processed.
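The idea behind an offloaded, intelligent clone can be sketched like this (a conceptual model with hypothetical names, not the actual VAAI/ODX or Nutanix implementation): the clone is a metadata pointer to an immutable base, and only changed blocks ever consume space or I/O.

```python
# Conceptual sketch of an offloaded (VAAI/ODX-style) clone: instead of the
# hypervisor copying every block over the network, the storage layer creates
# a metadata pointer to the base image; only changed blocks consume space.

class Snapshot:
    def __init__(self, blocks):
        self.blocks = blocks  # block_id -> data (immutable base image)

class Clone:
    def __init__(self, base: Snapshot):
        self.base = base      # metadata pointer; no data is copied
        self.delta = {}       # copy-on-write changes only

    def read(self, block_id):
        return self.delta.get(block_id, self.base.blocks.get(block_id))

    def write(self, block_id, data):
        self.delta[block_id] = data  # only new data is ever written

base = Snapshot({i: b"base" for i in range(1000)})
desktops = [Clone(base) for _ in range(500)]  # near-instant, near-zero space
desktops[0].write(7, b"user-change")
print(desktops[0].read(7), desktops[1].read(7))  # b'user-change' b'base'
```

Because duplicate blocks are never written, there is nothing for the deduplication engine to do for cloned desktops, which is the point made above.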

 

Capacity

 

  • Capacity Deduplication

MapReduce technology is used for post-process deduplication, enabling intelligent selection of data candidates that deduplicate well. This allows Nutanix to achieve savings without bloating metadata unnecessarily; data candidates with low or no matches are not deduplicated.

By avoiding metadata bloat from non-dedupable candidates, more of the RAM and SSD resources are made available for caching, resulting in optimal use of the storage controller's resources. In effect, Nutanix is capable of making intelligent cost-benefit decisions.
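The candidate-selection step can be sketched as a MapReduce-style refcount over fingerprints. This is my own simplified illustration (the threshold and names are assumptions): only fingerprints with enough duplicates to justify the metadata cost are selected.

```python
# Sketch of MapReduce-style candidate selection for post-process dedup:
# count occurrences of each fingerprint across the cluster (map + reduce by
# key), then deduplicate only high-yield fingerprints. The min_refs
# threshold is a hypothetical stand-in for the real cost-benefit decision.

from collections import Counter

def select_dedup_candidates(extent_fingerprints, min_refs=3):
    refs = Counter(extent_fingerprints)  # reduce: refcount per fingerprint
    # Low-yield candidates (unique or nearly unique) are skipped so their
    # metadata never bloats the system.
    return {fp for fp, count in refs.items() if count >= min_refs}

scan = ["a", "a", "a", "a", "b", "c", "c", "d"]  # fingerprints from a scan
print(select_dedup_candidates(scan))  # {'a'} — only 'a' pays off
```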

 

  • Compression

Nutanix allows the creation of compression-enabled containers. As users create data and write it to disk, Nutanix automatically compresses data in the capacity tier once it is no longer in active use.

Nutanix compression increases the usable capacity across storage tiers for user data, eliminating the capacity bottleneck and effectively enabling organizations to deploy persistent desktops. Tests have demonstrated capacity reductions of up to 75% for the user data footprint in VDI deployments.
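The mechanism is easy to demonstrate with a standard compressor on cold, text-heavy user data. The snippet below uses zlib purely as a stand-in; actual savings depend entirely on the data (the up-to-75% figure above came from real VDI tests, while this repetitive sample compresses far better than typical data).

```python
# Illustration of capacity-tier compression on cold user data, with zlib as
# a stand-in compressor. The sample data is artificially repetitive, so the
# ratio here is better than real-world user data would achieve.

import zlib

cold_user_data = b"quarterly report draft, meeting notes, config files\n" * 2000
compressed = zlib.compress(cold_user_data, level=6)
ratio = 1 - len(compressed) / len(cold_user_data)
print(f"capacity saved: {ratio:.0%}")  # prints the fraction saved
```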

 

Scalability

 

  • Linear and Granular Scalability

Compute and storage scale independently via the use of CPU-heavy and storage-heavy nodes in the same cluster. However, what really matters is that organizations are not required to procure pricey infrastructure solutions on day one if the initial VDI deployment caters for only a small number of users. The Nutanix cluster grows linearly node by node, with predictable performance, as your VDI implementation grows over time.

None of the monolithic storage solutions on the market are able to provide this linear scale-out approach with such granular and cost-effective scaling increments.

 

  • High Consolidation

Nutanix VDI consolidation ratios are significantly higher than other hyper-converged infrastructures due to the performance and capacity enhancements employed by Nutanix. Tests have demonstrated that a Nutanix cluster can deliver 73% more desktops than other hyper-converged solutions.

 

Disaster Recovery

 

  • Full Clones

Nutanix provides native asynchronous and synchronous VM-centric replication, automatically registering and powering on desktops on the destination vCenter or SCVMM and making them available for use in the recovery site. When the recovery event is over, Nutanix applies all data block changes back to the primary site and re-initiates the desktops.

The replication uses incremental, fine-grained, byte-level data transfers with intelligent data compression, minimizing the load on network and storage resources. At the end of the day this means cost and time savings for organizations.

Nutanix enables complete failover of VDI deployments to a secondary site and, at a later stage, failback of the newly generated data to the primary datacenter.
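Incremental, compressed replication can be sketched as follows. This is a simplified model of my own (not the actual replication protocol): only blocks changed since the last replicated snapshot are sent, and each delta is compressed before crossing the WAN.

```python
# Simplified sketch of incremental replication: only blocks that changed
# since the last snapshot are transferred, compressed on the wire.
# Hypothetical model, using zlib as a stand-in for the real compression.

import zlib

def compute_delta(prev_snapshot: dict, curr_snapshot: dict):
    # Send only blocks that are new or changed since the last snapshot.
    return {bid: data for bid, data in curr_snapshot.items()
            if prev_snapshot.get(bid) != data}

prev = {0: b"A" * 4096, 1: b"B" * 4096, 2: b"C" * 4096}
curr = {0: b"A" * 4096, 1: b"B2" * 2048, 2: b"C" * 4096, 3: b"D" * 4096}
delta = compute_delta(prev, curr)
wire_bytes = sum(len(zlib.compress(d)) for d in delta.values())
print(sorted(delta))  # [1, 3] — only the changed and new blocks
print(wire_bytes < sum(map(len, delta.values())))  # True: compressed on the wire
```

Failback works the same way in reverse: the delta accumulated at the recovery site is replayed against the primary site's last known snapshot.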

 

  • Linked Clones

Nutanix has a complete understanding of the Horizon View Composer and Citrix MCS intricacies and is able to back up, restore and replicate Linked Clone desktops to a secondary site. Additionally, when in recovery mode, it is possible to power on those desktops (Nutanix automatically registers them with vCenter or SCVMM in the recovery site) and make use of them. When the recovery event is over, changes are replicated back to the primary site and life returns to normal.

Desktops are not the only resources needed in recovery mode; you will also need Connection and Security Servers, Active Directory, and SQL or Oracle databases. All of these components, if not already available in the recovery site, can also be replicated and made available for use.

 

Ease of Management

Nutanix PRISM Central is the one-stop shop for every Nutanix administrator. PRISM Central consolidates all Nutanix clusters across all data centers into a single intuitive user interface that combines information about storage, hypervisors and desktops; a single pane of glass to manage multiple data centers.

PRISM Central saves administrators from having to sign in to every cluster individually, while providing aggregated cluster health, alerts and historical data. Administrators are effectively able to manage all Nutanix clusters from the same UI.

Nutanix PRISM Central Demo Video (multi-datacenter management)

 

Conclusion

I am not claiming that Nutanix has more VDI seats than traditional storage architectures (SAN). VDI and RDSH are not new concepts and have been in use for many years. Hyper-convergence and Nutanix simplify the deployment process while providing better consolidation ratios, scalability and performance. IT administrators now understand the benefits, and the pendulum is quickly swinging in favor of hyper-converged architectures and Nutanix.

 

For more technical content on Nutanix features and architecture go to the NutanixIndex or the NutanixBible.

 

This article was first published by Andre Leibovici (@andreleibovici) at myvirtualcloud.net

 

Permanent link to this article: http://myvirtualcloud.net/?p=7032

Apr 08 2015

Are PoC and Pilot no longer required for VDI deployments?

Back in 2011, in the world of SAN, I wrote an article called “VDI USER EXPERIENCE and USER ACCEPTANCE”. The article iterated through my experiences reviving VDI deployments that had failed due to poor technical design, mostly driven by storage performance complications.

In that article I explain that User Experience is the interaction between users and the desktop interface, in this case Windows and the applications. User Experience can only be measured by user perception against a known baseline; it is not a metric that can be collected and analyzed.

Metrics such as CPU utilization, storage latency and network contention do not provide a user experience index. As an example, network bandwidth utilization could be low while display performance is sub-optimal due to a misconfigured display protocol. There are tools that provide in-guest analytics, but they offer a limited view of the wider user environment; still, it's good to have them.

I then explain User Acceptance as a consequence of a good User Experience. If the User Experience isn't great there will be no User Acceptance, and here I find the most dramatic consequences of a poorly designed solution. If users are somewhat disappointed in their very first interaction with the new VDI environment, it will cost time, effort and sometimes money to win them back; and just like in any market, if users are happy they spread the word, and if they are unhappy they will also spread the word.

yada, yada, yada… then I talk about the importance of never overlooking the VDI pilot and Proof of Concept (POC) phases. All good so far, and we all agree that POC and Pilot are essential for successful VDI rollouts.

 

 

What has changed?

Since 2011 hyper-convergence has emerged and matured, removing many of the complex aspects required to rollout successful VDI solutions.

The absence of dual controllers, as found in centralized storage (SAN), completely removes the need to validate storage architectures under full load. The distributed nature of hyper-converged solutions means that testing a single server under load is enough to characterize an entire VDI deployment, whether it serves 100 or 100,000 users. You will know that for a given user profile you can host n users per server.

That also means there are no calculations required for the number and types of disks, RAID groups, LUNs, or controller overload in case of failure. Hyper-convergence implements an independent virtual controller per server, and in case of a server component failure only the desktops on that server are affected, not the entire fleet.

Finally, hyper-convergence excels when VDI deployments need to grow, requiring no complex math exercises to calculate storage performance and capacity expansions. Remember, you already know how many users of a given profile each server will support; simply add servers to the existing cluster.
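The sizing logic above reduces to simple arithmetic once a single node has been characterized: users scale with nodes, plus one spare node for failure tolerance. The numbers below are illustrative assumptions, not a vendor sizing guide.

```python
# Hyper-converged sizing sketch: once one node has been characterized for a
# user profile, cluster size is ceiling division plus an N+1 spare node.
# users_per_node here is an illustrative assumption, not a benchmark.

def nodes_required(total_users, users_per_node, n_plus_1=True):
    full_nodes = -(-total_users // users_per_node)  # ceiling division
    return full_nodes + (1 if n_plus_1 else 0)

print(nodes_required(100, users_per_node=110))      # 2  (1 node + 1 spare)
print(nodes_required(100_000, users_per_node=110))  # 911
```

The same formula works at any scale, which is the point: growing from a pilot to production is addition, not a storage re-architecture.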

Back to the POC and Pilot phases: they are still essential and you should never skip them. However, they are now much easier to carry out with a little help from hyper-convergence.

 

Last but not least, make sure you select the VDI solution and supporting technologies that cater to your users' needs; make sure the products will have longevity and are the right fit for your organization. VDI can be very complex, and hyper-convergence makes it simpler, but look for vendors that are willing to help you through the process to improve your chances of being successful.

 


 

Permanent link to this article: http://myvirtualcloud.net/?p=7016

Apr 05 2015

VDI Calculator v6.2 is Now Available w/ Haswell support

Today I am announcing the general availability of the new VDI Calculator v6.2. This version introduces support for the Xeon E5-2600 V3 family of processors, based on Haswell microarchitecture, and Configure To Order (CTO) configurations for the Nutanix NX and Dell XC series.

 

  • Haswell E5-16xx/26xx v3 processors – Haswell is the successor to the Ivy Bridge microarchitecture, which is the de facto processor in all Nutanix hardware platforms today. The VDI calculator now accepts Haswell E5-16xx/26xx v3 processors with up to 18 cores. Please note that Haswell configurations can also be used for traditional SAN (3-tier) deployments. According to Intel, “From a performance perspective we are delivering worldwide performance levels, tripling performance [thanks to] the Xeon V3’s 18 cores [that] offer a 50 percent increase over the prior generation”. On Nutanix NX and Dell XC the storage performance improvements are visible, as you can see in the graph below.

 

 

 

During tests with the LoginVSI Knowledge Worker profile we are seeing a 15 to 25% performance improvement, coming from the Haswell processor but also from the improved memory speed. On a Nutanix NX-3460 (2x10c) we now see around 125 2vCPU desktops with the aforementioned LoginVSI profile. Tests aside, I would still strongly recommend a proper assessment to determine the resources required by your organization's VDI deployment.

 

  • Configure To Order (CTO) – Both Nutanix and Dell now support Configure To Order (CTO). With this, you have the ability to right-size your hardware platform to exactly meet the needs of your workloads by choosing the right amount of compute, memory and storage that your VDI solution needs. Given that the configuration is now flexible, I have removed the rigid selection of specific Nutanix and Dell models, opening up the choice. Please make sure you verify the available models with Nutanix or Dell before setting a specific ‘Socket per Host’ and ‘Cores per Socket’ configuration.

 

 

To access the new VDI calculator click here.

 


 

 

 

Permanent link to this article: http://myvirtualcloud.net/?p=7010
