Sep 29 2014

What is your Organization’s Cloud Strategy?


A few recent events have brought bad press to the public cloud market. These occurrences got me thinking about enterprise cloud strategies and the importance of being open to disruption.

The most recent are the Microsoft Azure outage and the full reboots at Amazon and Rackspace, but a quick news search turns up many more events like these.

The fact that public cloud services still cannot deliver highly stable, available and secure platforms is, for the most part, something that enterprises using those services cannot afford.

Yesterday during Oracle OpenWorld, Intel's president said that 75% of all chips sold last year went to public cloud/web-style services and 25% to private enterprises, clearly demonstrating a global trend.

On the other hand, I am starting to notice enterprises complaining about the high prices charged by public cloud services, and a few of these organizations have already started moving some of their workloads back in-house. However, it's important to note that many of the workloads now migrating to enterprise datacenters have never actually been hosted in-house; they were initiated and first powered on in public clouds. The shift is not as dramatic as some have reported, but there is an undeniable movement towards in-housing some workloads, particularly those that do not require public-facing Internet access or are not built with web scalability in mind.

 

However, public clouds are evolving and innovating at an amazing pace, all while dropping their prices. Microsoft Azure was not on the map two years ago; today it has offerings comparable to Amazon's, and it is the only one of the big three players with an on-premises offering. Google has also expanded its portfolio with a number of different platform services.

I recently spoke with a friend working at [[redacted]], and he explained how they are locked into the Amazon S3 APIs and how they wish they were free to move their service to a different public cloud without major disruption.

  • Users fear cloud breaches are more expensive (InfoWorld)
  • What’s Right For Your Business? Private, Public, or Hybrid Cloud? (Forbes)

 

All this choice, transformation and disruption in the industry can be overwhelming for organizations that are just trying to run their core workloads while figuring out the best way to do so.

The gigantic problem is that enterprise workloads running on-premises today may be better served by Amazon tomorrow, or even by a smaller niche cloud services provider in the near future. At the same time, organizations may have requirements, such as deploying FIPS-compliant systems, that must be accommodated in-house. Moreover, if today's on-premises virtualization standard is VMware vSphere, tomorrow your organization may decide to adopt Hyper-V or KVM for cost savings and for future direct integration with cloud services.

The reality is that we are heading towards a hybrid cloud world, where workloads will move around for different business, technology and compliance reasons. This should not be news to anyone at this point.

 

Conclusion

When I think about enterprise workloads, I see them confined within their walls today, whether on-premises or off-premises, and this is certainly going to hinder organizations' ability to be agile and competitive in fast-evolving economies where cost and responsiveness are fundamental factors for success.

If you are an IT or executive leader, it's time to start thinking about workload mobility and about system platforms that provide the methods and features to migrate workloads from on-premises to any public cloud, from any public cloud back to on-premises, and even between public clouds when necessary.

Systems that enable this type of any-to-any workload migration must streamline and facilitate data movement for most types of data transition, including backup and disaster recovery across platforms and cloud services.

From my point of view, only platforms and solutions that allow for a real choice of cloud services will endure and truly position organizations for the next cycle of cloud innovation.

 

This article was first published by Andre Leibovici (@andreleibovici) at myvirtualcloud.net.

Permanent link to this article: http://myvirtualcloud.net/?p=6649

Sep 28 2014

How to enable Nutanix De-duplication per VMDK/VHD/RAW

Nutanix performance-tier de-duplication allows VM data to be shared across the premium storage tiers (RAM and SSD). Guest VM performance suffers when active data no longer fits in these premium tiers. Nutanix also provides de-duplication for the capacity tier (HDD), enabling greater VM density.

Nutanix de-duplication is 100% software-defined, with no dedicated controllers or hardware crutches, and the feature is available on all supported hypervisors (vSphere, Hyper-V and KVM).

Capacity-tier de-duplication is post-process, meaning common blocks are consolidated by the Curator background process, by default every 6 hours, while de-duplication in the performance tier happens in real time, as data blocks traverse RAM or flash. This hybrid de-duplication approach allows the Nutanix Controllers to be less intrusive and spend fewer CPU cycles detecting common data blocks.

To learn more about the de-duplication process, read Nutanix 4.0 Hybrid On-Disk De-Duplication Explained.

 

Via the Prism UI, on-disk capacity de-duplication is enabled on a per-Container basis. However, it is also possible to enable and disable de-duplication per VMDK, VHD and RAW device (vDisk) using the NCLI. This can be very useful when you want to de-duplicate a set of VMs, but not an entire datastore.
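
For completeness, the Container-level setting can presumably also be toggled from the NCLI. The sketch below assumes the ctr edit command accepts the same de-duplication flags as vdisk edit, discussed next; treat this as an assumption and verify the exact syntax against your NOS version:

    ncli> ctr edit name=<container-name> fingerprint-on-write=true on-disk-dedup=true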

 

First, identify the vDisk to be de-duplicated:

    ncli> vdisk list
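
On clusters with many vDisks this list can be long. As an illustrative shortcut, assuming you are at the Controller VM's bash prompt, where NCLI can be invoked non-interactively, you can filter the output (the disk name below is a hypothetical placeholder):

    nutanix@cvm$ ncli vdisk list | grep -i myvm-flat.vmdk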

 

To edit the vDisk properties, use vdisk edit with the name identified by the previous command.
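
As a minimal sketch, the command below enables both required settings on a single vDisk. The parameter names follow the flags discussed in the note further down, but the exact syntax is an assumption to verify against your NOS version:

    ncli> vdisk edit name=<vdisk-name> fingerprint-on-write=true on-disk-dedup=true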

 

Please note that to enable on-disk-dedup, fingerprint-on-write must also be turned on, and only fingerprinted data is considered for post-process de-duplication during Curator full scans. Background dedup jobs are scheduled in a rate-limited fashion and do not impact cluster performance.
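
Likewise, de-duplication can be switched off for an individual vDisk by reverting the flag (again, an assumption to verify on your NOS version):

    ncli> vdisk edit name=<vdisk-name> on-disk-dedup=false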

 

This article was first published by Andre Leibovici (@andreleibovici) at myvirtualcloud.net.

Permanent link to this article: http://myvirtualcloud.net/?p=6643

Sep 22 2014

Nutanix Cluster Health 1.2 Released Today

Web-scale and converged platforms that support large-scale deployments must embrace architectures with no single point of failure or bottleneck in their management services. Tolerance of failures is key to a stable, scalable distributed system, and the ability to keep functioning in the presence of failures is crucial for availability.

Some Nutanix customers run just three nodes, while others run thousands of nodes, and at that scale automation is fundamental to managing clusters, datacenters and operations properly. More importantly, it is crucial that platforms operating at that scale identify potential conditions and problems before they affect resiliency or availability.

I have previously introduced NCC, but let's recap. NCC is the framework that serves as the engine for Nutanix Cluster Health. It consists of various modules and plugins, where the modules are groups of plugins corresponding to a specific test category. NCC can be run as long as individual nodes are up, regardless of cluster state. The scripts run standard commands against the cluster or the nodes, depending on the type of information being retrieved.
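
For context, here is how a full run of the health checks is typically kicked off from a Controller VM once NCC is installed (invocation details may differ between NCC versions):

    nutanix@cvm$ ncc health_checks run_all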

 


 

If you want to learn more about NCC, read NCC 1.1 is available for download Now and it boosts Nutanix Cluster Health.

In every NCC release, Nutanix adds new plugins/checks to detect anomalies in the cluster and report any issues. Today Nutanix is releasing NCC 1.2, which includes the following features/enhancements:

 

  • Microsoft Hyper-V hypervisor support
  • Support for Nutanix OS (NOS) versions 3.5.3, 3.5.4, 4.0.1, and 4.0.2 only, including a new installer to help you upgrade your NCC version
  • Nutanix Models NX-1020/1050, NX-2000/2050, NX-3000/3050/3060, NX-6060
  • Dell XC Series of Web-scale Converged Appliances
  • New checks that help you proactively test your cluster, ensuring your environment is operating optimally

 

Find out more about NCC 1.2 and download it from the Nutanix support portal at portal.nutanix.com.

This article was first published by Andre Leibovici (@andreleibovici) at myvirtualcloud.net.

Permanent link to this article: http://myvirtualcloud.net/?p=6633
