Sep 30 2014

Web-Scale IT: Are Enterprises Ready? Free webinar tomorrow.

I am honored to be part of a debate on Web-Scale IT promoted by GigaOm Research, along with Mike Karp, Vice President & Principal Analyst at Ptak, Noel & Associates, and David S. Linthicum, SVP at Cloud Technology Partners. The panel takes place tomorrow, Wednesday, October 1, 2014 (sorry for the late notice) at 10:00 a.m. PDT.

By some estimates, within the next five years web-scale IT will be present in more than half of the world’s enterprises. Web-scale IT has enabled companies like Google, Facebook and Amazon to achieve unprecedented scalability, agility and simplicity. Can enterprises learn from the way some of the largest and most successful companies buy, deploy and operate their infrastructure?

This webinar will help enterprises understand the fundamentals of web-scale IT and how other companies are using web-scale principles to build private cloud-like environments.


What Will Be Discussed:

  • What is web-scale IT, where did it come from, and what does it offer?
  • What are the shortcomings of existing IT environments and how can web-scale IT address them?
  • What are the design goals, principles and characteristics of web-scale IT?
  • How does web-scale IT relate to other trends, like software-defined data centers, scale-out computing, hybrid cloud, DevOps and hyperconvergence?
  • How can IT infrastructure incorporate web-scale IT to meet the demands of cloud-like operation and delivery?

Who should attend:

  • CIOs
  • IT decision makers
  • Business strategists and decision makers
  • Cloud platform providers
  • Service provider executives
  • Data managers/developers
  • Enterprise software and technology vendors


Click here for registration and join us!


This article was first published by Andre Leibovici (@andreleibovici) at

Permanent link to this article:

Sep 29 2014

What is your Organization’s Cloud Strategy?

A few recent events have generated bad press for the public cloud market. These occurrences got me thinking about enterprise cloud strategies and the importance of staying open to disruption.

The most recent are the Microsoft Azure outage and the Amazon and Rackspace full reboots, but a quick news search turns up many more events like these.

Public cloud services are still unable to guarantee highly stable, available and secure platforms, and for the most part that is something enterprises relying on those services cannot afford.

Yesterday during Oracle OpenWorld, Intel's president said that 75% of all chips sold last year went to public cloud/web-style services and 25% to private enterprises, clearly demonstrating a global trend.

On the other hand, I am starting to notice enterprises complaining about the high prices charged by public cloud services, and a few of these organizations have already started moving some of their workloads back in-house. However, it is important to note that this time around many of the workloads migrating to enterprise datacenters have never actually been hosted in-house; they were initiated and first powered on in public clouds. The shift is not as dramatic as some have reported, but there is an undeniable movement toward in-housing some workloads, particularly those that do not require public-facing Internet access or are not built with web scalability in mind.


At the same time, public clouds are evolving and innovating at an amazing pace, as well as dropping their prices. Microsoft Azure was barely on the radar two years ago; today it has offerings comparable to Amazon's, and it is the only one of the big three players with an on-premises offering. Google has also expanded its portfolio with a number of different platform services.

I recently spoke with a friend working for [[redacted]], and he explained how they are locked into the Amazon S3 APIs and wish they were free to move their service to a different public cloud without major disruption.

Users fear cloud breaches are more expensive (InfoWorld)
What’s Right For Your Business? Private, Public, or Hybrid Cloud? (Forbes)


All this choice, transformation and disruption in the industry can overwhelm organizations that are simply trying to run their core workloads while figuring out the best way to do so.

The gigantic problem is that enterprise workloads running on-premises today may be better served by Amazon tomorrow, or even by a smaller niche cloud service provider in the near future. Some organizations may have requirements, such as deploying FIPS-compliant systems, that must be accommodated in-house. Moreover, if today's on-premises virtualization standard is VMware vSphere, tomorrow your organization may decide to adopt Hyper-V or KVM for cost savings and future direct integration with cloud services.

The reality is that we are heading towards a hybrid cloud world, where workloads will move around for different business, technology and compliance reasons. This should not be news to anyone at this point.



When I think about enterprise workloads today, I see them confined within their walls, be it on-premises or off-premises, and this will certainly hinder organizations' ability to be agile and competitive in fast-evolving economies where cost and responsiveness are fundamental factors for success.

If you are an IT or executive leader, it is time to start thinking about workload mobility and about system platforms that provide the methods and features to migrate workloads from on-premises to any public cloud, from any public cloud back on-premises, and even between public clouds when necessary.

Systems that enable this type of any-to-any workload migration must streamline data movement for most types of data transition, including backup and disaster recovery between platforms and cloud services.

From my point of view, only platforms and solutions that allow for real choice of cloud services will endure and truly enable organizations for the next cycle of cloud innovation.



Sep 28 2014

How to enable Nutanix De-duplication per VMDK/VHD/RAW

The Nutanix performance tier de-duplication allows the sharing of VM data on premium storage tiers (RAM and SSD). Performance of guest VMs suffers when active data can no longer fit in these premium tiers. Nutanix also provides de-duplication for the capacity tier (HDD) for greater VM density.

Nutanix de-duplication is 100% software defined, with no controllers or hardware crutch; and this feature is available for all supported hypervisors (vSphere, Hyper-V and KVM).

The capacity tier de-duplication is a post-process de-duplication, meaning common blocks are aggregated by the Curator background process, by default every 6 hours, while de-duplication in the performance tier happens in real time, as data blocks traverse RAM or flash. This hybrid approach allows Nutanix Controllers to be less intrusive and consume fewer CPU cycles when detecting common data blocks.

To learn more about the de-duplication process read Nutanix 4.0 Hybrid On-Disk De-Duplication Explained.


Via the Prism UI, on-disk capacity de-duplication is enabled on a per-Container basis. However, it is also possible to enable and disable de-duplication per VMDK, VHD and RAW device (vDisk) using the NCLI. This is very useful when you want to de-duplicate a set of VMs, but not the entire datastore.


First, identify the vDisk to be de-duplicated using: ncli> vdisk list


To edit the vDisk properties, use vdisk edit with the name identified in the previous command.


Please note that to enable on-disk-dedup, fingerprint-on-write must also be turned on; only fingerprinted data is considered for post-process de-duplication during Curator full scans. Background dedup jobs are scheduled in a rate-limiting fashion and do not impact cluster performance.
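Putting the steps above together, a minimal NCLI session might look like the following sketch. The vDisk name below is an illustrative placeholder, and exact parameter names should be verified against the NCLI reference for your NOS version:

```
# List all vDisks and note the name of the one to de-duplicate
ncli> vdisk list

# Enable fingerprinting (prerequisite) and on-disk de-duplication
# for a single vDisk (name shown is a hypothetical example)
ncli> vdisk edit name=NFS:5:0:284 fingerprint-on-write=true on-disk-dedup=true

# To revert later, set both properties back to false
ncli> vdisk edit name=NFS:5:0:284 fingerprint-on-write=false on-disk-dedup=false
```

Because fingerprinting only applies to newly written data, existing cold data on the vDisk may not be de-duplicated until it is rewritten or fingerprinted by a later scan.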


