Hyper-Convergence is a Commodity. Now What?!

Years ago, companies started to face pressure from the business to deliver better service and agility at lower cost. However, Tech Ops and IT departments were entrenched in complex system configurations, domain-specific skill sets, and a lack of resource elasticity, effectively under-delivering on business needs.

Going back in time… in the early 2000s VMware revolutionized enterprise IT by allowing enterprises to abstract, pool, aggregate and distribute x86 computational resources (namely CPU and memory), with added management capabilities. This allowed enterprises to achieve better hardware utilization ratios, easier management, lower CAPEX and OPEX, higher availability, reduced time to market and greater agility.

The rest of the datacenter, namely networking and storage, was not part of this revolution. For many years these layers remained complex and were treated as silos of infrastructure, creating ever-increasing dependencies on specialized hardware and specialized skill sets.

In 2003, Google engineers designed the Google File System, a scalable distributed file system for large, data-intensive applications, focused on fault tolerance and high aggregate performance while running on inexpensive commodity hardware. Arguably, this was the first hyper-converged infrastructure (HCI) solution to go beyond CPU and memory allocation and distribution.

HCI bundles multiple parts of datacenter infrastructure into a single, optimized computing unit. Generally speaking, this includes compute, storage, networking and management software. HCI allows enterprises to use simple, modular building blocks of commodity hardware that provide massive, linear scale-out architectures with predictable performance.

As with any evolving industry, technologies are replicated, and when they achieve critical mass they deliver similar benefits to customers; they become a commodity. Many people will certainly object to my observations, but it is clear that hypervisor technology has been fully commoditized. Unquestionably, some vendors have advantages over others in features and performance, but the core capabilities of abstracting, pooling, aggregating and distributing x86 computational resources to virtual machines can be handled today by all hypervisors on the market. It's up to customers to decide whether they need all the additional benefits or would rather go with a more streamlined, and perhaps less expensive, solution.



So, hypervisors are a commodity, but what about HCI and Software-Defined Storage (SDS)?

Just as hypervisors have become a market commodity, systems and solutions that provide HCI and SDS will become a commodity too. This is a busy market with many emerging players that will eventually enter so-called perfect competition, reaching equilibrium when producer supply meets consumer demand.

As always, there will be many HCI and SDS flavors, some with more features, performance or resiliency. Once again, it will be up to customers to decide which solutions they will run their datacenters on. It is also up to customers to decide where and how to apply technologies that enable them to deliver the correct SLAs to support business requirements.


What’s next?

Nutanix understands the dynamics of this fast-evolving market. Having created this market ourselves and being the clear leader with over 52% market share, according to IDC, we are not going to rest on our achievements. Although the Nutanix HCI solution is by far the most performant, resilient and feature-rich on the market today, we long ago set our sights on how enterprise datacenters will be built and managed for the decade to come.

During the Nutanix conference in June we will reveal our next powerful act, beyond SDS and HCI; we call it Nutanix Confidential 2.0.

I often tell people that Nutanix has never been a storage company, but rather a distributed systems company. Nutanix's DNA is very different from that of traditional storage companies, with engineers coming from large web-scale companies that mastered distributed systems architectures long ago. It just so happens that Nutanix started by delivering products for the storage layer, because it made sense at that point in time.

Enterprises today have a huge need to simplify datacenter management and troubleshooting at any scale. The conference will be the perfect occasion to tell you more about it. All I can say for now is… distributed, distributed, distributed, analytics, analytics, analytics! It's going to be incredible!


If you are interested in how to architect the datacenter for the next decade, I strongly recommend attending the conference. Find more information here.


This article was first published by Andre Leibovici (@andreleibovici) at myvirtualcloud.net
