
Jan 16 2014

Chasing Data Center Efficiency with Hyper-Convergence

Lately a lot has been said about convergence and hyper-converged infrastructure, but these topics are not new to large-scale web companies. Years ago they began facing pressure from the business to deliver better service agility at lower cost, while Tech Ops and IT departments remained entrenched in complex system configurations, domain-specific skill sets and a lack of resource elasticity, effectively under-delivering on business needs.

  
Going back in time… in the early 2000s VMware revolutionized enterprise IT by allowing enterprises to pool, aggregate and distribute x86 computational resources, namely CPU and memory, with added management features. This allowed enterprises to achieve better hardware utilization ratios, better management capabilities, lower CAPEX and OPEX, higher availability, reduced time to market and greater agility.
  
The rest of the datacenter, namely networking and storage, was not part of this revolution. For many years those layers have remained complex and treated as silos of infrastructure, creating ever-increasing dependencies on specialized hardware and specialized skill sets.
  
In 2003, Google engineers published the design of the Google File System (GFS), a scalable distributed file system for large, data-intensive distributed applications, focused on fault tolerance and high aggregate performance while running on inexpensive commodity hardware. It was possibly the first hyper-converged infrastructure solution to go beyond CPU and memory allocation and distribution.
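As a rough illustration of that design, the sketch below shows how a file split into fixed-size chunks gets its replicas spread across commodity chunkservers, so a single node failure never makes a chunk unreadable. This is my own toy model based on the published GFS paper, not Google's code; the chunk size and replica count are the paper's defaults.

import math
import random

CHUNK_SIZE = 64 * 1024 * 1024   # 64 MB chunks, the GFS paper's default
REPLICAS = 3                    # default replication factor in the paper

def place_chunks(file_size, chunkservers):
    # Split the file into fixed-size chunks and pick REPLICAS distinct
    # commodity chunkservers for each one, so losing any single node
    # still leaves every chunk readable elsewhere.
    num_chunks = max(1, math.ceil(file_size / CHUNK_SIZE))
    return {i: random.sample(chunkservers, REPLICAS) for i in range(num_chunks)}

servers = ["chunkserver-%d" % i for i in range(8)]
for chunk, replicas in place_chunks(200 * 1024 * 1024, servers).items():
    print("chunk %d -> %s" % (chunk, replicas))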
  
Converged and hyper-converged infrastructures bundle multiple parts of datacenter infrastructure into a single, optimized computing unit. Generally speaking, these include compute, storage, networking and software for management, automation and orchestration. These solutions allow enterprises to use simple, modular building blocks of commodity hardware that provide massive, linear scale-out architectures with predictable performance, as the sketch below illustrates.
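To make the modular building block idea concrete, here is a minimal sketch with hypothetical per-node specs of my own choosing (not any vendor's actual sizing), showing how aggregate cluster resources grow linearly as identical nodes are added:

from dataclasses import dataclass

@dataclass
class Node:
    # Hypothetical per-node specs, purely for illustration
    cpu_cores: int = 16
    ram_gb: int = 256
    ssd_tb: float = 1.6
    hdd_tb: float = 8.0

def cluster_totals(node_count, node=Node()):
    # Aggregate resources grow linearly with the number of identical nodes
    return {
        "cpu_cores": node.cpu_cores * node_count,
        "ram_gb": node.ram_gb * node_count,
        "ssd_tb": round(node.ssd_tb * node_count, 1),
        "hdd_tb": round(node.hdd_tb * node_count, 1),
    }

for n in (3, 8, 16):
    print(n, "nodes ->", cluster_totals(n))

The point is simply that you scale out by adding identical blocks rather than scaling up a monolithic array or SAN.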

Nutanix is the most prominent vendor on the market, with thousands of hyper-converged appliances shipped to all types of enterprises, including large-scale web companies. Nutanix uses the Nutanix Distributed File System (NDFS), which builds on the original GFS, since Nutanix's founders were part of the team that produced GFS. However, NDFS is fully customized and optimized for virtualization and its performance and capacity requirements.
  
Nutanix supports vSphere, KVM or Hyper-V as hypervisors, truly enabling the hybrid cloud in a multi-hypervisor brave new world. I'll undoubtedly talk more about that in my next articles.

  
Nutanix provides a highly distributed, shared-nothing software solution that aggregates local HDDs and SSDs to deliver a high-performing, linear scale-out architecture. I will provide technical details in my next articles, but I would like to leave you with some of the amazing features packed into Nutanix, which I will cover as I learn more about the solution.
  

  • Dynamic Cluster Expansion
  • Self-Discovery and Zero cluster downtime
  • Heterogeneous Hypervisors
  • Native Disaster Recovery and Replication
  • Compression
  • De-duplication
  • Software-defined High Availability with Auto-Pathing
  • VM-centric Snapshot and Clones
  • VMware APIs for Array Integration (VAAI) support
  • Rolling Updates with no downtime
  • … and other features that I don’t even know what they are yet!

  
Stay tuned for more articles on the role of hyper-convergence, and more technical articles too.
  
This article was first published by Andre Leibovici (@andreleibovici) at myvirtualcloud.net.