Oct 27 2015

Storage Portfolio Vendors want you to believe Silos are good

I have been meaning to write this article for a while, but a recent post from an employee at a storage portfolio vendor, describing why organizations ought to want to operate their IT infrastructure on siloed storage platforms, made me think hard about organizations' realities and challenges, and about what it takes to deliver applications and services in today's competitive landscape.

It's not difficult to understand that diverse application types have different storage requirements around capacity, performance, uptime and resiliency. Traditionally, the IT industry has dealt with these disparities using technologies and platforms created specifically to handle different types of workloads. Over the years the industry created storage platforms for low-latency applications, platforms for high throughput, platforms for high IOPS, deep-and-cheap storage platforms, object storage platforms and so on. The list goes on and on, sometimes even becoming industry-specific, such as saying that the Banking vertical should use a certain type of storage or that the Media/Entertainment industry should use a different storage solution for media files. Entire industry mantras have been created this way, completely ignoring the reasons behind the original choices. It is not hard to see the complexity and costs associated with managing a multitude of storage platforms, along with the operational burden of maintaining these environments.

There are inherent complexities in a siloed approach, all the way from ordering, deploying and managing the infrastructure to the way organizations have to budget 3 to 5 years in advance and predict what their needs will be; as most IT teams know well, this is nearly impossible nowadays. These silos are often detrimental to change and progress. Every new business initiative requires buy-in from different teams within the datacenter. When it is time to upgrade, upgrades are expensive and complex, often resulting in several hours or days of downtime. While this was acceptable a decade ago, businesses can no longer withstand such inefficiencies.

The industry is changing fast, and while sellers will always be sellers, there is a new breed of companies solving this problem in a much more effective way.

Technology has evolved and better operational models are available: enter hyper-convergence. While legacy storage vendors may claim that they also have hyper-converged offerings, they are categorically portfolio companies that will try to sell you anything, in some cases moving from platform to platform until one fits the budget or business requirement. ABC too expensive? Try NYZ. No? We also have XEP.

Nutanix has no intent to sell anything other than hyper-convergence: on-premises, off-premises, hybrid, but always hyper-convergence, and to be the finest solution on the market. Our customers are already running the most demanding and diverse applications and workloads.


How does that work?


The hyper-converged approach ensures that diverse workload types are able to run efficiently on the exact same platform or cluster, using different node types that may implement distinct compute and storage characteristics while remaining part of the same management solution or domain. Nutanix even allows administrators to manage distinct clusters in different datacenters from the same single pane of glass.
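To make the idea concrete, here is a purely conceptual sketch, written in Python rather than anything Nutanix ships: asymmetric node types pooled into a single cluster, with several clusters visible from one management domain. All class names, node models and numbers are illustrative assumptions.

```python
# Conceptual sketch only: heterogeneous node types in one cluster,
# and multiple clusters under a single management domain.
from dataclasses import dataclass

@dataclass
class Node:
    model: str          # e.g. compute-heavy, storage-heavy, all-flash
    cpu_cores: int
    ssd_tb: float
    hdd_tb: float

@dataclass
class Cluster:
    name: str
    nodes: list

    def raw_capacity_tb(self) -> float:
        # Capacity is pooled across every node type in the cluster.
        return sum(n.ssd_tb + n.hdd_tb for n in self.nodes)

# One cluster mixing asymmetric node types...
ny = Cluster("ny-datacenter", [
    Node("compute-heavy", 48, 3.2, 0.0),
    Node("storage-heavy", 16, 1.6, 40.0),
    Node("all-flash",     32, 12.8, 0.0),
])
# ...and multiple clusters managed from one place.
management_domain = [ny, Cluster("sf-datacenter",
                                 [Node("all-flash", 32, 12.8, 0.0)] * 4)]
for cluster in management_domain:
    print(f"{cluster.name}: {len(cluster.nodes)} nodes, "
          f"{cluster.raw_capacity_tb():.1f} TB raw")
```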


EMC consoles to manage different storage platforms


vs. Nutanix PRISM managing multiple clusters across multiple datacenters



Among the features that enable these diverse applications to run efficiently on a Nutanix cluster are Read Caching, Data Locality, VM Flash Cache, support for asymmetric node types in the same cluster, All-Flash nodes, and more.

VCDX extraordinaire Michael Webster summarized well some of the benefits of running business-critical applications on Nutanix:


Invisible Infrastructure, Reduced Complexity, Reduced Risk: Deploy in minutes and expand on demand; upgrade non-disruptively and transparently at lunchtime; self-healing means no more nights-and-weekends callouts; focus on activities that add more business value.

Consistent, Predictable Performance, Reduced Risk: Because of the way the Nutanix architecture is designed, it provides much more predictable and consistent high performance initially (at deployment) and as the environment grows. No longer does one system monopolize resources to the detriment of other systems: every workload gets its fair share of resources, and performance and resiliency improve as the environment grows. Nutanix also provides much more predictability when components fail, eliminating some of the negative performance impact of component failure, because every node, controller and disk participates in recovery operations; recovery is balanced and consistent, and no single point exists to become a bottleneck (see the back-of-the-envelope sketch after this list). Data Locality ensures that as the environment grows, all workloads have access to the shortest IO path to their data, without adding unnecessary network congestion.

Reduced Project Delivery and Production Defect Risk: One of the greatest benefits of virtualization is greatly improved efficiency of application release cycles, revolutionizing the ability of the IT team to deliver business outcomes (while greatly reducing project timelines and project labour costs). With Nutanix, operations that used to take weeks to a month in a traditional environment (even a virtualized one) can now be done in minutes, without incurring heavy storage consumption thanks to Nutanix data efficiency techniques. Multiple independent, similar applications can be deployed in such a way that they consume almost no more storage than a single environment (a minimal copy-on-write sketch follows below). This makes the economics and performance of Nutanix All-Flash platforms attractive.
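To illustrate the recovery point above with back-of-the-envelope arithmetic (the capacity and throughput figures are hypothetical assumptions, not Nutanix benchmarks): when every surviving node shares the rebuild work, recovery time shrinks roughly linearly with cluster size.

```python
# Hypothetical rebuild-time comparison; numbers are illustrative only.
FAILED_DATA_TB = 8            # data to re-protect after a disk failure
PER_CONTROLLER_MBPS = 500     # assumed rebuild throughput per controller

def rebuild_hours(participants: int) -> float:
    """Hours to re-replicate the failed data when `participants`
    controllers share the rebuild work evenly."""
    total_mb = FAILED_DATA_TB * 1024 * 1024
    return total_mb / (participants * PER_CONTROLLER_MBPS) / 3600

print(f"Dual-controller array, 1 peer rebuilding: {rebuild_hours(1):.1f} h")
print(f"16-node cluster, 15 peers rebuilding:     {rebuild_hours(15):.2f} h")
```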

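Similarly, the storage-efficiency claim for cloned environments can be sketched with a generic copy-on-write model (a toy model, not Nutanix's actual data path): clones share the base image's blocks and consume new space only for blocks they overwrite.

```python
# Toy copy-on-write model: clones share base blocks until they write.
class CowClone:
    def __init__(self, base):
        self.base = base       # shared, read-only blocks
        self.delta = {}        # private blocks this clone has overwritten

    def read(self, block_id):
        return self.delta.get(block_id, self.base.get(block_id))

    def write(self, block_id, data):
        self.delta[block_id] = data   # only now is new space consumed

base_image = {i: f"block-{i}" for i in range(1000)}  # ~1000 shared blocks
clones = [CowClone(base_image) for _ in range(10)]   # 10 dev/test copies
clones[0].write(42, "patched")                       # one block changed

extra = sum(len(c.delta) for c in clones)
print(f"10 clones add only {extra} block(s) on top of the base image")
```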

It's better done than said, and that's why the entire Nutanix team is working hard with many enterprise software vendors to validate their products on the Nutanix platform. The list of officially validated enterprise-critical workloads on Nutanix is rapidly growing and already includes the likes of Microsoft SQL Server and Exchange with SVVP and ESRP (read this article by Josh Odgers on Nutanix ESRP certification), SAP (read this article by Michael Webster), Avaya, MEDITECH, Epic (solution brief), Citrix (landing page), Splunk and many others. Nutanix customers are also running PACS, VNA and other imaging archiving systems storing massive amounts of data, and with the already announced Nutanix Native File Services this type of use case will become even simpler. Check out our partner page here; you will be impressed.

Yes, there are a very few specific corner cases where even an All-Flash hyper-converged solution won't meet the latency requirements across the entire active dataset; here I am talking about very large databases spanning dozens of terabytes with very large active working sets, perhaps sometimes even with the entire dataset requiring sub-millisecond response times. This is certainly not the case for the large majority of workloads, but even so, Nutanix can be used as a scale-out SAN with iSCSI via the Volumes API, enabling those workloads to operate across a number of different storage controllers in multiple nodes and obtaining performance numbers that no dual-controller or even eight-controller storage solution is able to deliver.

Acropolis Volume Management – The Acropolis Volumes API exposes back-end NDFS storage to guest operating systems, physical hosts, and containers through iSCSI. This allows any operating system to access the Nutanix DSF (Distributed Storage Fabric) and leverage its storage capabilities. Read more here.
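For the guest-OS side, here is a minimal sketch of what attaching to such an iSCSI target looks like from a Linux machine, using the standard open-iscsi initiator driven from Python. The portal address and target IQN below are made-up placeholders; the real values come from your own cluster configuration.

```python
# Minimal sketch: attach a Linux host to an iSCSI target via open-iscsi.
# PORTAL and TARGET are placeholder values, not real addresses.
import subprocess

PORTAL = "10.0.0.50:3260"                        # placeholder portal address
TARGET = "iqn.2010-06.com.example:demo-volume"   # placeholder target IQN

# Discover the targets advertised at the portal.
subprocess.run(["iscsiadm", "-m", "discovery", "-t", "sendtargets",
                "-p", PORTAL], check=True)

# Log in; the volume then appears to the OS as a regular block device.
subprocess.run(["iscsiadm", "-m", "node", "-T", TARGET,
                "-p", PORTAL, "--login"], check=True)
```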

The lack of a powerful yet simple solution that allows all types of enterprise applications to run and be managed in a much more effective and efficient way is eroding the traditional storage industry, and much research has demonstrated that the phenomenon is accelerating. That said, not all hyper-converged solutions are created equal!


This article was first published by Andre Leibovici (@andreleibovici) at myvirtualcloud.net
