While setting up a couple of AWS EC2 instances a few days ago, I noticed that despite the higher-level abstraction and orchestration provided by cloud services, I still needed to understand quite a lot about application behavior to properly stand up a solution that would cater to my business. Luckily, I was just playing around with some open-source software and the configuration did not matter that much, but it could have been very different if I had been dealing with production systems and applications.
When setting up EC2 instances, you need to provide networking information and plenty of detail about how storage should perform, including volume types, HDD vs. SSD, and the number of IOPS expected.
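To illustrate the kind of storage detail EC2 asks for up front, here is a sketch of an EBS block-device mapping of the sort you would pass to `aws ec2 run-instances --block-device-mappings`. The device name, volume size, and IOPS figures are illustrative assumptions, not a recommendation:

```json
[
  {
    "DeviceName": "/dev/sdf",
    "Ebs": {
      "VolumeType": "io1",
      "VolumeSize": 100,
      "Iops": 1000,
      "DeleteOnTermination": true
    }
  }
]
```

Even in this tiny fragment you already have to know whether your application needs provisioned-IOPS SSD (`io1`) or a general-purpose or HDD-backed volume type, and how many IOPS it will actually drive – exactly the application knowledge many admins do not have on day one.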
I know many infrastructure admins who would not even know where to start configuring these settings; not because they do not understand the features and metrics, but because they do not own the apps, they are not the DBAs, and they do not know what the requirements will be on Day 2. The easiest way to mitigate the complexity is to select the best-of-breed choice for every storage option and pay the price.
On the other hand, HCI vendors sell themselves as on-premises datacenter simplification solutions, and in some sense they do simplify, because they remove complex 3-tier SAN configurations (zoning, masking, LUNs, RAID, etc.). However, when you look under the covers, you will notice that most HCI solutions on the market were originally architected to test market potential, and for almost all of them, enterprise data services were implemented as an afterthought.
Truth be told, when data services are added as a bolt-on, as an afterthought, it becomes challenging to integrate new features and services efficiently and meaningfully while still eliminating complexity – and a quick look at HCI solutions on the market today proves the point: you will find plenty of knobs and checkboxes for turning features on and off.
I have used this picture before in a different article, but I think it is priceless because it demonstrates very well the complexity of HCI solutions.
In the world of private and hybrid clouds, self-service portals, and higher-level orchestration services, it does not make sense to ask users to identify – and in most cases, make assumptions about – application and data behavior. Users should not have to care whether the data is dedupable, whether to pick RF2 vs. RF3 vs. Erasure Coding, whether the compression delay should be 30 or 60 minutes, or whether checksumming, erasure coding, and compression should remain enabled for a given application.
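To see why the RF2 vs. RF3 vs. erasure coding choice is a real trade-off rather than a checkbox users should have to think about, here is a small back-of-the-envelope sketch. The 4+2 erasure-coding geometry is an illustrative assumption, not any specific vendor's layout:

```python
def storage_overhead(data_copies=None, ec_data=None, ec_parity=None):
    """Raw capacity consumed per byte of usable data.

    Pass data_copies for replication (RF2, RF3, ...), or
    ec_data/ec_parity for an erasure-coded stripe geometry.
    """
    if data_copies is not None:
        # Replication factor: N full copies of every block.
        return float(data_copies)
    # Erasure coding: (data + parity) stripes stored per data stripe.
    return (ec_data + ec_parity) / ec_data

print(storage_overhead(data_copies=2))            # RF2: 2.0x raw per usable byte
print(storage_overhead(data_copies=3))            # RF3: 3.0x
print(storage_overhead(ec_data=4, ec_parity=2))   # 4+2 EC: 1.5x, tolerates 2 failures
```

A 4+2 erasure-coded layout survives the same two simultaneous failures as RF3 at half the raw-capacity cost, which is precisely the kind of analysis that should live in the storage software, not in a user-facing knob.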
If you thought EC2 was complex at the beginning of this article, you are probably now thinking that EC2 is a piece of cake compared to HCI solutions. We do not worry about any of that when using the public cloud, so why should we when using private clouds?
Datrium Open Convergence is more Cloud-Like than HCI
Datrium was built from the ground up to support data services, designed to keep those services running inline, all the time, while providing the best resiliency, tolerance to failures, and durability – better than HCI and much better than EC2 – without compromising performance, stability, resiliency, or user experience. Just read this – Datrium is the most scalable, fastest, and lowest-latency storage solution (converged or not) on the market, beyond doubt.
Datrium DVX has virtually no knobs that need to be adjusted or configured, yet it presents an extensive list of data services, such as Deduplication, Compression, Erasure Coding, Checksumming, End-to-End Encryption, Replication, Snapshotting, Cloning, Compression over Wire, etc. There is also no choosing between fast and slow media – all data follows the application and always resides on locally attached Flash/NVMe for best performance.
From an implementation perspective, the file system always uses distributed erasure coding for reliable data protection against at least two simultaneous disk failures (or all but one server in a cluster), and the software stack uses no more than 20% of host CPU to deliver all data services, always on and always inline. In a spirit of openness and truth, there is exactly one knob – FIPS 140-2 Encryption. (By the way, it is the only converged solution certified for FIPS 140-2.)
Datrium allows for much better scalability, resiliency, and performance than HCI, is simpler than EC2, and its architecture is a game changer in the datacenter space. If you are curious about how Datrium works under the covers, watch this video with Devin Hamilton and Alastair Cooke – it is excellent!
This article was first published by Andre Leibovici (@andreleibovici) at myvirtualcloud.net