Datrium 3.1 Features Overview (Beyond Marketing)

Three months ago we released version 3.0 of our software (here) and announced a boatload of new features, including Red Hat and CentOS virtualization support, Container Persistent Volumes, and an incredible 18 Million IOPS with 8 GB/s random write throughput. Then last month we announced Oracle RAC support (here) via vSphere multi-writer VMDKs.

Today we are making another exciting product announcement: release 3.1, which provides Zero VM Downtime even when all drives in a host fail, accelerates performance-intensive workloads (even further), and extends all-flash beyond primary storage. We are also updating our hardware platform, both data nodes and compute nodes. (Press Release here)

If you are not yet familiar with Datrium and Open Convergence, at the 10,000-foot view we offer a back-end node for persistent data and backups (aka data node), and a front-end compute node where applications run with data locality and flash performance (those can also be your existing servers). Datrium is amazingly simple, without the complex set of dials and knobs that are commonly found in private cloud solutions.

  • Updated Compute Node CN2100
  • Updated Data Node D12X4B
  • The New All-Flash Data Node F12X2
  • IOmark Performance Benchmark
  • StorageReview.com Performance Benchmark
  • Zero VM Downtime
  • Peer Cache Protection

Let’s get to the announcements…

Updated Compute Node CN2100

As always, you may bring your own servers into the solution, but if you prefer, you can use Datrium’s compute nodes with pre-validated performance numbers. The new hardware platform uses the new Intel Skylake processors, adds NVMe support, and packs 2×25 GbE or 4×10 GbE networking.

Updated Data Node D12X4B

You may mix and match different data node generations as part of the same cluster. The new disk-based data node (internally codenamed Arrow) is high-performance and cost-optimized, offering up to 18 Million IOPS, 200 GB/s read throughput, 8 GB/s random write throughput, and ~100 TB of effective capacity. The new model also comes with 25 GbE support.

How does the Datrium D12X4B compare to the competition?

  • To put things in perspective, a 70:30 read-to-write ratio with 8K block sizes gives a direct comparison to XtremIO. In that configuration, DVX delivers 3.3M IOPS; 3.7x the performance of the largest XtremIO. XtremIO Specifications (here).
  • You may compare 32KB reads, where DVX delivers 6.25M IOPS; 17x the largest Pure Storage FlashArray. Pure Storage Specifications (here).
  • You may compare nominal read IOPS, likely 4KB, where DVX delivers 18M IOPS; 2.7x the largest EMC VMAX All Flash. EMC VMAX All Flash Specifications (here).
  • You may compare nominal read IOPS, likely 4KB, where DVX delivers 18M IOPS; 1.8x the largest SolidFire all-flash array. SolidFire All Flash Specifications (here). (The arithmetic behind these multipliers is sketched below.)
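For the curious, here is the arithmetic behind those multipliers as a minimal sketch in Python. The DVX IOPS figures and speedups are the ones quoted above; the competitor numbers are simply back-calculated from those ratios, not taken from the vendors’ spec sheets.

```python
# Back-of-the-envelope check of the comparison ratios quoted above.
# DVX figures are the ones claimed in this post; each competitor figure
# is implied by dividing the DVX number by the quoted speedup.

comparisons = [
    # (workload, DVX IOPS, quoted speedup, competitor)
    ("70:30 r/w, 8K blocks",  3_300_000,  3.7, "largest XtremIO"),
    ("32KB reads",            6_250_000, 17.0, "largest Pure FlashArray"),
    ("nominal (4KB) reads",  18_000_000,  2.7, "largest VMAX All Flash"),
    ("nominal (4KB) reads",  18_000_000,  1.8, "largest SolidFire cluster"),
]

for workload, dvx_iops, speedup, rival in comparisons:
    implied = dvx_iops / speedup
    print(f"{workload:21} DVX {dvx_iops / 1e6:5.2f}M IOPS is "
          f"{speedup}x an implied ~{implied / 1e6:.2f}M IOPS for the {rival}")
```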

The New All-Flash Data Node F12X2

For highly intensive workloads with extreme random-write requirements, we now offer All-Flash Data Nodes (internally codenamed Flarrow). All-Flash data nodes deliver up to a blazing 20 GB/s of random write throughput when four or more nodes are clustered together – that’s 2X the write throughput of the disk-based data nodes.

Ideal use cases include large IoT and Oracle RAC deployments, and anywhere ultra-low, predictable latency is an absolute requirement during cache misses, host failures, cold boots, or degraded conditions. These beasts improve write bandwidth and extend host resilience.

Note that, thanks to data locality, most read IO operations are served by the compute nodes themselves from local SSD or NVMe. Therefore, the 200 GB/s read throughput with up to 128 servers is the same as with disk-based data nodes.

The F12X2 offers 15TB of usable capacity (~50TB effective) and lets you add capacity quickly. The new model also comes with 25 GbE support.

Given the price point, the F12X2, combined with Datrium’s native data deduplication, compression, and inline erasure coding, lets customers efficiently extend the use of flash beyond primary storage while still providing the lowest possible latency – NVMe for primary and SAS/SATA flash for secondary, with 16 GB/s random write throughput.

IOmark Performance Benchmark – Mind-Blowing 10x!

“Datrium is the most scalable, fastest, and lowest-latency storage solution (converged or not) on the market, beyond doubt.”

We already knew our solution performs extremely well, but there’s nothing like a third-party audited benchmark to convince even the most skeptical greybeards. The results are mind-blowing!

In partnership with Dell and IOmark.org, we validated that Datrium achieved 8,000 IOmark VMs on a single Datrium converged platform running 60 servers and 10 data nodes.

Until Datrium, the highest audited IOmark result was the IBM V9000 AFA with 1,600 IOmark VMs, and the highest hyperconverged result was VMware VSAN with Intel Optane SSDs at 800 IOmark VMs. Datrium not only delivers 5X the previous IBM record and 10x the previous hyperconverged record, but also has the lowest latency across all audited platforms.
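As a quick sanity check on the 5X and 10x claims, here is a minimal sketch using only the audited VM counts quoted above:

```python
# Sanity-check the multipliers from the audited IOmark VM counts above.
datrium_vms = 8_000      # Datrium DVX: 60 servers + 10 data nodes
ibm_v9000_vms = 1_600    # previous highest audited array result
vsan_optane_vms = 800    # previous highest hyperconverged result

print(f"vs IBM V9000 record:        {datrium_vms / ibm_v9000_vms:.0f}x")   # 5x
print(f"vs VSAN with Optane record: {datrium_vms / vsan_optane_vms:.0f}x")  # 10x
```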

Read all about the benchmark results here.

A StorageReview.com benchmark is coming too!

We also reached out to the team at StorageReview.com and asked them to run one of their famous benchmarks. I won’t steal their thunder, so let’s wait for the verdict.

Zero VM Downtime

Another improvement in the 3.1 release is the ability to keep VMs and applications running at high performance on a host even when one, many, or all of its local SSDs have failed (Peer Cache protection). Datrium DVX continues to serve I/O from flash-based secondary storage and from other hosts in the cluster, with the performance necessary to keep mission-critical applications running until the VMs are vMotioned to a new host and the local caches warm up, or until the failed SSDs are replaced.

Peer Cache Protection

In DVX, we hold all data in use on flash on the host, and we guide customers to size host flash to hold all the data for their VMDKs. With always-on dedupe/compression for host flash as well, this is feasible: with just 2TB of flash on each host and 3X-5X data reduction, you get 6-10TB of effective flash (DVX supports up to 16TB of raw flash per host). Experience shows this is what our customers actually do: by and large, they configure sufficient flash on the host and see close to a 100% hit rate on host flash.
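To make that sizing arithmetic concrete, here is a minimal sketch; the 2TB host flash and 3X-5X reduction figures come from the example above, and the helper function is purely illustrative, not a Datrium tool.

```python
# Effective host flash = raw host flash x data reduction ratio.
# The post quotes 3X-5X reduction from always-on dedupe/compression
# and a 16TB raw flash ceiling per host.

def effective_flash_tb(raw_tb: float, reduction: float) -> float:
    """Illustrative sizing helper (not a Datrium tool)."""
    return raw_tb * reduction

raw_tb = 2.0  # 2TB of flash per host, as in the example above
for reduction in (3.0, 5.0):
    print(f"{raw_tb:.0f}TB raw at {reduction:.0f}x reduction -> "
          f"{effective_flash_tb(raw_tb, reduction):.0f}TB effective")
# Prints 6TB and 10TB, matching the 6-10TB range quoted above.
```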

However, in most instances, thanks to those data reduction benefits, customers decide to have only one or two flash devices on each server, because that’s more than enough from a capacity and performance standpoint. With previous releases of DVX, if the last available flash device failed, the workload would stop and applications would have to be restarted, manually or via HA, on a different host.

With DVX 3.1 we are introducing the ability to utilize Peer Cache, the flash devices on other hosts, to keep the workload running even if the last available flash device fails, without materially impacting application performance until new SSDs are installed. As with any array, read IO would now have to traverse the network, adding some latency since reads become East <-> West traffic instead of staying local; in that scenario, DVX simply behaves like a SAN.
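Conceptually, the read path with Peer Cache protection behaves like a tiered lookup: local host flash first, then a peer host’s flash over the network, then the data node. The sketch below is my own simplified illustration of that fallback order; the function and data structures are hypothetical and are not Datrium’s actual implementation.

```python
# Hypothetical model of the read fallback order described above.
# Each "cache" is modeled as a plain dict of block_id -> data; the real
# DVX read path is of course far more involved.

def read_block(block_id, local_cache, peer_caches, data_node):
    # 1. Fast path: data locality, served from the host's own SSD/NVMe.
    if block_id in local_cache:
        return local_cache[block_id]

    # 2. Peer Cache: local flash failed or missed, so fetch the block
    #    from another host's flash (one extra East <-> West network hop).
    for peer in peer_caches:
        if block_id in peer:
            return peer[block_id]

    # 3. Last resort: read from the data node, just like a traditional SAN.
    return data_node[block_id]

# Example: host flash has failed (empty local cache), peers still serve reads.
print(read_block("blk-42", {}, [{"blk-42": b"data"}], {"blk-42": b"data"}))
```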

This article was first published by Andre Leibovici (@andreleibovici) at myvirtualcloud.net
