
Cache – The Datrium DVX Worst Case is Someone Else’s Best Case

A couple of times I have been asked about caching performance with Datrium DVX. It’s intuitive for people without an understanding of the technology to think that DVX performance is contingent entirely on the host cache, and that if nodes go down the caches would have to be rewarmed, or a full cache integrity check would be necessary, as with ZFS and L2ARC. In the L2ARC case, performance can drag for hours until the cache is repopulated.

 

That is not the case with DVX…

 

In DVX, we hold in-use data on flash on the host. Moreover, we guide customers to size host flash to hold all data for their VMDKs. With always-on dedupe/compression for host flash as well, this is feasible: with just 2TB of flash on each host and 3X-5X data reduction you can have 6-10TB of effective flash capacity. (DVX supports up to 16TB of raw flash on each host.) Experience proves this is in fact what our customers do: by and large, our customers configure sufficient flash on the host and get close to a 100% hit rate on the host flash.
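
For readers who like to sanity-check the arithmetic, effective capacity is simply raw flash multiplied by the data reduction ratio. The snippet below is only an illustration of that maths, not a sizing tool:

```python
# Illustrative arithmetic only, not a DVX sizing tool.
raw_flash_tb = 2.0            # raw flash per host, as in the example above
for reduction in (3.0, 5.0):  # the 3X-5X dedupe/compression range
    effective_tb = raw_flash_tb * reduction
    print(f"{raw_flash_tb:g} TB raw x {reduction:g}X reduction = {effective_tb:g} TB effective")
```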

With any array, you have to traverse the network, and with any modern SSD the network latency can be an order of magnitude higher than the device access latency. Flash does belong in the host, especially if you are talking about NVMe drives with sub-50usec latency.
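
To make the "order of magnitude" point concrete, here is a back-of-the-envelope comparison. The network figure below is an assumed ballpark for a typical round trip to an array, not a measured DVX number:

```python
# Ballpark figures in microseconds; the network round trip is an assumption, not a measurement.
nvme_read_us = 50            # sub-50usec NVMe device latency, per the text
network_round_trip_us = 500  # assumed network hop to an array, about an order of magnitude higher

local_us = nvme_read_us
remote_us = nvme_read_us + network_round_trip_us
print(f"local NVMe read:   ~{local_us} us")
print(f"read over network: ~{remote_us} us ({remote_us / local_us:.0f}x the local latency)")
```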

 

 

What about a host failure?

Because all data is fingerprinted and globally deduplicated, when a VM moves between servers there is a very high likelihood that most data blocks shared with similar VMs (Windows/Linux OS, SQL Server, etc.) are already present on the destination server, so no data movement is necessary for those blocks.
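
As a conceptual illustration only (the hashing scheme and class below are my own assumptions, not the DVX implementation), content fingerprinting is what lets a destination host recognise blocks it already holds:

```python
import hashlib

def fingerprint(block: bytes) -> str:
    """Content fingerprint of a data block (SHA-256 here, purely for illustration)."""
    return hashlib.sha256(block).hexdigest()

class HostFlashCache:
    """Toy model of a host's flash cache keyed by block fingerprint."""
    def __init__(self):
        self._blocks = {}  # fingerprint -> block data

    def insert(self, block: bytes) -> None:
        self._blocks[fingerprint(block)] = block

    def has(self, block: bytes) -> bool:
        return fingerprint(block) in self._blocks

# Two VMs built from the same guest OS share identical blocks: once one VM's blocks
# are in the cache, a VM that lands on this host finds them already local.
host = HostFlashCache()
os_block = b"common Windows/Linux OS block"
host.insert(os_block)      # written by a VM already running on this host
print(host.has(os_block))  # True: no data movement needed for the incoming VM's copy
```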

 

Flash-to-Flash

DVX also uses a technology we call F2F (flash-to-flash): the target host fetches data from other hosts’ flash and moves it over to the destination host when necessary. DVX can read data from host RAM, host flash, or Data Node drives (or, during failures, from Data Node NVRAM), and it optimises reads to retrieve data from the fastest media. You lose data locality for the period during which this move happens, but it is restored reasonably quickly.
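
The read path can be thought of as walking the tiers from fastest to slowest and repopulating local flash along the way. The sketch below is my own simplification of that idea; the names and structure are not DVX internals:

```python
# Conceptual sketch of a tier-ordered read path (names and structure are assumptions).
def read_block(fp, host_ram, host_flash, peer_flash, data_node):
    """Return (source, block), trying the fastest media first."""
    tiers = (("host RAM", host_ram),
             ("host flash", host_flash),
             ("peer flash (F2F)", peer_flash),
             ("Data Node", data_node))
    for source, tier in tiers:
        if fp in tier:
            block = tier[fp]
            if source not in ("host RAM", "host flash"):
                host_flash[fp] = block  # repopulate local flash to restore data locality
            return source, block
    raise KeyError(fp)

# A block that only exists on a peer host's flash is fetched flash-to-flash and cached
# locally, so the second read of the same block is served from local flash.
ram, flash = {}, {}
peer, node = {"fp1": b"data"}, {"fp1": b"data"}
print(read_block("fp1", ram, flash, peer, node)[0])  # peer flash (F2F)
print(read_block("fp1", ram, flash, peer, node)[0])  # host flash
```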

 

VM Restart and vMotion

In the uncommon case, i.e., after a VM restart on a new host (or after a vMotion) when the data is not available in any host’s flash, DVX performance will be more like that of a conventional array (or of other HCI systems without data locality). However, DVX optimises the storage format in the Data Node drive pool for precisely this situation. VMs tend to read vDisks in large consecutive clumps, usually reading data that was written together. Large clumps of the most current version of the vDisk are stored together and are uploaded to the host as a contiguous stream upon the request of any individual block, providing a significant degree of read-ahead as the vDisk is accessed. Subsequent reads of the same blocks, of course, are retrieved from local flash rather than from the Data Node.
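
The clump behaviour is easy to picture: requesting one block pulls its whole clump up to the host, so neighbouring blocks that were written together are already local when they are read next. The sketch below assumes a toy clump size and dictionary layout; it is not the actual Data Node storage format:

```python
# Conceptual sketch of clump-based read-ahead (clump size and layout are assumptions).
CLUMP_SIZE = 4  # blocks per clump, illustrative only

def read_from_data_node(block_id, data_node, host_flash):
    """Fetch the whole clump containing block_id into host flash, then serve the block."""
    clump_start = (block_id // CLUMP_SIZE) * CLUMP_SIZE
    for b in range(clump_start, clump_start + CLUMP_SIZE):
        if b in data_node:
            host_flash[b] = data_node[b]  # neighbouring blocks arrive as one contiguous stream
    return host_flash[block_id]

data_node = {b: f"block-{b}".encode() for b in range(8)}
host_flash = {}
read_from_data_node(5, data_node, host_flash)
print(sorted(host_flash))  # [4, 5, 6, 7]: read-ahead for blocks that were written together
```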

 

Peer Cache

Furthermore, DVX also has a critical feature called ‘Peer Cache’ that adds enhanced resiliency to the platform and protects applications even if all local flash devices on a host fail.
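
In spirit, that means a read can still be served at flash speed from a peer host even when the local device is gone. The fallback order below is my own simplification for illustration, not the actual Peer Cache implementation:

```python
# Minimal sketch of the peer-cache idea: if the local flash device has failed, serve
# reads from a peer host's flash before falling back to the Data Node. This is an
# assumption-based simplification, not the actual Peer Cache implementation.
def read_with_peer_fallback(fp, local_flash, peer_flash, data_node):
    if local_flash is not None and fp in local_flash:
        return "local flash", local_flash[fp]
    if fp in peer_flash:
        return "peer cache", peer_flash[fp]  # still a flash-speed read
    return "Data Node", data_node[fp]        # durable copy always remains available

local_flash = None                           # simulate a failed local flash device
peer_flash = {"fp1": b"data"}
data_node = {"fp1": b"data"}
print(read_with_peer_fallback("fp1", local_flash, peer_flash, data_node)[0])  # peer cache
```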

 

 

 

The DVX worst case is someone else’s best case.

 

This article was first published by Andre Leibovici (@andreleibovici) at myvirtualcloud.net.

 
