View Storage Accelerator and View Storage Tiering [Unsupported]

I recently received a question from a colleague about View Storage Accelerator (also known as CBRC) not being supported when View Replica Tiering is enabled. He also pointed to an article published on the End-User Computing Blog that clearly states "Use of View Storage Accelerator is not supported when View Replica Tiering is enabled".

I researched internally for the reasons behind the 'not supported' statement and found two:


  1. There is not much added value in using View Storage Tiering together with View Storage Accelerator. During performance tests, serving master image blocks from host RAM rather than from SSD did not add a whole lot of value.
  2. Based on the logic in item 1 above, VMware has not qualified the configuration for support.


The VMware View Tiering technology was created to allow View administrators to place replica disks on Solid State Disks. With the replica serviced from high-performing disk devices, all Linked Clone desktops benefit from the improved IO performance and throughput, particularly during boot storms.

On another note, I have demonstrated in the past that during steady-state operations the replica disks are not heavily utilized (here and here).

With the introduction of VMware View Storage Accelerator (find more about it here), the most common blocks across all desktops on a host are serviced from RAM. From a performance perspective, the idea behind VMware View Tiering no longer made sense, and that is why the two technologies have not been qualified together.

All that said, the ability to use a dedicated replica datastore via VMware View Tiering can be a huge storage capacity saving feature, especially for the new all-flash arrays, where capacity utilization is far more critical than performance. When using Linked Clones with a dedicated replica datastore, only a single replica is created per pool of desktops (up to 1,000 desktops per pool). The picture below demonstrates the difference between the two models.




In the scenario above only 3 datastores are in use and the replica is approximately 25GB. Therefore, we have 75GB of storage utilization without a dedicated replica datastore versus 25GB with one. The gap widens with the number of desktop pools and datastores in use.

In some cases it can be a LOT worse depending on the configuration. The picture below demonstrates two desktop pools with two replicas each. There are 16 replicas in scenario one against 4 replicas in scenario two when using a dedicated datastore: 400GB vs. 100GB.
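The arithmetic behind both examples can be sketched with a small helper. This is an illustrative back-of-the-envelope calculation using the numbers from this article (a ~25GB replica), not a VMware tool; without a dedicated replica datastore, a copy of each replica lands on every linked-clone datastore.

```python
REPLICA_SIZE_GB = 25  # approximate replica size used in the examples above

def replica_capacity_gb(pools, replicas_per_pool, datastores, dedicated):
    """Total space consumed by replica disks.

    Without a dedicated replica datastore, each replica is copied to
    every linked-clone datastore; with one, a single copy per replica
    lives on the dedicated tier.
    """
    copies = 1 if dedicated else datastores
    return pools * replicas_per_pool * copies * REPLICA_SIZE_GB

# First example: one pool, one replica, 3 datastores
print(replica_capacity_gb(1, 1, 3, dedicated=False))  # 75 GB
print(replica_capacity_gb(1, 1, 3, dedicated=True))   # 25 GB

# Second example: two pools with two replicas each, 4 datastores
print(replica_capacity_gb(2, 2, 4, dedicated=False))  # 400 GB (16 replicas)
print(replica_capacity_gb(2, 2, 4, dedicated=True))   # 100 GB (4 replicas)
```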




Another important note is that accessing SSD vs. RAM is roughly a difference of milliseconds vs. microseconds. Therefore, during boot storms View Storage Accelerator should perform faster, with blocks of similar content served from RAM.

The downside of View Storage Accelerator is the amount of host RAM consumed (normally 2GB) to service common desktop blocks from RAM. This RAM consumption ultimately decreases the consolidation ratio. On a host running desktops with 1GB of RAM each and TPS saving 30%, that 2GB could otherwise accommodate roughly 3 additional desktops per host. Not many, but enough to drive business decisions in large environments.
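The consolidation math works out as follows; a quick sketch using the article's assumptions (2GB cache, 1GB desktops, 30% savings from Transparent Page Sharing):

```python
CACHE_GB = 2.0        # host RAM reserved for the CBRC cache
DESKTOP_RAM_GB = 1.0  # RAM configured per desktop
TPS_SAVINGS = 0.30    # fraction of desktop RAM reclaimed by page sharing

# With TPS, each desktop effectively consumes only 70% of its configured RAM.
effective_ram_per_desktop = DESKTOP_RAM_GB * (1 - TPS_SAVINGS)  # 0.7 GB

# Desktops the cache RAM could otherwise host (2 / 0.7 ≈ 2.86)
extra_desktops = round(CACHE_GB / effective_ram_per_desktop)
print(extra_desktops)  # 3
```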

View Storage Accelerator will also utilize additional storage capacity to create the per-VM digest file, which is built with the SHA-1 cryptographic hash function. The estimated size of each digest file is roughly 5MB per GB of VMDK size with hash-collision detection turned off (the default), and 12MB per GB of VMDK size with hash-collision detection turned on.
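Those ratios make the digest overhead easy to estimate. A minimal sketch (the helper name is mine, not a VMware API):

```python
def digest_size_mb(vmdk_size_gb, collision_detection=False):
    """Estimated CBRC digest file size, using the rule-of-thumb ratios:
    ~5 MB per GB of VMDK with hash-collision detection off (the default),
    ~12 MB per GB with it on."""
    mb_per_gb = 12 if collision_detection else 5
    return vmdk_size_gb * mb_per_gb

print(digest_size_mb(25))        # 125 MB for a 25 GB replica (default)
print(digest_size_mb(25, True))  # 300 MB with collision detection on
```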



Something else to consider when not using View Storage Accelerator is the amount of IO the replica disk must serve during a boot storm. Your all-flash array may be able to drive 300K IOPS, but if the replica is effectively sitting on a couple of SSDs you will only get the performance of those 2 drives, driving down performance. In this case View Storage Accelerator would be a big plus. If you decide not to use it, make sure the replica disks are spread across a large number of drives to increase total IO and throughput density.
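The bottleneck effect can be modeled simply: the replica's achievable read rate is the lesser of what the array can do in aggregate and what the drives actually backing the replica can deliver. The per-drive IOPS figure below is a hypothetical assumption for illustration only:

```python
def effective_boot_iops(array_iops, replica_drives, iops_per_drive):
    """Boot-storm reads are capped by the drives actually holding the
    replica, not by the array's aggregate capability."""
    return min(array_iops, replica_drives * iops_per_drive)

# Assuming ~50K read IOPS per SSD (hypothetical figure):
print(effective_boot_iops(300_000, 2, 50_000))   # 100000: 2 drives are the limit
print(effective_boot_iops(300_000, 10, 50_000))  # 300000: the array is the limit
```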


Please note that this deployment type is not supported by VMware. I recommend testing in a development environment. If you decide to test or implement, you are doing it at your own risk.


Now that you understand the pros and cons of both technologies, you should know that there is no hard-coded check in VMware View that will prevent you from using View Storage Tiering and View Storage Accelerator together. VMware decided not to support the combination simply because, according to its tests, the added value from a performance perspective was not there.


This article was first published by Andre Leibovici (@andreleibovici) at
