The View Storage Accelerator (VSA, also known as CBRC) feature is part of vSphere 5 and later and is used by VMware View. VSA helps address some of the performance bottlenecks and the increased storage cost of VDI. CBRC is a 100% host-based, RAM-based caching solution that reduces the read IOs issued to the storage subsystem, improving scalability while remaining completely transparent to the guest OS.
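The core idea can be illustrated with a toy model. CBRC's internal cache format is not public, so the class below is purely a conceptual sketch: blocks are cached by a hash of their content, so identical blocks shared by many desktops (common OS files, for example) occupy a single RAM entry, and repeated reads never reach the array.

```python
import hashlib

class ContentAddressedReadCache:
    """Toy model of a CBRC-style host-side read cache. Blocks are keyed
    by a content hash, so identical blocks from different desktop VMs
    share one cache entry."""

    def __init__(self):
        self.cache = {}    # content hash -> block data
        self.hits = 0
        self.misses = 0

    def read(self, backing_store, digest_index, lba):
        # The digest file maps a logical block address to its content hash.
        digest = digest_index[lba]
        if digest in self.cache:
            self.hits += 1          # served from RAM, no IO to the array
            return self.cache[digest]
        self.misses += 1            # fetch from storage, then cache it
        block = backing_store[lba]
        self.cache[digest] = block
        return block

# Two desktops reading the same OS block: only the first read hits storage.
block = b"common OS block"
index = {0: hashlib.sha1(block).hexdigest()}
cache = ContentAddressedReadCache()
cache.read({0: block}, index, 0)    # miss: goes to the array
cache.read({0: block}, index, 0)    # hit: served from host RAM
```

This is why boot storms benefit so much: hundreds of clones read largely identical OS blocks, and after the first desktop boots, most of those reads are served from the host cache.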
The VSA feature was primarily designed to help with read-intensive IO storms, such as OS boots, reboots, and A/V scans; administrators typically see a significant reduction in the peak read IO issued to the array for desktop workloads. (Here is a good article on View Storage Accelerator Performance Benchmark.)
When VMware View Storage Accelerator is enabled (for the OS disk, or for both the OS and persistent disks), a per-VMDK digest file is created to store hash information about the logical blocks of the VMDK. These hashes are computed with the SHA-1 cryptographic hash function.
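Conceptually, building a digest file means hashing the disk one logical block at a time. The sketch below assumes a 4 KB block size purely for illustration; CBRC's actual on-disk digest layout and block granularity are not publicly documented.

```python
import hashlib

BLOCK_SIZE = 4096  # assumed block size, for illustration only

def build_digest(vmdk_bytes, block_size=BLOCK_SIZE):
    """Return one SHA-1 hash per logical block, mimicking the kind of
    per-block digest CBRC stores alongside each VMDK."""
    digests = []
    for offset in range(0, len(vmdk_bytes), block_size):
        block = vmdk_bytes[offset:offset + block_size]
        digests.append(hashlib.sha1(block).hexdigest())
    return digests

# Two identical (all-zero) blocks produce identical digests, which is
# exactly what lets the cache deduplicate them at read time.
digests = build_digest(bytes(2 * BLOCK_SIZE))
```

Because each SHA-1 digest is only 20 bytes regardless of block size, the digest file is a small fraction of the VMDK it describes; the cost is the time spent reading and hashing the entire disk, which is what stretches out refresh/recompose operations.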
In large VMware View deployments with many desktop pools and multiple datastores, pool refresh/recompose operations can take a long time because CBRC must generate digest files for all the new replicas. I have previously discussed the relationship between View Storage Accelerator and View Storage Tiering in View Storage Accelerator and View Storage Tiering [Unsupported].
When View Storage Accelerator is in use, each replica disk in each datastore of a given desktop pool is hashed and has a digest file created. This process can add hours to refresh/recompose operations. If the maintenance window configured in VMware View is not long enough, administrators can end up with desktops that are never refreshed or recomposed.
One way to avoid the creation of multiple replica disks is to ensure the deployment uses the maximum validated number of desktops per datastore. Using fewer VMFS datastores or NFS exports reduces the number of replicas that must be created and hashed. I explain the Linked Clone behavior in my article VMware View 4.5 Linked Cloning explained.
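The arithmetic behind this recommendation is simple: one replica per pool per datastore must be hashed, so the digest workload scales with the product of the two. The throughput figure below is an assumed placeholder, not a measured CBRC number.

```python
def replicas_to_hash(pools, datastores_per_pool):
    # One replica per pool per datastore gets a digest after a recompose.
    return pools * datastores_per_pool

def digest_hours(replicas, replica_gb, hash_gb_per_min):
    # hash_gb_per_min is an assumed throughput, for illustration only.
    return replicas * replica_gb / hash_gb_per_min / 60

# 10 pools spread over 4 datastores each vs. consolidated onto 1:
many = replicas_to_hash(10, 4)   # 40 replicas to hash
few = replicas_to_hash(10, 1)    # 10 replicas to hash
```

With hypothetical 20 GB replicas hashed at 2 GB/min, consolidating from four datastores to one per pool cuts the digest work from roughly 6.7 hours to under 1.7 hours; the exact numbers will differ per environment, but the 4x reduction in replicas is what matters.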
If you are already in a situation where recompose operations run past the maintenance window, there are a couple of unsupported workarounds that help reduce the time it takes to create the digest files for the replicas:
The steps outlined below are not supported by VMware. I recommend testing them in a development environment first. If you decide to test or implement them, you do so at your own risk.
1. Add all the master VMs of the various pools into a single manual pool
2. Enable CBRC/View Storage Accelerator on the manual pool
3. After the pool is created (the digests for these VMs will have been generated and stored on disk), remove the pool but preserve the VMDKs
4. Update the master VMs with a service pack or other update
5. Snapshot the master images
6. Recompose the pools
The workflow from 1 to 6 above cuts down recompose time because the digests already exist, so little digest-creation work remains. Steps 1 through 3 can be performed as a separate task ahead of time, before either provisioning a pool or running an administrative operation such as a recompose, moving the digest-creation cost outside the maintenance window.
Another, perhaps simpler, unsupported way to solve the recompose issue is to ensure there is only a single replica for each desktop pool. This enters another unsupported scenario: using View Storage Accelerator in conjunction with View Storage Tiering, which enables the use of a dedicated replica datastore.
Fortunately, I have recently discussed this very subject in View Storage Accelerator and View Storage Tiering [Unsupported]. In this scenario only a single replica needs to be hashed, reducing the total time required to prepare the environment for Linked Clone operations.
For more information on VSA please refer to: