Samsung disrupts VDI and DaaS architecture models

It is just amazing when a new technology comes together to disrupt deployment and architecture models. I have previously stated that I am a supporter of persistent desktops: they are easier to manage, admins can use the tools they are already familiar with, and they make sense nowadays. Non-persistent desktops, linked clones, and alternative image and app management solutions are not a requirement for the vast majority of VDI and DaaS use-cases, since recent storage technology advancements make it possible to utilize fully provisioned persistent clones without capacity and performance penalties. I recommend reading my article “Open letter to non-persistent VDI fanboys…”.

That said, a number of different deployment models may be utilized according to each individual use-case. In this article I want to focus solely on non-persistent desktops.

As far as I can tell, and according to what I see in the market, Fusion IO and Virident are the two most popular integrated NAND flash PCI devices used for local-host VDI deployments. Enterprises utilize NAND flash PCI cards to host a large number of desktops concurrently on a single host, providing high throughput and IOPS without the requirement for a Storage Area Network (SAN). These storage technologies offer moderate capacity and high throughput and IOPS in a single bundled PCI card.

However, these PCI devices are often not exclusively architected for VDI workloads. What happens is that they frequently run out of capacity before they run out of IO operations and throughput.

To demonstrate that, here is a back-of-napkin calculation showing how these devices are not really suited for VDI. Take the Fusion ioDrive2 Duo 1.2TB, which costs approximately $13,900.00 and supports approximately 250K random write IOPS. Please note that the same reasoning applies to all other PCI SSDs.

Fusion IO Specs –
Fusion IO price List –


To design a solution without getting into the overly expensive server architectures that support up to 1TB RAM, I will select a quad-socket host with 8 cores per socket, running 6 VMs per core at 400MHz average each. Assuming approximately 30% savings from vSphere transparent page sharing (TPS), the server will require approximately 300GB RAM.

With this configuration it is possible to fit approximately 192 VMs per host, each of which will enjoy a healthy total of approximately 1,200 simultaneous IOPS. That’s a lot of IO performance!

From a capacity perspective I have allowed a maximum 4GB delta file for each non-persistent VM, which, when summed with the vSphere vswap and log files, will utilize a total of 1.28TB. This is slightly more than the total capacity available on the PCI card (1.2TB), but as I said, this is a back-of-napkin calculation.
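The back-of-napkin math above can be sketched in a few lines. The per-VM RAM allocation (2GB) and the per-VM vswap/log overhead (~2.83GB) are my assumptions, back-solved from the ~300GB and 1.28TB figures in the article; only the host shape, TPS savings, and card spec come straight from the text.

```python
# Back-of-napkin sizing for a non-persistent VDI host on a single PCI flash card.
SOCKETS = 4
CORES_PER_SOCKET = 8
VMS_PER_CORE = 6

vms_per_host = SOCKETS * CORES_PER_SOCKET * VMS_PER_CORE   # 192 VMs

RAM_PER_VM_GB = 2          # assumed desktop RAM allocation
TPS_SAVINGS = 0.30         # vSphere transparent page sharing estimate
ram_needed_gb = vms_per_host * RAM_PER_VM_GB * (1 - TPS_SAVINGS)   # ~269 GB, i.e. ~300 GB

DELTA_GB = 4               # max delta disk per non-persistent VM
OVERHEAD_GB = 2.83         # assumed vswap + logs per VM
capacity_tb = vms_per_host * (DELTA_GB + OVERHEAD_GB) / 1024       # ~1.28 TB total

CARD_IOPS = 250_000        # ioDrive2 Duo random write spec
iops_per_vm = CARD_IOPS // vms_per_host                            # ~1,300 IOPS per VM

print(vms_per_host, round(ram_needed_gb), round(capacity_tb, 2), iops_per_vm)
```

Note the card's 1.2TB capacity is exhausted at ~1.28TB of demand while the IOPS budget is barely touched, which is the capacity-before-performance point made above.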

If you are wondering how I am doing these calculations, please refer to my VDI calculator.

Therefore, from a capacity perspective this configuration is good, and for performance it’s fantastic! However, I ask you: what is the total cost to get all this IO and throughput performance? There may be a reasonable use-case for it.

If you do have a use-case, please, by all means, utilize these remarkable pieces of technology. There are also other great solutions that deliver an awesome user experience, like Atlantis ILIO, which utilizes RAM to deliver up to 1 million IOPS per host at microsecond latency.


Enter the new Samsung SSD 840 EVO. The new 2.5-inch Samsung SATA drive uses 19nm flash memory, boasts a total usable capacity of 1TB, and delivers up to 90,000 IOPS (I still need to understand read/write ratios and real IOPS with a VDI-like workload). The IO performance is much lower than the aforementioned PCI SSD devices, but so is the price tag. The 840 EVO retails for ~$650.00.

These are consumer-grade SSD drives, but I ask you: if your desktops are non-persistent, can be automatically re-generated on a different host, and have Windows profiles re-loaded from a central NAS, do you really care if they fail? OK, maybe you do, but even if you add a second EVO drive for resiliency the solution would still be much cheaper than utilizing PCI SSD class-memory devices.

In this new scenario users will not get the amazing ~1,200 IOPS per VM, but they will get a more modest ~400 IOPS, which is still a LOT of IOPS per user. The demanding Windows interactive experience will still deliver amazing performance.
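To make the cost argument concrete, here is a rough cost-per-IOPS comparison using the approximate street prices and vendor-spec IOPS quoted above. These are spec-sheet numbers, not measured VDI-mix results, so treat the ratios as order-of-magnitude only.

```python
# Rough cost-per-IOPS comparison: PCI flash card vs. consumer SATA SSD.
FUSION_PRICE = 13_900.00   # Fusion ioDrive2 Duo 1.2TB, approximate
FUSION_IOPS = 250_000      # random write spec

EVO_PRICE = 650.00         # Samsung SSD 840 EVO 1TB, approximate
EVO_IOPS = 90_000          # vendor spec; a real VDI mix will be lower

VMS = 192                  # desktops per host, from the sizing above

fusion_per_vm = FUSION_IOPS // VMS    # ~1,300 IOPS per VM
evo_per_vm = EVO_IOPS // VMS          # ~470 IOPS per VM

fusion_cost_per_kiops = FUSION_PRICE / (FUSION_IOPS / 1000)   # ~$55.60 per 1K IOPS
evo_cost_per_kiops = EVO_PRICE / (EVO_IOPS / 1000)            # ~$7.22 per 1K IOPS

print(fusion_per_vm, evo_per_vm)
print(round(fusion_cost_per_kiops, 2), round(evo_cost_per_kiops, 2))
```

Even doubling the EVO for resiliency, the consumer drive delivers its IOPS at a small fraction of the PCI card's cost per IOPS, while still leaving each desktop a healthy budget.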

Next time you architect a non-persistent VDI/DaaS solution, consider the total cost of the solution and the users’ performance requirements. VDI is becoming cheaper, and some solutions are already pushing total hardware costs below $120-$150 without sacrificing user experience.


Disclaimer: Any views or opinions expressed in this article are my own, not my employer’s. The content published here is not read, reviewed, or approved by VMware and does not necessarily represent or reflect the views or opinions of VMware or any of its divisions, subsidiaries, or business partners.

This article was first published by Andre Leibovici (@andreleibovici) at

1 comment


    • Derek on 08/08/2013 at 2:31 pm

    One issue with using consumer-grade SSDs is that tier-1 servers (HP, Cisco, etc.) will not officially support such drives, and they physically won’t connect without the vendor’s proprietary drive carrier. Server firmware also may not recognize or properly use the drive even if you did manage to get it physically connected. This could be an option for white-box servers.

