
Dec 23 2012

My Work & Home Lab Environments


This is one of those geek posts that I think the community will find interesting. I am frequently asked about the lab environment I use for prototyping End User Computing solutions and for general VMware product learning and testing. People want to know what physical hardware and logical setup I use to work with multiple VMware products.

The configuration is rather simple but powerful enough for what I need to get done on a day-to-day basis. I have under my desk (literally! – I call it “my cloud”) two HP ProLiant ML370 G6 servers, each with 12 cores at 2.80GHz and 192GB of DRAM. The servers are connected to a Gigabit switch (also under my desk) with two NICs each.

I don’t have a shared storage array available to me, so I presented a RAID group of local spinning hard disks in one of the hosts through a Nexenta Community Appliance. Nexenta exposes NFS shares to both servers, which run vSphere 5.1.
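
For reference, this is roughly how an NFS share gets mounted as a datastore from the ESXi 5.x command line; the server address, share path, and datastore name below are placeholders, not my actual values:

    # Mount a Nexenta NFS share as a datastore on ESXi 5.x
    esxcli storage nfs add --host=192.168.1.50 --share=/volumes/vol1/nfs01 --volume-name=nexenta-nfs01

    # Verify the datastore is mounted
    esxcli storage nfs list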

One of the big advantages of Nexenta Community Edition is the ability to assign host DRAM as cache. In my case I have 16GB of DRAM assigned to the virtual appliance. By the way, did I mention Nexenta Community Edition is free?
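
Under the hood this cache is the ZFS ARC, which by default grows to consume most of the appliance’s memory. If you ever need to cap it, the tunable lives in /etc/system inside the appliance; the 12GiB value below is just an illustrative number, not my configuration:

    * /etc/system -- cap the ZFS ARC at 12 GiB (0x300000000 bytes),
    * leaving some memory headroom for the appliance OS
    set zfs:zfs_arc_max = 0x300000000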

Despite having a large cache, I was still running into IO performance issues due to the slowness of the spinning disks underneath. Enter the Virident FlashMAX II (I previously wrote about the Virident PCIe card here). The team at Virident was kind enough to provide me with one of their amazingly fast Storage Class Memory (SCM) cards in a PCIe form factor (picture below).

Initially I was leveraging another two features of the Nexenta Community Appliance: ZIL and L2ARC. The ZFS intent log, or ZIL, stores synchronous writes on a fast device until they are safely committed to the main disk pool. By using the Virident device as a dedicated ZFS log device, I was able to accelerate synchronous write performance. The second feature, L2ARC, is a second level for the ARC memory cache (the 16GB of DRAM assigned to the Nexenta appliance) and acts as an additional read cache device.
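
In ZFS terms, both features are attached to an existing pool with zpool; the pool name “tank” and the device names below are placeholders for whatever your setup uses:

    # Add the flash device as a dedicated log device (SLOG) for the ZIL
    zpool add tank log c2t1d0

    # Add another flash device/partition as an L2ARC read cache
    zpool add tank cache c2t2d0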

I soon realized that with the amount of IOPS the Virident card was able to deliver, I didn’t need the ZIL or L2ARC devices at all. So I moved the entire data pool from the local spinning disks to the Virident card itself. That worked well and really accelerated the entire storage stack; however, with the PCIe card I only had 555GB of usable capacity. Enter another two Nexenta features: compression and de-duplication. With compression and de-duplication enabled I was able to fit approximately 1.8x the raw amount of data.
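
Both features are ordinary per-dataset ZFS properties and are trivial to toggle; again, “tank” below stands in for the actual pool name:

    # Enable compression (LZJB by default) and de-duplication on the pool
    zfs set compression=on tank
    zfs set dedup=on tank

    # Check how much space the features are saving
    zfs get compressratio tank
    zpool list tank    # the DEDUP column shows the de-duplication ratio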

Truth be told, I had some issues with the de-duplication feature as my data grew, and the Nexenta team asked me to disable it and rely only on LZJB compression. So I would not plan a production Nexenta deployment around de-duplication.
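
Backing out is a matter of flipping the same properties. One caveat worth knowing: disabling dedup only affects newly written blocks, so existing data stays de-duplicated until it is rewritten:

    # Stop de-duplicating new writes; existing deduped blocks remain until rewritten
    zfs set dedup=off tank
    zfs set compression=lzjb tank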

 

Here is my little screaming Virident FlashMAX II in host number 1:

[Photo: Virident FlashMAX II PCIe card installed in host 1]

This is a high-level overview of the architecture I just described:

[Diagram: high-level lab architecture]

On top of the physical hosts running vSphere 5.1 I created four nested ESXi VMs, each with 32GB of RAM. In the picture below you will also find a vCenter Appliance to manage the physical hosts, another vCenter Appliance to manage a virtual vSphere 5.0 environment, and yet another one to manage a 5.1 environment.
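
For anyone wanting to reproduce the nested setup: running ESXi inside a VM requires exposing hardware virtualization to the guest. On vSphere 5.1 this is a per-VM setting that needs virtual hardware version 9; a commonly used pair of .vmx entries is shown below (the first presents the guest as ESXi 5.x, the second passes Intel VT-x/AMD-V through to the nested hypervisor). The vSwitch port group also needs promiscuous mode enabled for the nested VMs’ traffic to flow:

    guestOS = "vmkernel5"
    vhv.enable = "TRUE"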

Why do I need so many different environments? Because whenever I am working on a prototype, VMware is always one step ahead of me. In this case, I was working on a prototype with vCloud Director 1.5 when vCloud Director 5.1 was released, and I had to port the entire solution to vSphere 5.1. Now I keep both environments around for compatibility purposes.

[Screenshot: lab inventory in vCenter]

This is an example of what I am running in one of the virtual vCenter Appliances.

[Screenshot: workloads running in one of the virtual vCenter Appliances]

Moving Forward!

I have recently received an APEX2800 card from Teradici (thanks!) for some performance testing. Although I have not yet been able to run any concrete tests, I am looking forward to seeing the results when running frame-intensive workloads; I believe that is the use case where this card makes the most sense. Perhaps I can couple the APEX2800 with an NVIDIA VGX offload card: the APEX2800 offloads the PCoIP encoding process, while the VGX would offload graphics rendering to the GPU.

[Photo: Teradici APEX2800 card]

As you can see, it’s not a huge lab environment; what makes the difference here is the amount of DRAM in each host. As for storage, there is no doubt that the Virident card is a huge improvement over spinning disks. However, due to the Gigabit Ethernet connectivity, NFS performance ends up not being as fast as it could be: a single 1Gbps link tops out at roughly 125MB/s of theoretical throughput (1,000Mbps ÷ 8 bits per byte), a fraction of what the PCIe flash card can deliver locally.

The second drawback is that the Virident PCIe card sits in a single host. The Nexenta virtual appliance cannot be migrated to the other host, which makes upgrades more difficult. For that reason I am looking forward to getting either a shared storage array with good IO throughput or starting to use one of the many VMware storage technologies currently in tech preview.

Home

At home I also have a lab where I do most of the testing for my blog posts and video productions, and where I also test products from other vendors (“Know your enemy and know yourself and you can fight a hundred battles without disaster” – Sun Tzu). The home lab has not been upgraded in a little while, and this blog post from August 2010 is still pretty much accurate.

This article was first published by Andre Leibovici (@andreleibovici) at myvirtualcloud.net.



Comments

  1. Simon

    How do you handle Microsoft Licensing for your servers and clients?

    1. Andre Leibovici

      Simon, I would recommend looking at an MSDN subscription. It starts at about $500 for renewals, or you may be able to get it through the organization you work for if they allow it.

      Andre

