Did Microsoft & Citrix just say No to Cisco UCS?

More details on RemoteFX have been revealed, and unless I have missed something along the way, I am presuming that the joint desktop virtualisation strategy between Microsoft and Citrix says a big NO to high-density blade systems such as Cisco UCS.

According to the joint announcement, RemoteFX requires one or more GPUs per server. Microsoft also mentions that it is possible to add multiple GPUs to each physical server, either via the PCI riser card in the server chassis or via an external PCI chassis that can house many cards.
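Since the requirement is at least one GPU per host, a quick sanity check on a Linux lab box is simply to count GPU-class PCI devices. This is a hypothetical sketch, not anything from the announcement: the `lspci` output is simulated inline so the snippet runs anywhere, and on a real machine you would pipe `lspci` directly instead.

```shell
# Simulated lspci output (two GPU-class devices); replace with: lspci
lspci_output='00:02.0 VGA compatible controller: Intel Corporation Integrated Graphics
03:00.0 3D controller: NVIDIA Corporation GPU'

# Count lines that look like a GPU: onboard VGA or a discrete 3D controller
gpu_count=$(printf '%s\n' "$lspci_output" | grep -cE 'VGA compatible controller|3D controller')

echo "$gpu_count"
```

On a blade without PCIe expansion, the equivalent real-world count would typically come back as zero discrete GPUs, which is exactly the problem discussed below.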

Cisco UCS is a self-contained platform built from the ground up to provide the best VM density possible, and as far as I know it does not provide PCIe slots (I would like to be mistaken, so please correct me if I’m wrong).

How are Cisco UCS customers going to adopt a technology that does not run on their platform? I just checked HP Matrix, and it does provide PCIe support at the blade level, but what about all the other blade systems?

5 comments

    • nate on 03/25/2010 at 4:50 am

    Why would they care? They came out with some new requirements; it’s up to the other vendors to meet them. I don’t see why MS would say yes or no to anyone in particular, and the same goes for Citrix. The new standard obviously provides better graphics support, and the easiest way to do that is apparently to have a real GPU (or several) on the system.

    I’m sure it’s only a matter of time before blade makers offer GPUs as an add-on module, much like many already offer Ethernet, Fibre Channel, InfiniBand, and Fusion-io (in HP’s case). Can’t be that hard.

    I’m a long-time Cisco hater myself (fully open and honest about it), but I don’t see this as MS saying no to them.

  1. As someone who is intimately familiar with HP blades, I CAN tell you that HP’s PCIe story is pretty much the same as Cisco’s when it comes to blades: it is done with a mezzanine adapter. I don’t know if HP is planning a GPU mezzanine, but how would you connect it? The I/O is not designed for this.

    I don’t know of ANY blade that supports a “hopped up” GPU. So if you want to use borg-ware VDI, you will need something with more traditional PCIe slots.

    • David Barker on 03/25/2010 at 9:25 am

    @Dave Convery
    Erm – you can buy Teradici PCoIP cards for Dell M-series blades and other blade solutions. No, it’s not a full GPU, but it’s pretty close…

    http://www.teradici.com/pcoip/pcoip-products/oem-solutions.php

    It’s not much of a stretch to imagine a RemoteFX card.

  2. Disclosure: I’m a VMware employee, however this is my personal opinion.

    As there is no support for RemoteFX from blade vendors today, I can only assume that Microsoft has not done its homework properly.

    If organizations have to wait 12 months to get their hands on new hardware that supports RemoteFX (and that is assuming they do not also have to replace their current blade platform), then in my opinion MS is set up for failure.

    VMware, on the other hand, always works hard with partners to create an extensive and efficient ecosystem for its products.

  3. 1. RemoteFX is not comparable to Teradici cards, which offload the remote protocol; it serves only eye candy. You cannot increase vDesktop density with RemoteFX the way you can with Teradici.

    You cannot compare a hardware-offloaded graphics transport protocol with hardware-offloaded 3D graphics.

    2. Apart from “coolness”, there is no business use case for RemoteFX. VMware’s counterpart to RemoteFX is its software 3D support, but I assume RemoteFX is much superior. I assume that in the distant future it will be the technology of choice for cloud gaming; such hosters will surely not use Cisco blades but will build their own boxes from the cheap spot market.

    3. You can add PCIe devices of any form factor to an HP c-Class blade by using a special PCIe expansion blade. I am not aware whether other “passive backplane” vendors, such as Dell or the Fujitsu BX series, have PCIe lanes on their backplanes. I would consider them for Teradici or SSL offloading cards; a GPU can be used for SSL offloading par excellence.

    4. Cisco is not locked out: UCS is ideal for “fit to size” XenApp or RDS farms. And why do you need UCS for VMware anyway? VMware’s layer on top of a small set of different server bases is sufficient for scale-out. UCS is cool, but is it really necessary? Chopping up RAM across different blades would be cool in a small datacenter environment, but in big DCs I don’t see a HUGE advantage over an HP c-Class FlexFabric environment where I have some huge and some small blades and automate by profiles.

Leave a Reply