Display protocols for VDI are still very much a topic of discussion, and in most environments they need tuning. The out-of-the-box settings from any vendor will work in many situations; however, every environment presents different challenges, making optimization a must.
In previous articles I have covered PCoIP optimization from a display protocol standpoint. My article Optimising PCoIP Display & Imaging covers the basics of how to optimize settings such as display frame rate and image quality. My article PCoIP: Unleash the Throughput demonstrates how to manage the PCoIP protocol in environments with a high percentage of packet loss and tail drops. I have also published a quick guide on How to troubleshoot PCoIP performance.
However, I have never discussed the PCoIP protocol from a networking perspective. There are some basic networking guidelines that should be followed to avoid network bottlenecks that degrade the user experience. If the degradation is caused by misconfiguration at the network layer, there is not much that protocol tuning can do to improve the overall user experience.
1 – Minimize the buffers in network routers and switches
Set buffers to absorb 50ms to 100ms of PCoIP traffic. Large buffers may increase tail drops, which are bad for PCoIP sessions and are a common cause of session disconnects and disruptions. When there is congestion on a network device, packets are dropped until the congestion clears and the queue is no longer full.
Uncontrolled packet loss may indicate congestion, causing PCoIP to drop to minimum image quality and degrading the user experience. Always remember that significant tail drop can result in session disconnects.
2 – Set congestion avoidance to WRED
Weighted RED (WRED) drops packets in a controlled way, and PCoIP reacts better to WRED, going into a build-to-lossless phase. WRED drops packets selectively based on IP precedence: packets with a higher IP precedence are less likely to be dropped than packets with a lower precedence. Thus, higher-priority traffic is delivered with a higher probability than lower-priority traffic.
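To make the selective-drop behaviour concrete, here is a minimal sketch of a WRED drop curve. The threshold values and the precedence-to-profile mapping are hypothetical, not taken from any specific Cisco default; the point is that a higher-precedence class has a higher minimum threshold, so it starts dropping later than best-effort traffic as the average queue depth grows.

```python
import random

# Hypothetical WRED profiles, keyed by IP precedence (thresholds in packets).
# Higher precedence -> higher min threshold -> dropped later and less often.
WRED_PROFILES = {
    0: {"min_th": 20, "max_th": 40, "max_p": 0.10},  # best effort
    5: {"min_th": 32, "max_th": 40, "max_p": 0.10},  # e.g. PCoIP marked higher
}

def drop_probability(avg_queue_depth, precedence):
    """Linear WRED drop curve for one precedence class."""
    p = WRED_PROFILES[precedence]
    if avg_queue_depth < p["min_th"]:
        return 0.0                       # below min threshold: never drop
    if avg_queue_depth >= p["max_th"]:
        return 1.0                       # above max threshold: tail-drop behaviour
    # Between thresholds: probability ramps linearly up to max_p.
    span = p["max_th"] - p["min_th"]
    return p["max_p"] * (avg_queue_depth - p["min_th"]) / span

def should_drop(avg_queue_depth, precedence):
    """Randomized drop decision, as WRED makes per arriving packet."""
    return random.random() < drop_probability(avg_queue_depth, precedence)
```

At an average queue depth of 30 packets, this profile already drops some precedence-0 packets while leaving precedence-5 traffic untouched, which is exactly the behaviour that lets marked PCoIP traffic survive congestion longer.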
Cisco has a step-by-step guide at http://www.cisco.com/en/US/docs/ios/12_1/qos/configuration/guide/qcdwred.html#wp1000898
3 – Ensure round trip latency is within limits
For software-based PCoIP implementations (i.e., VMware View only) the round trip latency should be below 250ms. For implementations utilizing PCoIP host cards the round trip latency must be below 150ms.
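A quick sanity check against these limits can be scripted. The helper below is a hypothetical sketch (the function and dictionary names are mine, not part of any PCoIP tooling); the thresholds are the 250ms and 150ms ceilings stated above.

```python
# Round trip latency ceilings (ms) from the guideline above.
LIMITS_MS = {
    "software": 250,   # software-based PCoIP (VMware View)
    "host_card": 150,  # PCoIP host card implementations
}

def within_pcoip_limit(rtt_ms, implementation="software"):
    """Return True if a measured round trip latency fits the PCoIP limit."""
    return rtt_ms <= LIMITS_MS[implementation]
```

For example, a measured 200ms round trip is acceptable for a software implementation but already out of bounds for a host card deployment.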
4 – Set a lower MTU to avoid datagram fragmentation
The MTU setting controls the maximum Ethernet packet size each VM or network device will send and receive. While you may know the MTU settings within the boundaries of your organization, it is not possible to guarantee the same datagram size when users are out and about. ISPs and Internet backbone routers will chop up (fragment) any packet larger than their limit, and the fragments are then reassembled by the target network device before processing. This fragmentation and reassembly is not optimal. Windows and most network devices default to 1500 bytes. Set the PCoIP MTU to 1300 bytes to avoid PCoIP datagram fragmentation. This change can be made via the Windows registry or via the PCoIP ADM templates for Group Policy provided with VMware View Connection Server.
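To see why 1300 bytes is a safe choice, consider how many IPv4 fragments a single PCoIP UDP datagram would need on a given path MTU. The sketch below is a simplified model (IPv4 without options, my own function name); it shows that a payload sized for a 1500-byte MTU fragments on a 1400-byte path, while a payload sized for a 1300-byte PCoIP MTU does not.

```python
import math

IP_HEADER = 20   # bytes, IPv4 header without options
UDP_HEADER = 8   # bytes; PCoIP traffic rides on UDP

def fragments_needed(payload_bytes, path_mtu):
    """How many IP fragments one UDP datagram needs on a given path MTU."""
    datagram = IP_HEADER + UDP_HEADER + payload_bytes
    if datagram <= path_mtu:
        return 1
    # Each fragment carries its own IP header, and fragment payloads
    # must be multiples of 8 bytes.
    per_fragment = (path_mtu - IP_HEADER) // 8 * 8
    return math.ceil((datagram - IP_HEADER) / per_fragment)
```

A 1472-byte payload (a full 1500-byte datagram) splits into two fragments on a 1400-byte path, whereas a 1272-byte payload (a 1300-byte datagram) passes through in one piece.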
A simple way to put all this information into effect is to utilize the formula below:

Maximum queue length (packets) = (Link rate × Buffer time) / (MTU × 8)

For a scenario where MTU = 300 bytes and Link rate = 10Mbps with an optimum burst buffer of 50ms, the queue should hold (10,000,000 × 0.05) / (300 × 8) ≈ 208 packets.
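The calculation is easy to script for your own link rates and MTU sizes. This is a minimal sketch of the same bandwidth-times-buffer-time arithmetic, with function and parameter names of my own choosing:

```python
def buffer_packets(link_rate_bps, buffer_ms, mtu_bytes):
    """Queue depth (in packets) that absorbs buffer_ms of traffic at link rate."""
    bits_buffered = link_rate_bps * (buffer_ms / 1000.0)   # bits held in the buffer
    return int(bits_buffered // (mtu_bytes * 8))           # whole packets of mtu_bytes

# Scenario from the article: 10 Mbps link, 50 ms burst buffer, 300-byte MTU.
print(buffer_packets(10_000_000, 50, 300))   # ~208 packets
```

Plugging in a 100ms buffer and the recommended 1300-byte PCoIP MTU gives a much shorter queue in packet terms, which is why buffer sizing should always be rechecked after an MTU change.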
A network infrastructure properly designed to support a high density of display sessions is a critical component of any VDI solution. Remember: on top of the networking recommendations, it is possible to fine-tune the PCoIP display protocol to achieve the best possible user experience for your organization's needs within your environment's constraints.