UCS and UCSM Basics

I had this post pre-written from a while ago and was not going to publish it, but as people still ask me about UCS I decided to publish it anyway. In a few words, UCS is a blade chassis that integrates compute, networking and storage (FCoE) at the backplane level, while adding management capabilities through UCSM (UCS Manager).

UCS (Unified Computing System) is Cisco’s first entry into the x86 server market. This post provides insight into both the Cisco Data Centre 3.0 strategy and UCS, which is a key enabler of DC 3.0. It discusses products within the UCS range at a component level, such as the blade and rack-mount server offerings, and lastly highlights some of the larger benefits that can be gained by pairing x86 virtualisation with UCS technology.

UCSM runs on the Cisco UCS 6120 Series Fabric Interconnects, which support 20 northbound (uplink) and southbound (downlink) ports (the 6140 has 40 ports). These appliances are sold in pairs, bundled with the UCS chassis, and clustered (Active/Passive) for redundancy purposes. UCSM sits on top of the blade chassis and is responsible for the overall management of the UCS stack.

Inside UCSM there is a Linux OS built into flash ROM, and a web server hosts a Java app that provides the management interface. Each appliance receives a physical IP address from the same management subnet, and a virtual IP is shared between the two, just like other load-balancing solutions.
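Besides the Java GUI, UCSM also exposes an XML API on that same virtual IP: requests are small XML documents POSTed over HTTPS to the fabric interconnect. As a rough sketch only — the `aaaLogin` method name comes from Cisco's XML API documentation, but the credentials below are placeholders and the actual HTTPS POST is omitted since it needs a live system:

```python
# Minimal sketch: building a UCSM XML API login request.
# aaaLogin is the documented login method; everything else here
# (credentials, the fact that we only build the payload) is
# illustrative, not a working client.
import xml.etree.ElementTree as ET

def build_login_request(username: str, password: str) -> bytes:
    # aaaLogin returns a session cookie that subsequent queries must carry
    login = ET.Element("aaaLogin", inName=username, inPassword=password)
    return ET.tostring(login)

payload = build_login_request("admin", "password")
print(payload.decode())
# e.g. <aaaLogin inName="admin" inPassword="password" />
```

In a real client you would POST this payload to the fabric interconnect's management address over HTTPS and parse the returned cookie out of the response.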

The blade servers host a CNA (Converged Network Adapter) that supports both Ethernet and FCoE (Fibre Channel over Ethernet). If you are not familiar with CNAs – yes, there is no physical segmentation between NICs and HBAs. The CNAs connect internally to a Fabric Extender (I/O Module, or IOM), which is then cascaded to the Cisco UCS 6100 Series Fabric Interconnects.


One. Cluster Ports

There are 4 ports here. Two of these are the dual 10/100/1000 Ethernet clustering ports, used for connecting two 6120/40s together; they carry sync and heartbeat traffic. You connect these directly with a standard Ethernet cable. The other two ports are reserved for future use. All of these ports are dedicated and cannot be used for any other purpose.

Two. Management Port

This is a dedicated 10/100/1000-Mbps Ethernet management port for out-of-band management.

Three & Four. SFP+ Ports

The SFP+ ports take a number of cable types (copper or fibre) of varying lengths. They may be used to connect to the 2100 Fabric Extender (FEX) modules inside the 5100 chassis (which contains the blades). They may also be used to connect up to your data centre switching core or aggregation point. We will come back to these two in more detail.

Five. Expansion Modules

The expansion modules are used to provide further external connectivity. There are three types available.

  • Ethernet module that provides 6 ports of 10 Gigabit Ethernet using the SFP+ interface
  • Fibre Channel plus Ethernet module that provides 4 ports of 10 Gigabit Ethernet using the SFP+ interface; and 4 ports of 1/2/4-Gbps native Fibre Channel connectivity using the SFP interface
  • Fibre Channel module that provides 8 ports of 1/2/4-Gbps native Fibre Channel using the SFP interface for transparent connectivity with existing Fibre Channel networks

The blades are standard x86 B-Series servers, and 8 of them (B200) can be hosted in a single UCS chassis. The B250 uses two chassis slots.

Blades can be purchased in two models:

  1. The Cisco UCS B200-M1 Blade Server balances simplicity, performance, and density for production-level virtualization and other mainstream data-center workloads.
  2. The Cisco UCS B250-M1 Extended Memory Blade Server maximizes performance and capacity for demanding virtualization and large-dataset workloads with up to 384 GB of industry-standard memory. This technology also offers a more cost-effective memory footprint for less-demanding workloads.

Fig 1 – This is the back of the UCS chassis (not cabled)

Fig 2 – This is the front of the chassis. You will see two racks with two chassis each. Each chassis contains 8 blades.

UCSM is the brains and the interface for the UCS stack and rack. Here are some Cisco videos about racking and installing the kit.

Rack and Install the Cisco (UCS) 6120XP Chassis
Unboxing the Cisco UCS Server Chassis
Rack and Install the Cisco UCS 5108 Server Chassis

Now let’s see UCSM

To get a feel for what the interface looks like, watch the following two quick videos:

Unified Computing System Manager Revealed (Part 1)
Unified Computing System Manager Revealed (Part 2)

Service Profiles

If I had to describe UCSM Service Profiles, I would say they are a mix of vSphere Host Profiles and manual editing of VMX files. There is a serious number of configuration switches and policies to be discussed here.

The configuration of a service profile can be done in Simple or Expert mode, and it is possible to configure pools of servers, vNICs, vHBAs, VSANs, IPs, WWPNs, MACs, boot order, etc., and these pools can be assigned to service profiles.

Service profiles have a one-to-one relationship with the B-Series blades, and they can only be reused if disassociated from the blade already using them. It is possible to clone service profiles; however, you should take care when doing so because the same MAC addresses and WWPNs will be copied to the new service profile, so you will have to reconfigure them manually.

To automate the process it is possible to create a service profile template, associate pools of resources with it, and create new service profiles based on the template. Once a resource from the pool is in use, it will only be given to a different blade when freed up. It works similarly to IP pools in vSphere.
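The pool behaviour can be sketched in a few lines of Python. This is a toy model, not the UCSM API — the class and profile names are made up for illustration (only the `00:25:B5` MAC prefix, Cisco's usual UCS default, is real):

```python
# Toy model of a UCSM identity pool (MAC/WWPN/UUID pools behave alike):
# a service profile draws the next free identity, and a released
# identity goes back into the pool for another profile to reuse.
class IdentityPool:
    def __init__(self, values):
        self.free = list(values)    # identities still available
        self.assigned = {}          # profile name -> identity

    def assign(self, profile):
        if profile in self.assigned:
            return self.assigned[profile]   # profile already holds one
        identity = self.free.pop(0)         # take the next free identity
        self.assigned[profile] = identity
        return identity

    def release(self, profile):
        # disassociating a profile frees its identity for reuse
        self.free.append(self.assigned.pop(profile))

mac_pool = IdentityPool(["00:25:B5:00:00:00", "00:25:B5:00:00:01"])
print(mac_pool.assign("esx-01"))  # -> 00:25:B5:00:00:00
print(mac_pool.assign("esx-02"))  # -> 00:25:B5:00:00:01
mac_pool.release("esx-01")
print(mac_pool.assign("esx-03"))  # -> 00:25:B5:00:00:00 (reused)
```

This is exactly why cloning a profile (rather than templating it) is dangerous: a clone copies the identities instead of drawing fresh ones from the pool.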

When a service profile is applied to a blade, the server boots a Linux kernel which then applies the changes to the hardware. This can all be watched via KVM.

Initial Installation/Setup

Connect via a serial/USB cable to the 6100’s console port; a few questions have to be answered. Here is a quick BLURRED photo of the welcome screen.


If the 6100 Fabric Interconnects are already connected to each other, the cluster will be built automatically; but if they have been powered on without the cables, it is necessary to log in to each one of them and answer the initial questions. Don’t worry, they are pretty straightforward.

The complete Cisco UCS Manager GUI Configuration Guide, Release 1.x can be downloaded from here; however, Cisco is constantly changing the documentation and recommends online viewing.

I would recommend the following reading if you want to get down to how the hardware works and understand each screen and configuration option of UCSM: Project California: a Data Center Virtualization Server – UCS.

A few interesting and important notes:

  • The Fabric Extender (I/O Module, or IOM) may use the following to connect to the Cisco 6100: fibre or copper SFP+ cabling, with a maximum of 10 metres but preferably 7 metres.
  • The standard UCS Kick-start product has only an 8-port licence for the 6100.
  • CNAs can be chosen – Ethernet/HBA (QLogic or Emulex), 10 Gb Ethernet (Intel Oplin), or Palo (a Cisco chip with virtualisation technology, not yet available for ordering).
  • UCS 6100 northbound connections to the legacy infrastructure are 10 Gb only (1 Gb to be released).
  • The UCS 6100 cluster (Active/Passive) will not fail back automatically after a failure.
  • All ports inside UCSM are initially disabled and must be enabled
  • Server ports = southbound / uplink ports = northbound.
  • Not possible to bind 2 blades together
  • A BladeLogic add-on is already being sold by BMC (it integrates UCS and VMware vCenter).
  • You should always label the Fabric Extenders as A and B and NEVER cross over the cables.
  • B250 servers come with 2 CNAs (mezzanine adapters).
  • SAN access via FCoE and SAN boot are only possible using the Menlo and Palo mezzanines.
  • Each 6100 Fabric Interconnect port must be licensed past the 8th port
  • KVM requires a valid block of IPs (this is not mentioned in the QuickStart Site Planning Guide).
  • The Virtual Media server only mounts ISO and IMG files through the workstation; there is no FQDN support.
  • Licences are specific to each system’s serial number.
  • For some Ethernet cluster and console connectors you will need cables without sleeves. See pic.

Have Fun
