Hands on with the Cisco UCS C200 M2

The hype machine in Cisco channel land has been working overtime since Cisco started shipping its new UCS line of servers and blade chassis. If you've heard what is being said, Cisco is basically claiming to have reinvented the server and to offer performance its competitors can't match. Their one big claim to fame at the outset was their VMware VMmark scores, but as of this writing HP has bested them in every category by small margins. The other key selling point is Cisco's Extended Memory Technology, which allows an increased amount of physical RAM in UCS servers and is aimed at providing greater virtual machine density.

Cisco, in my view, has never been a company overly concerned about sexiness in its hardware or software, although they certainly tried harder than usual with their flagship Nexus 7000 switch. The UCS C200 servers I have acquired will be used to power a new virtualized Unified Communications infrastructure (Call Manager), which is another major advancement in Cisco's product offerings. So while my use case will not push these servers to their theoretical performance limits, I will still get down and dirty with this new hardware platform.

Under the hood

My first impression of the C200 is that it looks remarkably similar to an older, lower-end Dell PowerEdge or SuperMicro white-box server: aesthetically pretty vanilla, at this level anyway. That said, the layout is simple and gets the job done in true minimalist fashion. All internal components are OEM'd from the usual suspects: Intel, Samsung, Seagate, LSI. Getting the cover off of this thing is truly a pain, requiring a lot of hard mashing on the release button and forceful downward pushing. Both of my C200s were like this, so it's definitely not a fluke.

The C200 ships with two Gigabit NICs for host traffic and one NIC for out-of-band management (CIMC). VGA and USB ports are in the rear, with a proprietary KVM dongle port on the front of the server. Two expansion slots and dual power supplies are also available. Although effective, I dislike this style of power cord retainer, which is also used by NetApp.
 
The rail kit is where Cisco really dropped the ball, as I guess they assumed that all their customers would be using extended-depth racks. The rails are tool-less snap-ins with adjustable slides; the problem is that the rail itself does not adjust and cannot be made any shorter.
 
For standard-depth racks, the tails of these rails stick out past the posts.
 
I had to rack this server in the middle of my rack or the tail on the right side would block a row of PDU ports. (wtf!)

Cisco Integrated Management Controller (CIMC)

CIMC is the remote out-of-band management solution (IPMI) provided with Cisco servers. With the very mature HP iLO and Dell DRAC remote management platforms having been around for years, Cisco's freshman attempt in this space is very impressive indeed. All of the basic data and functionality you would expect to find is here, plus a lot more. Access to the CIMC GUI requires Adobe Flash via a web browser, which is visually pretty but disappointing to see in an enterprise platform. They certainly aren't the only major vendor trending in this direction (read: VMware View 4.5).

Performance is a bit slow on some tabs where the hardware has to be polled and the displayed data refreshed. But when that data eventually trickles in, the level of detail is dense.

The Network Adapters tab was misbehaving for me on both of my servers. After a few seconds all these amazing options disappear and an "error: timed out" pop-up appears. This will be incredible once they (presumably) fix their code. Notice the tabs in the middle for vNICs and vHBAs, intended to provide tight virtualization integration.
 
Really great detail…

That was all just from the Inventory page! More great detail is revealed in the Sensors section with multiple readings and values for each core component of the server.
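
CIMC also speaks standard IPMI over LAN once that option is enabled in the management settings, so the same sensor data shown in the GUI can be pulled from a script instead of waiting on the Flash interface to refresh. Below is a minimal sketch using ipmitool from Python; the IP address and credentials are placeholders, and it assumes IPMI over LAN has been turned on in CIMC.

```python
# Minimal sensor poll against CIMC using ipmitool over the network.
# Assumes ipmitool is installed and IPMI over LAN is enabled in CIMC;
# the host and credentials below are placeholders.
import subprocess

CIMC_HOST = "10.0.0.50"   # CIMC management IP (placeholder)
CIMC_USER = "admin"       # CIMC account (placeholder)
CIMC_PASS = "password"    # CIMC password (placeholder)

def read_sensors() -> str:
    """Return the extended sensor data repository (SDR) listing as raw text."""
    cmd = [
        "ipmitool",
        "-I", "lanplus",  # IPMI v2.0 RMCP+ session
        "-H", CIMC_HOST,
        "-U", CIMC_USER,
        "-P", CIMC_PASS,
        "sdr", "elist",   # sensor name, state, and current reading
    ]
    return subprocess.run(cmd, capture_output=True, text=True, check=True).stdout

if __name__ == "__main__":
    print(read_sensors())
```

Handy for spot-checking temperatures and fan speeds across a couple of boxes from one terminal.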

There are a few other notable features Cisco has included that are particularly cool, one of which is the ability to configure certain BIOS parameters from within CIMC.
 
Some variables that can only be configured at boot time on other platforms can be changed via CIMC, although some if not most of these changes will require a reboot to take effect.

The other user, session, log, and firmware management options include all the usual settings and variables. One other neat option in the Utilities submenu is the ability to reboot CIMC, reset it to factory defaults, and even import configurations! That's huge and will make managing multiple servers much more consistent. All told, and bugs aside, the potential of CIMC is very impressive.

Call Manager - the virtual edition


A major shift for Cisco, now available in CUCM version 8.x, is the ability to deploy the enterprise voice architecture inside VMware ESXi. Call Manager, and its sister voicemail service Unity Connection, are just Linux servers (RHEL 4) after all, so this makes perfect sense. You can now deploy Call Manager and Unity clusters in a virtual space while also leveraging the HA provided by VMware.

This of course doesn't come without its caveats. Currently Cisco does not support VMs living outside of Cisco servers, and that includes storage. So you will have to buy a Cisco server to deploy this solution, and keep the VMs on Cisco disk rather than your own corporate SAN. At least you can use your own VMware licensing and vCenter, which is a good thing. Once Cisco has established a comfortable foothold in the enterprise server market, look for these policies to ease a bit. Right now they need to sell servers!

To ensure that partners and customers deploy CUCM in a consistent fashion, Cisco has released Open Virtualization Archive (OVA) templates for these deployments. OVAs keep things nice and clean, even if you won't agree with their choice of virtual hardware (LSI Logic Parallel vs. LSI SAS). CUCM is still managed the same way, via web browser, and the interface is exactly the same in v8 as it was in v7.x.
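
If you end up deploying more than one node, the OVA can also be pushed from the command line rather than through the vSphere Client import wizard. Here is a rough sketch wrapping VMware's ovftool in Python; the OVA filename, VM name, datastore, and vCenter inventory path are all placeholders, and any sizing options are defined inside the Cisco OVA itself, so check the output of `ovftool <ova>` for what is actually available.

```python
# Sketch: scripted OVA deployment with VMware's ovftool.
# All names, paths, and credentials below are placeholders.
import subprocess

OVA_PATH = "cucm-template.ova"  # the Cisco-provided OVA (placeholder filename)
TARGET = "vi://administrator:password@vcenter.example.com/DC1/host/ucs-c200"  # placeholder

cmd = [
    "ovftool",
    "--acceptAllEulas",
    "--name=CUCM-PUB-01",         # name of the VM to create
    "--datastore=C200-LocalDS",   # local Cisco disk, per the support caveat above
    "--diskMode=thick",
    # "--deploymentOption=...",   # uncomment if the OVA defines multiple sizing configs
    OVA_PATH,
    TARGET,
]

subprocess.run(cmd, check=True)
```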

Not purely Cisco-related, but a minor observation that others have noticed as well: ESXi incorrectly reports the status of Hyper-Threading support on non-HT Intel-based servers. My C200 is equipped with Xeon E5506 CPUs, which do not support HT. Not a big deal, just an observation. If HT were available on this CPU I would definitely enable it, as ESX(i) 4.1 can now schedule much more efficiently on the newer Intel CPU architectures.
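
If you want to see exactly what a host is reporting, the same Hyper-Threading properties the vSphere Client displays can be read through the vSphere API. The sketch below uses pyvmomi and connects straight to the host; the hostname and credentials are placeholders, and comparing the core count to the thread count is a quick sanity check against the reported HT flag.

```python
# Sketch: query the Hyper-Threading status a host reports via the vSphere API.
# Hostname and credentials are placeholders; certificate checking is disabled
# for a lab host with a self-signed cert.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="esxi-c200.example.com", user="root", pwd="password", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)
    for host in view.view:
        cpu = host.hardware.cpuInfo
        ht = host.config.hyperThread
        print(f"{host.name}: {cpu.numCpuCores} cores / {cpu.numCpuThreads} threads, "
              f"HT available={ht.available}, active={ht.active}")
finally:
    Disconnect(si)
```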


Wrap

All in all, there's a lot to like about the new Cisco offerings. A commitment to virtualization and hardware optimized to run virtual workloads are smart investments to make right now. There are some physical design choices I don't particularly care for, but this model sits at the bottom of the platform stack, so maybe more consideration was paid to the platforms at the top. CIMC was carefully constructed and, although buggy right now, shows some real innovation over competing platforms in this space. Companies that would not otherwise have been able to buy into a full-blown Call Manager cluster configuration can now do so with a reduced hardware investment.


References:
Cisco OVA templates
