Although Cisco UCS is the fastest-growing blade platform in the world, you can't ignore the current market dominance of the HP C7000. So a very popular question the team and I get asked regularly is: how should I connect my C7000 to a Cisco Nexus converged network?
We have a lot of options here, but also a fair few issues to work around.
From an HP point of view, Virtual Connect is always going to be the way to go, and it will no doubt already have been configured into the C7000 by default. There is of course now a big push by HP towards the HP 5820 switch series with advanced Flex-Chassis for the network, but realistically the chassis is more likely to be connecting to an existing Cisco network, so the customer is always interested in the Nexus range of switches.
FCoE will pop into the conversation and really should be the answer; however, HP's FCoE implementation doesn't currently support multi-hop FCoE, so FCoE has to be retained within the chassis as the link from the blade to the Virtual Connect and no further.
So we need some 10Gb uplinks for our LAN and some 4Gb/8Gb FC uplinks for our block-based SAN connectivity.
Fortunately, the Nexus switches are designed to accommodate both types of connectivity.
For LAN connectivity we can take advantage of the virtual PortChannel capabilities of the Nexus range, using cross-connected links to each Virtual Connect module. A virtual PortChannel (vPC) allows links that are physically connected to two different Cisco Nexus 5500 Series devices to appear as a single PortChannel to a third device. The only questions with this design are how many uplinks to use and how to configure the NICs within the Virtual Connect technology. I could use all 8 uplink ports on my FlexFabric 10Gb/24-port module for LAN connectivity, but generally I want to keep two ports for Fibre Channel. That leaves me between two and six uplinks, but in reality 2x 10GbE uplinks per module should be enough for even the most densely populated chassis configuration (40Gb in total). Downstream to the blade I can configure up to 4 NICs on the on-board LOM for connectivity to the FlexFabric, but as I want one of them to be an HBA, we're left with just 3 NICs per FlexFabric. I can then allocate different port speeds to each NIC, which basically allows me to divide the 10Gb of bandwidth across these three interfaces.
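As a rough illustration, here is a minimal sketch of the Nexus 5500 side of that vPC design, assuming one uplink from the Virtual Connect module lands on Ethernet1/1 of each Nexus, and that the domain and port-channel numbers, VLANs and keepalive addresses are purely example values:

  ! On both Nexus 5500s (assumed example values throughout)
  feature lacp
  feature vpc

  vpc domain 10
    peer-keepalive destination 192.168.1.2 source 192.168.1.1

  ! vPC peer-link between the two Nexus 5500s
  interface port-channel 1
    switchport mode trunk
    vpc peer-link

  ! Port-channel towards the Virtual Connect uplinks (one member link per Nexus)
  interface port-channel 20
    switchport mode trunk
    switchport trunk allowed vlan 10,20,30
    vpc 20

  interface Ethernet1/1
    description FlexFabric LAN uplink
    switchport mode trunk
    channel-group 20 mode active

A second vPC (for example port-channel 21) would be built the same way for the uplinks from the other Virtual Connect module, with the corresponding Shared Uplink Set defined on the Virtual Connect side.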
See the Virtual Connect FlexFabric Cookbook for further design options.
Now the SAN connectivity. As mentioned above, we have 2x uplinks available for the SAN and 1x HBA from each FlexFabric. These can run at either 4Gb or 8Gb native Fibre Channel. Of course we want to stick to the standard dual-fabric Storage Area Network design, so don't confuse this with the LAN vPC design and cabling! Each FlexFabric should be defined as a single side of the SAN and connected only to a single Nexus 5500, with the dual links used for link redundancy.
It is important to mention at this stage, however, that there is currently a compatibility issue between the Virtual Connect range of modules and the Nexus 5000 Series when running FC at 8Gb. The only fix at present is to clock the FlexFabric uplinks down to 4Gb FC. HP and Cisco are aware of the problem, and further information can be found under bug ID CSCtx52991 in the Cisco Bug Toolkit.
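To give a feel for the Nexus side of that SAN connection, here is a hedged sketch assuming a Nexus 5548UP with its last two unified ports converted to native FC, NPIV enabled for the FlexFabric's N_Port uplinks, and VSAN 10 used purely as an example; the 4Gb speed setting is the workaround for the bug above:

  ! Fabric A Nexus only - Fabric B is configured the same way on the other Nexus
  feature fcoe
  feature npiv

  ! Convert the last two unified ports to native FC (requires a reload)
  slot 1
    port 31-32 type fc

  vsan database
    vsan 10
    vsan 10 interface fc1/31
    vsan 10 interface fc1/32

  interface fc1/31
    switchport mode F
    switchport speed 4000
    no shutdown

  interface fc1/32
    switchport mode F
    switchport speed 4000
    no shutdown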
Here is a useful example of the connectivity from the FlexFabric through to the Nexus, although it is worth highlighting that at present the FlexFabric doesn't support a SAN port-channel across its uplinks to the Nexus, so the only port-channels will be the vPCs to the LAN.
So no 8Gb FC, no SAN port-channels and no multi-hop FCoE... this isn't sounding too great!
Well, what's the alternative?
There are a few architectural alternatives we could look at, from Xsigo and of course Cisco UCS, but for the simplest change that still gives us the connectivity we require, the way to go has got to be the Cisco/HP B22HP Fabric Extender!
What's so good about the B22HP, you may ask? Well, the answer is simple. It delivers everything and more from a connectivity point of view, as well as simplified network configuration. As with other Nexus Fabric Extenders, it is seen as a remote line card by the parent Nexus switch and is able to provide the full capabilities of the Nexus switch directly to the HP blade. The B22HP provides truly converged networking, extending 10Gb FCoE up to the Nexus parent switch and, if required, multi-hop FCoE all the way to the storage array.
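As a sketch of how a B22HP attaches to its parent, here is a minimal single-homed example, assuming the FEX fabric uplinks land on Ethernet1/17-24 and FEX ID 101 is just an arbitrary example number:

  feature fex

  fex 101
    description B22HP-Bay1
    pinning max-links 1

  ! Fabric uplinks from the parent Nexus down to the B22HP
  interface port-channel 101
    switchport mode fex-fabric
    fex associate 101

  interface Ethernet1/17-24
    switchport mode fex-fabric
    fex associate 101
    channel-group 101

Once associated, the blade-facing ports appear on the parent switch as interfaces such as Ethernet101/1/1 and are configured just like any other Nexus port.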
The B22HP can be configured in a single-homed or dual-homed design, benefiting from the Nexus vPC technology and providing flexibility in deployments; please see the design guide for further details.
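For the dual-homed option, the same FEX fabric port-channel is simply built on both parent switches and tied into the vPC domain. A minimal sketch, using the same assumed numbering as above and applied on both Nexus 5500s:

  ! On both vPC peers - the FEX fabric port-channel becomes a vPC
  fex 101
    description B22HP-Bay1

  interface port-channel 101
    switchport mode fex-fabric
    fex associate 101
    vpc 101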
Although the B22HP doesn't currently provide the dynamic multi-NIC configuration of the Virtual Connect, it does provide enterprise-class QoS delivered right to the blade. Intelligent use of QoS down to the host allows for consistent, reliable and high-performance networking throughout the infrastructure.
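As a rough illustration of what that can look like, here is a hedged sketch of a Nexus 5500 QoS configuration that guarantees egress bandwidth to FCoE and to an assumed "VM traffic" class marked with CoS 4; all class names, markings and percentages are purely example values, not a recommendation:

  ! Classify assumed CoS 4 traffic into qos-group 2
  class-map type qos match-all VM-TRAFFIC
    match cos 4
  policy-map type qos CLASSIFY-IN
    class VM-TRAFFIC
      set qos-group 2

  ! Define the new class system-wide (class-fcoe keeps its default no-drop behaviour)
  class-map type network-qos VM-TRAFFIC-NQ
    match qos-group 2
  policy-map type network-qos SYSTEM-NQ
    class type network-qos class-fcoe
      pause no-drop
    class type network-qos VM-TRAFFIC-NQ
    class type network-qos class-default

  ! Guarantee egress bandwidth per class
  class-map type queuing VM-TRAFFIC-Q
    match qos-group 2
  policy-map type queuing GUARANTEE-OUT
    class type queuing class-fcoe
      bandwidth percent 40
    class type queuing VM-TRAFFIC-Q
      bandwidth percent 30
    class type queuing class-default
      bandwidth percent 30

  ! Apply the policies system-wide, so they reach every FEX host port
  system qos
    service-policy type qos input CLASSIFY-IN
    service-policy type network-qos SYSTEM-NQ
    service-policy type queuing output GUARANTEE-OUT

Because the B22HP host ports are just remote ports of the parent switch, a system-wide policy like this follows the traffic all the way down to the blade.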
For increased flexibility the B22HP is available in two flavours: the default SFP+ version allows the use of standard 10Gb SFP+ transceivers, including Twinax (up to 10 metres), to reduce costs; the FET bundle version includes 16x SFP+ Fabric Extender Transceivers (8 for each end), specifically designed to reduce the cost of fibre-optic connectivity to the Nexus range of switches.
With Cisco continually developing the Nexus range, and the growing support for VM-FEX and Adapter-FEX technologies, the B22HP really looks like the right path to take your C7000 chassis into the future of converged infrastructure.
... well, at least until you move to Cisco UCS!