UCS Fabric Interconnect

The UCS 6100 Fabric Interconnects are derived from the Nexus 5000 hardware; as such, they support a variety of network protocols and are capable of acting as Ethernet switches. UCS was conceived as a network-centric system, so the Fabric Interconnects can be configured to behave on the Ethernet uplinks either as a group of hosts or as a switch.

The first possibility is to configure the Fabric Interconnects as Ethernet switches. In this case, the Spanning Tree Protocol must be enabled to detect and break loops. This is a suboptimal solution: although it provides high availability, uplink bandwidth is wasted because half of the uplinks carry no traffic while they sit in blocking state. To solve this issue, several techniques can be used, such as vPC or VSS. I tried to google whether the FI supports FabricPath (FP), but I was not able to find anything (2015).

VSS and vPC are techniques implemented on the upstream LAN switches that allow the Fabric Interconnects to keep using EtherChannel in the traditional manner. Alternatively, the same problem can be solved on the Fabric Interconnect itself with a technique called Ethernet Host Virtualizer (aka End Host Mode).

End Host Mode

In Ethernet end host mode, forwarding is based on server-to-uplink pinning. A given server interface uses a given uplink regardless of the destination it is trying to reach. Therefore, fabric interconnects do not learn MAC addresses from external LAN switches; they learn MACs only from the servers inside the chassis. The address table is managed so that it contains only the MAC addresses of stations connected to Server Ports. Addresses are not learned from frames received on network ports, and frames from Server Ports are forwarded only once their source addresses have been learned into the switch forwarding table. Frames sourced from stations inside UCS take optimal paths to all destinations (unicast or multicast) inside UCS. If these frames need to leave UCS, they exit only on their pinned network port. Frames received on network ports are filtered based on various checks, with the overriding requirement that any frame received from outside UCS must never be forwarded back out of UCS. However, the fabric interconnects do perform local switching for server-to-server traffic. This is required because, to the upstream LAN, all the servers appear to sit behind a single host, and a LAN switch never forwards a frame back out the interface on which it arrived.
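
To make these rules concrete, here is a minimal Python sketch of the server-side forwarding behavior described above. The names (EndHostModeFabric, frame_from_server) and the dictionary-based pinning table are invented for illustration; this is a conceptual model, not Cisco's implementation. Handling of frames arriving from the LAN (the RPF and deja-vu checks) is sketched later, in the unicast traffic summary.

# Conceptual sketch of end-host-mode handling of frames arriving from servers.
class EndHostModeFabric:
    def __init__(self, pinning):
        self.pinning = pinning      # server port -> pinned uplink port (or PortChannel)
        self.mac_table = {}         # MAC -> server port; learned on server ports only

    def frame_from_server(self, server_port, src_mac, dst_mac):
        self.mac_table[src_mac] = server_port            # learning happens only here
        if dst_mac in self.mac_table:
            return ("local", self.mac_table[dst_mac])    # server-to-server: locally switched
        return ("uplink", self.pinning[server_port])     # everything else exits on the pinned uplink

fi = EndHostModeFabric(pinning={"srv-1/1": "Po1", "srv-1/2": "Po2"})
print(fi.frame_from_server("srv-1/1", "00:25:b5:00:00:01", "00:25:b5:00:00:99"))   # ('uplink', 'Po1')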

Ethernet ports on the fabric interconnects are unconfigured by default. The ports can be configured to be:
  • Uplink ports
  • Server ports
  • Appliance ports
  • Monitor ports

In end-host mode, Cisco UCS is an end host to an external Ethernet network. The external LAN sees the Cisco UCS fabric interconnect as an end host with multiple adapters (multiple MAC Addresses).

End-host mode features include:
  • Spanning Tree Protocol is not run on either the uplink ports or the server ports.
  • MAC address learning occurs only on the server ports; MAC address movement is fully supported.
  • Links are active-active regardless of the number of uplink switches.
  • The system is highly scalable because the control plane is not occupied.

Server links (vNICs on the blades) are associated with a single uplink port, which may also be a PortChannel. This association process is called pinning, and the selected external interface is called a pinned uplink port. The pinning can be configured statically when the vNIC is defined or dynamically by the system. In end-host mode, pinning is required for traffic flow to a server.

Static pinning should be used in scenarios in which a deterministic path is required. When the statically pinned uplink on Fabric Interconnect A goes down, the vNIC's fabric failover mechanism goes into effect and traffic is redirected to the target port on Fabric Interconnect B.

If the pinning is not static, the vNIC is re-pinned to another operational uplink port on the same fabric interconnect, and the vNIC failover mechanisms are not invoked until all uplink ports on that fabric interconnect fail. In the absence of Spanning Tree Protocol, the fabric interconnect uses various mechanisms for loop prevention while preserving an active-active topology.
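
As a rough illustration of the two behaviors just described, the following Python sketch (hypothetical names, not UCS Manager's actual logic) contrasts static pinning, which relies on vNIC fabric failover as soon as the pinned uplink fails, with dynamic pinning, which re-pins locally and fails over to the peer fabric only when no local uplink is left.

from dataclasses import dataclass

@dataclass
class Uplink:
    name: str
    operational: bool = True

def first_up(uplinks):
    return next((u for u in uplinks if u.operational), None)

def resolve_path(fabric_a_uplinks, fabric_b_uplinks, static_pin=None):
    """Pick the uplink a vNIC uses; fall back to fabric B only when fabric A offers nothing."""
    if static_pin is not None:
        # Static pinning: deterministic path while the pinned uplink is up;
        # if it goes down, the vNIC's fabric failover redirects traffic to fabric B.
        if static_pin.operational:
            return ("fabric-A", static_pin)
        return ("fabric-B", first_up(fabric_b_uplinks))
    # Dynamic pinning: re-pin to any operational uplink on the same fabric first;
    # vNIC failover is invoked only when all uplinks on fabric A are down.
    pin = first_up(fabric_a_uplinks)
    if pin is not None:
        return ("fabric-A", pin)
    return ("fabric-B", first_up(fabric_b_uplinks))

a = [Uplink("Po1", operational=False), Uplink("Po2")]
b = [Uplink("Po11"), Uplink("Po12")]
print(resolve_path(a, b))                    # ('fabric-A', Uplink(name='Po2', operational=True))
print(resolve_path(a, b, static_pin=a[0]))   # ('fabric-B', Uplink(name='Po11', operational=True))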

Unicast Traffic Summary

Characteristics of unicast traffic in Cisco UCS include:

  • Each server link is pinned to exactly one uplink port (or PortChannel).
  • Server-to-server Layer 2 traffic is locally switched.
  • Server-to-network traffic goes out on its pinned uplink port.
  • Network-to-server unicast traffic is forwarded to the server only if it arrives on a pinned uplink port. This feature is called the reverse path forwarding (RPF) check.
  • Server traffic received on any uplink port other than its pinned uplink port is dropped (this is the deja-vu check; see the sketch after this list).
  • The server MAC address must be learned before traffic can be forwarded to it.
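
A compact way to read the RPF and deja-vu rules is as two small predicates. The sketch below is a simplified model with invented function names, using plain dictionaries for the MAC table (learned on server ports) and the pinning table.

# Simplified model of the two loop-prevention checks applied on uplink ports.
def accept_from_uplink(uplink_port, dst_mac, mac_table, pinning):
    """RPF check: deliver a frame from the LAN only via the destination server's pinned uplink."""
    server_port = mac_table.get(dst_mac)
    if server_port is None:
        return False                                   # destination MAC never learned on a server port
    return pinning[server_port] == uplink_port

def deja_vu_drop(uplink_port, src_mac, mac_table, pinning):
    """Deja-vu check: drop a frame sourced from one of our own servers that arrives on the wrong uplink."""
    server_port = mac_table.get(src_mac)
    return server_port is not None and pinning[server_port] != uplink_port

mac_table = {"00:25:b5:00:00:01": "srv-1/1"}
pinning = {"srv-1/1": "Po1"}
print(accept_from_uplink("Po1", "00:25:b5:00:00:01", mac_table, pinning))   # True: arrives on the pinned uplink
print(accept_from_uplink("Po2", "00:25:b5:00:00:01", mac_table, pinning))   # False: RPF check fails
print(deja_vu_drop("Po2", "00:25:b5:00:00:01", mac_table, pinning))         # True: our own MAC, wrong uplink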

Multicast and Broadcast Forwarding Summary

Characteristics of multicast and broadcast traffic in Cisco UCS include:

  • Broadcast traffic is pinned on exactly one uplink port in Cisco UCS Manager Release 1.4 and earlier and is dropped when received on the other uplink ports. In Cisco UCS Manager Release 2.0, the incoming broadcast traffic is pinned on a per-VLAN basis, depending on uplink port VLAN membership.
  • IGMP multicast groups are pinned based on IGMP snooping; each group is pinned to exactly one uplink port (see the sketch after this list).
  • Server-to-server multicast traffic is locally switched.
  • RPF and deja-vu checks also apply to multicast traffic.
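
As a minimal illustration of the group-to-uplink pinning mentioned above, the sketch below maps each IGMP-learned group to exactly one uplink. The hash-based selection is purely an assumption made for the example; the actual selection mechanism is not described here.

# Illustrative only: pin each IGMP-learned multicast group to exactly one uplink.
import zlib

def pin_multicast_group(group_ip: str, uplinks: list[str]) -> str:
    """Deterministically map a multicast group to a single uplink port (hash is an assumption)."""
    return uplinks[zlib.crc32(group_ip.encode()) % len(uplinks)]

uplinks = ["Ethernet1/15", "Ethernet1/16"]
print(pin_multicast_group("239.1.1.10", uplinks))   # always the same uplink for this group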

End Host Mode Issues

In end-host mode, Cisco UCS presents itself to the external Ethernet network as an end host; the external LAN sees the Cisco UCS Fabric Interconnect as an end host with multiple adapters. Fabric interconnects running Cisco UCS Manager Release 1.4 or earlier in end-host mode follow certain forwarding rules for handling unicast, multicast, and broadcast traffic. End-host mode offers these main features:
  • Spanning Tree Protocol is not run on the uplink ports
  • MAC address learning occurs only on the server ports and appliance ports
  • MAC address aging is not supported; MAC address changes are fully supported
  • Active-active links are used regardless of the number of uplink switches
  • The solution is highly scalable because the control plane is not occupied
  • All uplink ports connect to the same Layer 2 cloud

A single Ethernet uplink port (or PortChannel) on each fabric interconnect is chosen to be the broadcast and multicast traffic receiver for all VLANs, and incoming broadcast and multicast traffic is dropped on the other uplinks. This port is called the G-pinned port: it is the designated interface for receiving multicast and broadcast traffic in end-host mode, it is selected randomly by the system, and it is not configurable. If the G-pinned port goes down, another uplink is designated automatically. To view the elected G-pinned port, use the following commands:
UCS-A# connect nxos
UCS-A(nxos)# show platform software enm internal info global | grep -A 6 'Global Params'
Other Global Params:
broadcast-if 0x88c1f04(Ethernet1/16)
multicast-if 0x88c1f04(Ethernet1/16)
ip_multicast-if 0x88c1f04(Ethernet1/16)
end-host-mode: Enabled

As a result of this behavior, the only network that will operate properly is the network to which the G-pinned port is connected. For example, if the G-pinned port is on the production network, any blade with a virtual network interface card (vNIC) in backup or public VLANs will experience problems. vNICs in those VLANs will not receive any Address Resolution Protocol (ARP) broadcast or multicast traffic from upstream.
For such a network scenario, the recommendation in Cisco UCS Manager Release 1.4 and earlier is to change to switch mode. In switch mode, spanning tree (Rapid Per-VLAN Spanning Tree Plus [PVRST+]) runs on the uplinks, and broadcast and multicast traffic is handled and forwarded accordingly. Note that in switch mode the fabric interconnects operate like traditional Layer 2 switches, with spanning tree and MAC address learning enabled on the uplinks.

In Cisco UCS Manager Release 2.0, Cisco UCS provides the flexibility to deploy nondisjoint and disjoint Layer 2 upstream networks in end-host mode. Topological simplification is achieved with end-host mode without the need to turn on switch mode in Layer 2 disjoint network deployments. Cisco UCS Manager Release 2.0 enables the following functions in end-host mode:
  • Capability to selectively assign VLANs to uplinks
  • Pinning decision based on uplink and vNIC VLAN membership
  • Allocation of a designated broadcast and multicast traffic receiver for each VLAN rather than on a global basis (see the sketch below)
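
The contrast between the global G-pinned port of Release 1.4 and the per-VLAN designated receiver of Release 2.0 can be pictured with the sketch below. Both selection functions are illustrative assumptions; in UCS the receiver is chosen by the system and is not configurable.

# Conceptual contrast: one global G-pinned port (1.4) vs. one designated receiver per VLAN (2.0).
def designated_receiver_1_4(uplinks):
    """One uplink receives broadcast/multicast for ALL VLANs (the G-pinned port)."""
    return uplinks[0]                        # system-chosen in reality; index 0 is arbitrary here

def designated_receivers_2_0(vlan_membership):
    """One broadcast/multicast receiver per VLAN, chosen only among uplinks carrying that VLAN."""
    receivers = {}
    for uplink, vlans in vlan_membership.items():
        for vlan in vlans:
            receivers.setdefault(vlan, uplink)   # first carrying uplink wins in this sketch
    return receivers

vlan_membership = {"Po1-production": {10, 20}, "Po2-backup": {30}}
print(designated_receiver_1_4(list(vlan_membership)))   # 'Po1-production' receives for every VLAN
print(designated_receivers_2_0(vlan_membership))        # {10: 'Po1-production', 20: 'Po1-production', 30: 'Po2-backup'}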

Layer 2 Disjoint Upstream Packet Forwarding in End-Host Mode

Server links (vNICs on the blades) are associated with a single uplink port (which may also be a PortChannel). This process is called pinning, and the selected external interface is called a pinned uplink port. The process of pinning can be statically configured (when the vNIC is defined), or dynamically configured by the system.

In Cisco UCS Manager Release 2.0, VLAN membership on the uplinks is taken into account during the dynamic pinning process. The VLANs assigned to a vNIC are used to find a matching uplink; if no single uplink carries all the VLANs associated with a vNIC, pinning fails.
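
The VLAN-aware pinning decision amounts to a subset test, as in the sketch below; the helper name and data layout are hypothetical and only mirror the rule just stated.

# Illustrative VLAN-aware pinning: a vNIC can be pinned only to an uplink
# that carries every VLAN assigned to that vNIC.
def pick_uplink(vnic_vlans: set[int], uplink_vlans: dict[str, set[int]]) -> str:
    for uplink, vlans in uplink_vlans.items():
        if vnic_vlans <= vlans:              # all of the vNIC's VLANs are present on this uplink
            return uplink
    raise RuntimeError("pinning failure: no uplink carries all VLANs of this vNIC")

uplinks = {"Po1-production": {10, 20}, "Po2-backup": {30, 40}}
print(pick_uplink({10, 20}, uplinks))        # -> 'Po1-production'
# pick_uplink({10, 30}, uplinks) would raise: VLANs 10 and 30 live on disjoint uplinks.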

The traffic forwarding behavior differs from that in Cisco UCS Manager Release 1.4 and earlier in the way that the incoming broadcast and multicast traffic is handled. A designated receiver is chosen for each VLAN, rather than globally as in Cisco UCS Manager Release 1.4 and earlier.

The Cisco UCS Virtual Interface Card (VIC) or Cisco UCS VIC 1280 is required if a virtualized environment needs individual virtual machines to talk to different upstream Layer 2 domains. Multiple vNICs must be created in the service profile in Cisco UCS, and the vNICs must be assigned to different VMware virtual switches (vSwitches) or be given different uplink port profiles (Cisco Nexus 1000V Switch).

Note: The use of overlapping VLAN numbers in the upstream disjoint networks is not supported.

Fibre Channel Connectivity

Fibre Channel connectivity has historically used a different high-availability model than Ethernet connectivity. Most Fibre Channel installations use two separate SANs (normally called SAN-A and SAN-B) built with two different sets of Fibre Channel switches. Each host and storage array connects to both SANs using two separate HBAs. High availability is achieved at the application level by running multipathing software that balances the traffic across the two SANs in either active/active or active/standby mode. UCS supports this model by having two separate Fabric Interconnects, two separate Fabric Extenders, and dual-port CNAs on the blades. One Fabric Interconnect, with all the Fabric Extenders connected to it, belongs to SAN-A; the other Fabric Interconnect, with all the Fabric Extenders connected to it, belongs to SAN-B. The two SANs are kept fully separated, as in the classical Fibre Channel model. As in the Ethernet case, each Fabric Interconnect is capable of presenting itself either as an FC switch or as an FC host (NPV mode).
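
The dual-fabric model can be summarized with a small sketch of host-side multipathing across the two SANs. This is purely illustrative: path selection is really performed by the operating system's multipathing software, and the policy names are assumptions.

# Illustrative host multipathing across the two independent SANs.
def pick_paths(paths, policy="active-active"):
    """paths: list of (fabric, vhba, operational) tuples."""
    usable = [p for p in paths if p[2]]
    if policy == "active-active":
        return usable                        # balance I/O across SAN-A and SAN-B
    # active-standby: prefer SAN-A, use SAN-B only if SAN-A offers no usable path
    primary = [p for p in usable if p[0] == "SAN-A"]
    return primary or [p for p in usable if p[0] == "SAN-B"]

paths = [("SAN-A", "vHBA0", True), ("SAN-B", "vHBA1", True)]
print(pick_paths(paths))                              # both paths used in active/active
print(pick_paths(paths, policy="active-standby"))     # only the SAN-A path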

Fabric Interconnect as FC Switch

This consists of running the FC switching software on the Fabric Interconnect and using E_Ports (Inter-Switch Links) to connect to the FC backbones. Unfortunately, this implies assigning an FC domain_ID to each UCS, and since the number of domain_IDs in a fabric is typically limited to 64, it is not a scalable solution. Some storage manufacturers support a number of domains much smaller than 64, which further limits the applicability of this solution.

Fabric Interconnect as a Host

This solution is based on a concept that has recently been added to the FC standard, known as NPIV (N_Port ID Virtualization): a Fibre Channel facility that allows multiple N_Port IDs (aka FC_IDs) to share a single physical N_Port. The term NPIV is used when this feature is implemented on the host, for example to allow multiple virtual machines to share the same FC connection. The term NPV (N_Port Virtualization) is used when this feature is implemented in an external switch that aggregates multiple N_Ports into one or more uplinks. An NPV box behaves as an NPIV-based HBA to the core Fibre Channel switches. According to these definitions, each Fabric Interconnect can be configured in NPV mode, i.e.:

  • Each Fabric Interconnect presents itself to the FC network as a host, i.e., it uses an N_Port (Node Port).
  • The N_Port on the Fabric Interconnect is connected to an F_Port (Fabric Port) on the Fibre Channel Network.
  • The Fabric Interconnect performs the first FLOGI to bring up the link between the N_Port and the F_Port.
  • The FLOGIs received by the Fabric Interconnect from the servers' adapters are translated into FDISCs according to the NPIV standard (see the sketch below).
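
A stripped-down model of this NPV behavior is sketched below: the Fabric Interconnect performs a single FLOGI of its own and then converts each server FLOGI into an FDISC on the already-logged-in uplink, so it never consumes a domain_ID. Class names, the WWPNs, and the FC_ID layout are simplified illustrations, not the real protocol machinery.

# Conceptual NPV model (illustrative only).
class CoreFCSwitch:
    """Stand-in for the upstream fabric switch: hands out FC_IDs on FLOGI/FDISC."""
    def __init__(self, domain_id=0x22):
        self.domain_id = domain_id
        self.next_port = 0

    def _assign_fcid(self):
        self.next_port += 1
        return (self.domain_id << 16) | self.next_port    # simplified 24-bit FC_ID: domain | port

    def flogi(self, wwpn):
        return self._assign_fcid()

    def fdisc(self, wwpn):
        return self._assign_fcid()           # additional N_Port IDs share the same physical link

class NPVFabricInterconnect:
    def __init__(self, core):
        self.core = core
        self.uplink_fcid = None
        self.server_fcids = {}               # server WWPN -> FC_ID assigned by the core switch

    def bring_up_uplink(self, fi_wwpn):
        self.uplink_fcid = self.core.flogi(fi_wwpn)       # first FLOGI brings up the N_Port-to-F_Port link

    def server_flogi(self, server_wwpn):
        assert self.uplink_fcid is not None, "uplink FLOGI must complete first"
        fcid = self.core.fdisc(server_wwpn)               # server FLOGI translated to FDISC (NPIV)
        self.server_fcids[server_wwpn] = fcid
        return fcid

fi = NPVFabricInterconnect(CoreFCSwitch())
fi.bring_up_uplink("20:00:00:25:b5:aa:00:01")             # hypothetical WWPN
print(hex(fi.server_flogi("20:00:00:25:b5:bb:00:02")))    # e.g. 0x220002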

This eliminates the scalability issue, since it does not require assigning an FC domain_ID to each Fabric Interconnect. It also greatly simplifies interoperability, since multivendor interoperability in FC is much better between an N_Port and an F_Port than between two E_Ports. Finally, it guarantees the same high availability present today in a pure FC installation by fully preserving the dual-fabric model. Value-added features that can be used in NPV mode are F_Port Trunking and F_Port Channeling:

  • F_Port Channeling is similar to EtherChannel, but it applies to FC: it is the bundling of multiple physical interfaces into one logical high-bandwidth link. F_Port Channeling provides higher bandwidth, increased link redundancy, and load balancing between a Fabric Interconnect and an FC switch.
  • F_Port Trunking allows a single F_Port to carry the traffic of multiple VSANs, according to the FC standards.