VMware® ESX®/vSphere®, Linux® XEN®, and Microsoft® Hyper-V® are well-known virtualization solutions that enable multiple Virtual Machines (VMs) to run on a single server under the coordination of a hypervisor. A VM is an instantiation of a logical server that behaves as a standalone server, but it shares the hardware and network resources with the other VMs. The hypervisor implements VM-to-VM communication using a “software switch” module, which creates a different model compared to standalone servers.
Standalone servers connect to one or more Ethernet switches through dedicated switch ports. Network policies applied to these Ethernet switch ports are effectively applied to the individual standalone servers, and the network technician can easily control and troubleshoot network settings for them. A logical server running in a VM, however, connects to the software switch module in the hypervisor, which in turn connects to one or more Ethernet switches. Network policies applied to the Ethernet switch ports are not very effective in this case, since they are applied to all the VMs (i.e., logical servers) and cannot be differentiated per VM.
Attempts to specify such policies in terms of source MAC addresses are also not effective, since the MAC addresses used by VMs are assigned by the virtualization software and can change over time. Forcing the VMs to always send traffic via the physical port where some kind of policy could be set up does not work well either, since the switch cannot send a frame back out of the same port on which it received that frame. Other means have to be invented to solve this problem, such as the ones below.
SR-IOV (Single Root Input/Output Virtualization)
SR-IOV is a specification that allows a PCIe device to appear to be multiple separate physical PCIe devices. SR-IOV works by introducing the idea of physical functions (PFs) and virtual functions (VFs). Physical functions (PFs) are full-featured PCIe functions; virtual functions (VFs) are “lightweight” functions that lack configuration resources. SR-IOV enables network traffic to bypass the software switch layer of the hypervisor virtualization stack. Because a VF is assigned to a VM, the network traffic flows directly between the VF and the VM. As a result, the I/O overhead in the software emulation layer is reduced, and network performance is nearly the same as in non-virtualized environments.
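As a concrete illustration, here is a minimal Python sketch of how VFs are typically created on a Linux host through the kernel's sysfs interface (sriov_totalvfs / sriov_numvfs). The interface name eth0 and the VF count are assumptions for the example; in a real deployment the resulting VFs are then handed to VMs via PCI passthrough.

```python
# Minimal sketch: creating SR-IOV virtual functions on a Linux host via sysfs.
# Assumes an SR-IOV capable NIC exposed as "eth0" (adjust for your system) and root rights.
from pathlib import Path

def enable_vfs(iface: str, num_vfs: int) -> None:
    device = Path(f"/sys/class/net/{iface}/device")
    total = int((device / "sriov_totalvfs").read_text())   # maximum VFs the PF supports
    if num_vfs > total:
        raise ValueError(f"{iface} supports at most {total} VFs")
    # Writing to sriov_numvfs asks the PF driver to spawn that many VFs; each VF then
    # appears as its own PCIe function. (If VFs are already enabled, write 0 first.)
    (device / "sriov_numvfs").write_text(str(num_vfs))

if __name__ == "__main__":
    enable_vfs("eth0", 4)
```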
Edge Virtual Bridging (EVB) vs Bridge Port Extension vs VN-Link
Tagging = inserting some extra information into an existing protocol frame
Encapsulation = wrapping the original frame in a whole new protocol
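To make the difference concrete, here is a small byte-level sketch (my own illustration, not reference code from any standard): tagging inserts a 4-byte 802.1Q header into the existing Ethernet frame right after the source MAC, while encapsulation (VXLAN-style here) wraps the whole original frame inside a completely new header stack. The VLAN ID, VNI, and dummy outer headers are illustrative values.

```python
import struct

def tag_8021q(frame: bytes, vlan_id: int) -> bytes:
    """Tagging: insert a 4-byte 802.1Q tag into the existing frame, right after
    the destination and source MAC addresses (offset 12)."""
    tag = struct.pack("!HH", 0x8100, vlan_id & 0x0FFF)  # TPID + TCI (PCP/DEI = 0)
    return frame[:12] + tag + frame[12:]

def encapsulate_vxlan(frame: bytes, vni: int, outer_headers: bytes) -> bytes:
    """Encapsulation: wrap the whole original frame in a new protocol.
    'outer_headers' stands in for the new outer Ethernet/IP/UDP headers."""
    vxlan_header = struct.pack("!II", 0x08000000, vni << 8)  # I-flag + 24-bit VNI
    return outer_headers + vxlan_header + frame  # original frame untouched inside

original = bytes(14) + b"payload"                 # dummy Ethernet frame
tagged = tag_8021q(original, vlan_id=100)         # 4 bytes longer, same protocol
encapped = encapsulate_vxlan(original, vni=5000, outer_headers=bytes(42))
```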
EVB and Bridge Port Extension are two IEEE standards that give non-physical or non-directly attached devices the same kind of visibility they would have on a physical network.
EVB is IEEE 802.1Qbg and it is based on VEPA. VEPA stands for Virtual Ethernet Port Aggregator and it is a protocol developed by HP to provide consistent network control and monitoring for Virtual Machines (of any type). VEPA comes in two major forms: a standard mode, which requires minor software updates to the VEB functionality as well as upstream switch firmware updates, and a multi-channel mode, which requires additional intelligence on the upstream switch.
Bridge Port Extension is the former 802.1Qbh standard renamed to 802.1BR. It is based on the VN-Tag standard proposed by Cisco as a potential solution to both of the problems: network awareness and control of VMs, and access-layer extension without extending the management and STP domains. I imagine VN-Tag as a regular FEX solution in the VM environment; however, the 802.1BR standard can do a lot more. In fact, VN-Tag was first used in FEX, and Cisco then realized it could be used in the virtual environment to solve virtual access-layer problems.
Then there is another solution from Cisco called VN-Link. When you google VN-Link, the Cisco Nexus 1000V is the first thing that pops up. VN-Link can be implemented in two ways:
- As a Cisco DVS running entirely in software within the hypervisor layer (Cisco Nexus 1000V Series)
- With a new class of devices that support network interface virtualization (NIV) and eliminate the need for software-based switching within hypervisors. This approach utilizes the VN-Tag
As you can see from the sections above, it is quite a mess. In summary, there are many proprietary solutions to the VM switching problem: HP VEPA and Cisco VN-Link. Cisco VN-Link can be just a virtual distributed switch in the hypervisor (Cisco Nexus 1000V), or it can be based on the Cisco VN-Tag architecture, where switching is completely offloaded from the hypervisor to a hardware-capable layer (the Cisco Palo adapter is capable of inserting the VN-Tag; then you just need a physical access-layer switch implementing NIV data-plane logic to understand the Palo adapter tagging). Then the IEEE came along and is trying to standardize all this mess: 802.1Qbg is based on VEPA, and 802.1BR (formerly 802.1Qbh) is based on VN-Tag. I will describe all these technologies in a separate article, „Access layer in Virtual Environment“, as it is quite a mess, haha 😉 But before I do that, let's take a look at the basic switching technology in VMware and why we need to do anything about it.
Effect of Virtualization
Virtual Ethernet Bridge (VEB)
In a virtual server environment the most common way to provide Virtual Machine (VM) switching connectivity is a Virtual Ethernet Bridge (VEB); in VMware this is called a vSwitch. A VEB is basically a hypervisor-embedded virtual switch, software that acts similarly to a Layer 2 hardware switch, providing inbound/outbound and inter-VM communication. A VEB works well to aggregate multiple VMs' traffic across a set of links as well as to provide frame delivery between VMs based on MAC address. Where a VEB is lacking is network management, monitoring and security. Typically a VEB is invisible and not configurable from the network team's perspective. Additionally, any traffic handled by the VEB internally cannot be monitored or secured by the network team.
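To illustrate what that inter-VM frame delivery boils down to, here is a toy Python sketch of a VEB's forwarding logic (purely illustrative, not VMware's implementation): learn the source MAC per virtual port, deliver frames for known destinations to a single port, and flood unknown or broadcast destinations.

```python
# Toy illustration (not VMware's code): the core forwarding job of a VEB/vSwitch.
class VirtualEthernetBridge:
    def __init__(self, ports: list[int]):
        self.ports = ports                    # virtual ports (vNICs + uplinks)
        self.mac_table: dict[str, int] = {}   # learned MAC address -> virtual port

    def forward(self, in_port: int, src_mac: str, dst_mac: str) -> list[int]:
        self.mac_table[src_mac] = in_port     # learn where the sender lives
        if dst_mac in self.mac_table:
            return [self.mac_table[dst_mac]]  # known destination: deliver to one port
        return [p for p in self.ports if p != in_port]  # unknown/broadcast: flood

veb = VirtualEthernetBridge(ports=[1, 2, 3])
veb.forward(in_port=1, src_mac="00:50:56:aa:aa:aa", dst_mac="ff:ff:ff:ff:ff:ff")  # flood
veb.forward(in_port=2, src_mac="00:50:56:bb:bb:bb", dst_mac="00:50:56:aa:aa:aa")  # -> [1]
```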
Furthermore, vSwitches do not do anything special to solve the problem of virtual machine mobility; the administrator must manually make sure that the vSwitches on both the originating and target VMware ESX hosts and the upstream physical access-layer ports are consistently configured so that the migration of the virtual machine can take place without breaking network policies or basic connectivity. In a virtualized server environment, in which virtual machine networking is performed through vSwitches, the configuration of physical access-layer ports as trunk ports is an unavoidable requirement if mobility needs to be supported.
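The consistency requirement can be pictured with a small, purely illustrative sketch; the attributes below (port-group name, VLAN ID, trunk-uplink flag) are my own assumptions about what has to match between the two hosts, not an actual VMware data model.

```python
# Illustrative sketch only: the kind of consistency check an administrator performs
# (usually by hand) before migrating a VM between two ESX hosts.
from dataclasses import dataclass

@dataclass(frozen=True)
class PortGroup:
    name: str
    vlan_id: int
    trunk_uplink: bool   # is the upstream physical access-layer port a trunk?

def migration_safe(source: PortGroup, target: PortGroup) -> bool:
    # The VM keeps its network identity, so both hosts must expose an identically
    # configured port group and the physical ports must carry its VLAN.
    return source == target and source.trunk_uplink and target.trunk_uplink

src = PortGroup("VM-Network", vlan_id=10, trunk_uplink=True)
dst = PortGroup("VM-Network", vlan_id=10, trunk_uplink=True)
assert migration_safe(src, dst)
```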
To overcome the limitations of the embedded vSwitch, VMware and Cisco jointly developed the concept of a distributed virtual switch (DVS), which essentially decouples the control and data planes of the embedded switch and allows multiple, independent vSwitches (data planes) to be managed by a centralized management system (control plane). VMware has branded its own implementation of DVS as the vNetwork Distributed Switch, and the control plane component is implemented within VMware vCenter. This approach effectively allows virtual machine administrators to move away from host-level network configuration and manage network connectivity at the VMware ESX cluster level.
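The decoupling can be sketched conceptually as follows (this is not the vCenter API, just an illustration of the split): a single control-plane object holds the port-group definitions and pushes them to every host's data plane, so the network configuration is done once at the cluster level.

```python
# Conceptual sketch of the DVS split (not the actual vCenter API).
class HostDataPlane:
    def __init__(self, host: str):
        self.host = host
        self.port_groups: dict[str, int] = {}     # port-group name -> VLAN

    def apply(self, name: str, vlan: int) -> None:
        self.port_groups[name] = vlan             # local forwarding config updated

class DistributedSwitchControlPlane:
    def __init__(self, hosts: list[HostDataPlane]):
        self.hosts = hosts

    def define_port_group(self, name: str, vlan: int) -> None:
        for host in self.hosts:                   # one definition, every host data plane
            host.apply(name, vlan)

cluster = [HostDataPlane("esx01"), HostDataPlane("esx02")]
dvs = DistributedSwitchControlPlane(cluster)
dvs.define_port_group("VM-Network", vlan=10)      # consistent on all hosts automatically
```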