Tag: Network Virtualization


Hyper-V Network Virtualization: NVGRE Offloading

At the moment I spend a lot of time working with Hyper-V Network Virtualization in Hyper-V and System Center Virtual Machine Manager, as well as with the new Network Virtualization Gateway. I am also creating some architecture design references for hosting providers which are going to use Hyper-V Network Virtualization and SMB as storage. If you are going for any kind of network virtualization (Hyper-V Network Virtualization with NVGRE, or VXLAN), you want to make sure you can offload the encapsulated traffic to the network adapter.

The great news here is that the Mellanox ConnectX-3 Pro not only offers RDMA (RoCE), which is used for SMB Direct, but also offers hardware offloads for NVGRE- and VXLAN-encapsulated traffic. This should improve the performance of Network Virtualization dramatically.
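
To verify on the Hyper-V host that the NIC really exposes NVGRE task offload, you can query it with the NetAdapter PowerShell module (Windows Server 2012 R2). This is just a quick sketch; the adapter name "Ethernet 1" is a placeholder for your own interface:

    # Check whether the NIC reports NVGRE (encapsulated packet) task offload support
    Get-NetAdapterEncapsulatedPacketTaskOffload -Name "Ethernet 1"

    # Enable it if the adapter supports it but it is currently disabled
    Enable-NetAdapterEncapsulatedPacketTaskOffload -Name "Ethernet 1"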

NVGRE Offloading

More information on the Mellanox ConnectX-3 Pro:

ConnectX-3 Pro 10/40/56GbE adapter cards with hardware offload engines for Overlay Networks (“Tunneling”) provide the highest-performing and most flexible interconnect solution for PCI Express Gen3 servers used in public and private clouds, enterprise data centers, and high-performance computing.

Public and private cloud clustered databases, parallel processing, transactional services, and high-performance embedded I/O applications will achieve significant performance improvements resulting in reduced completion time and lower cost per operation.

Mellanox ConnectX-3 Pro

Benefits:

  • 10/40/56Gb/s connectivity for servers and storage
  • World-class cluster, network, and storage performance
  • Cutting edge performance in virtualized overlay networks (VXLAN and NVGRE)
  • Guaranteed bandwidth and low-latency services
  • I/O consolidation
  • Virtualization acceleration
  • Power efficient
  • Scales to tens-of-thousands of nodes

Key features:

  • 1us MPI ping latency
  • Up to 40/56GbE per port
  • Single- and Dual-Port options available
  • PCI Express 3.0 (up to 8GT/s)
  • CPU offload of transport operations
  • Application offload
  • Precision Clock Synchronization
  • HW Offloads for NVGRE and VXLAN encapsulated traffic
  • End-to-end QoS and congestion control
  • Hardware-based I/O virtualization
  • RoHS-R6

Virtualized Overlay Networks — Infrastructure as a Service (IaaS) cloud demands that data centers host and serve multiple tenants, each with their own isolated network domain over a shared network infrastructure. To achieve maximum efficiency, data center operators are creating overlay networks that carry traffic from individual Virtual Machines (VMs) in encapsulated formats such as NVGRE and VXLAN over a logical “tunnel,” thereby decoupling the workload’s location from its network address. Overlay Network architecture introduces an additional layer of packet processing at the hypervisor level, adding and removing protocol headers for the encapsulated traffic. The new encapsulation prevents many of the traditional “offloading” capabilities (e.g. checksum, TSO) from being performed at the NIC. ConnectX-3 Pro effectively addresses the increasing demand for an overlay network, enabling superior performance by introducing advanced NVGRE and VXLAN hardware offload engines that enable the traditional offloads to be performed on the encapsulated traffic. With ConnectX-3 Pro, data center operators can decouple the overlay network layer from the physical NIC performance, thus achieving native performance in the new network architecture.

I/O Virtualization — ConnectX-3 Pro SR-IOV technology provides dedicated adapter resources and guaranteed isolation and protection for virtual machines (VMs) within the server. I/O virtualization with ConnectX-3 Pro gives data center managers better server utilization while reducing cost, power, and cable complexity.

RDMA over Converged Ethernet — ConnectX-3 Pro, utilizing IBTA RoCE technology, delivers low latency and high performance over Ethernet networks. Leveraging Data Center Bridging capabilities, RoCE provides efficient low-latency RDMA services over Layer 2 Ethernet. With link-level interoperability in existing Ethernet infrastructure, network administrators can leverage existing data center fabric management solutions.
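
If you want to check that RoCE is actually being picked up for SMB Direct on the host, a few standard PowerShell cmdlets are enough. This is only a sketch of the kind of checks I run, nothing Mellanox-specific:

    # List adapters that expose RDMA and whether it is enabled
    Get-NetAdapterRdma

    # Verify that the SMB client sees the interfaces as RDMA capable
    Get-SmbClientNetworkInterface | Where-Object { $_.RdmaCapable }

    # On an active SMB 3.0 session, confirm the connections are using RDMA
    Get-SmbMultichannelConnection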

Sockets Acceleration — Applications utilizing TCP/UDP/IP transport can achieve industry-leading throughput over 10/40/56GbE. The hardware-based stateless offload engines in ConnectX-3 Pro reduce the CPU overhead of IP packet transport. Socket acceleration software further increases performance for latency-sensitive applications.

Storage Acceleration — A consolidated compute and storage network achieves significant cost-performance advantages over multi-fabric networks. Standard block and file access protocols can leverage Ethernet or RDMA for high-performance storage access.

Software Support – All Mellanox adapter cards are supported by Windows, Linux distributions, VMware, FreeBSD, Ubuntu, and Citrix XenServer. ConnectX-3 Pro adapters support OpenFabrics-based RDMA protocols and software and are compatible with configuration and management tools from OEMs and operating system vendors.

 



Hyper-V Network Virtualization Gateway VM Network Connectivity

Connect Microsoft Hyper-V Network Virtualization Gateway in System Center Virtual Machine Manager

If you are using Hyper-V Network Virtualization, you can use the Microsoft Hyper-V Network Virtualization Gateway to let your virtual machines leave their VM Networks or to connect into these virtual networks from outside. If you have deployed the Hyper-V Network Virtualization Gateway, which is basically a virtual machine you can deploy with a Service Template inside SCVMM, you then have to connect this Gateway to System Center Virtual Machine Manager.

Microsoft Hyper-V Network Virtualization Gateway Service Template

First, open the SCVMM console, navigate to Fabric, and add a new Network Service. Enter a name for the Network Service.

Microsoft Hyper-V Network Virtualization Gateway in Virtual Machine Manager

Choose the Microsoft Windows Server Gateway as the model.

Microsoft Windows Server Gateway

Set up the Run As account for the connection and enter the connection string for the Hyper-V Network Virtualization Gateway.

Hyper-V Network Virtualization Gateway connection string
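
For the Windows Server Gateway, the connection string follows the pattern used by the VMM gateway provider; the host, VM, and switch names below are placeholders you have to replace with your own:

    VMHost=gwhost01.domain.local;GatewayVM=gwvm01.domain.local;BackendSwitch=Backend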

If your Gateway is not domain-joined, you have to use certificates for the communication between Virtual Machine Manager and the Hyper-V Network Virtualization Gateway.

After that you can validate and test the network service configuration of the Gateway.
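
The same steps can also be scripted with the VMM PowerShell module. The following is only a sketch; the account, provider, and host group names are placeholders, and you should double-check the exact provider name with Get-SCConfigurationProvider on your VMM server:

    # Collect the objects the network service needs (names are examples)
    $runAs     = Get-SCRunAsAccount -Name "GatewayAdmin"
    $provider  = Get-SCConfigurationProvider | Where-Object { $_.Name -like "*Windows Server Gateway*" }
    $hostGroup = Get-SCVMHostGroup -Name "Gateway Hosts"

    # Add the gateway as a network service, equivalent to the wizard above
    Add-SCNetworkService -Name "HNV Gateway" `
        -RunAsAccount $runAs `
        -ConfigurationProvider $provider `
        -VMHostGroup $hostGroup `
        -ConnectionString "VMHost=gwhost01.domain.local;GatewayVM=gwvm01.domain.local;BackendSwitch=Backend"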

Validate Hyper-V Network Virtualization Gateway

After you have added the Gateway you have to do a final configuration step in the properties of the Gateway.

Hyper-V Network Virtualization Gateway

You have to map the front-end network adapters (Internet, Corpnet) to the corresponding NICs, and the back end, which is the tenant network, to its specific NIC.

Hyper-V Network Virtualization Gateway Configuration

 

After that you can finally add connectivity to your VM Networks which are using Hyper-V Network Virtualization, such as Site-to-Site VPN, routing, or NAT.

Hyper-V Network Virtualization Gateway VM Network Connectivity
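
These connections can also be created from PowerShell. The cmdlet and parameter names below are written from memory and should be treated as a sketch only; the VM network, gateway, and IP pool names are placeholders:

    # Attach the gateway to a tenant VM network
    $vmNetwork = Get-SCVMNetwork -Name "Tenant01"
    $gateway   = Get-SCNetworkGateway -Name "HNV Gateway"
    $vmNetGw   = Add-SCVMNetworkGateway -Name "Tenant01_GW" -VMNetwork $vmNetwork -NetworkGateway $gateway

    # Add NAT for outbound connectivity (Add-SCVPNConnection would be used for Site-to-Site VPN instead)
    $externalPool = Get-SCStaticIPAddressPool -Name "Frontend Pool"
    Add-SCNATConnection -Name "Tenant01_NAT" -VMNetworkGateway $vmNetGw -ExternalIPPool $externalPool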

 



Iron Networks Announces Windows Server 2012 Hyper-V Network Virtualization Gateway Appliance

Windows Server 2012 Logo

Finally, some months after the launch of Windows Server 2012 and System Center 2012 SP1, Iron Networks announced the Windows Server 2012 Network Virtualization (NVGRE) Gateway Appliance for System Center 2012 SP1 Virtual Machine Manager at the Microsoft Management Summit 2013. The Network Virtualization Gateway Appliance allows you to connect the Software Defined Networks (SDN) which you have created with Windows Server 2012 Network Virtualization to physical hardware or other networks.

Network Virtualization

Windows Server 2012 Hyper-V Network Virtualization provides virtual networks to virtual machines, similarly to how server virtualization (hypervisor) provides virtual machines to the operating system. Network virtualization decouples and isolates virtual networks from the physical network infrastructure and removes the constraints of VLAN and hierarchical IP address assignment from virtual machine provisioning. This flexibility makes it easy for customers to move workloads to IaaS clouds and adds efficiency for hosters and datacenter administrators to manage their infrastructure, while maintaining the necessary multi-tenant isolation, security requirements, and supporting overlapping virtual machine IP addresses.
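
To make the decoupling a bit more concrete: under the hood, every customer address (CA) of a VM is mapped to a provider address (PA) on the physical network, per virtual subnet. VMM maintains these NVGRE policy records for you, but on a Windows Server 2012 host you can inspect them and, purely for illustration, create one by hand. The addresses, virtual subnet ID, and MAC address below are made up:

    # Show the existing CA-to-PA mappings (the HNV policy) on this host
    Get-NetVirtualizationLookupRecord

    # Purely illustrative: map customer address 10.0.0.5 in virtual subnet 5001
    # to provider address 192.168.10.21 and encapsulate the traffic with NVGRE
    New-NetVirtualizationLookupRecord -CustomerAddress "10.0.0.5" `
        -ProviderAddress "192.168.10.21" `
        -VirtualSubnetID 5001 `
        -MACAddress "00155D010205" `
        -Rule "TranslationMethodEncap"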

“Microsoft Windows Server 2012 Hyper-V Network Virtualization provides greater freedom for workload placements,” said Brian Hillger, director, Server and Tools Marketing, Microsoft. “Virtual machine workload placement is no longer limited by the IP address assignment or VLAN isolation requirements of the physical network because it is enforced within Hyper-V hosts, based on software-defined, multitenant virtualization policies.”

 

Hybrid Cloud Diagram

You can get more information about the Iron Networks Announcement here: Iron Networks Announces Windows Server 2012 Network Virtualization Gateway Appliance

 



SCVMM 2012 SP1 Error: Windows network virtualization is not enabled on a host NIC available for placement

System Center Logo

If you are running Windows Server 2012 Hyper-V hosts, managing them with System Center Virtual Machine Manager 2012 SP1, and using VM Networks with Network Virtualization, you can get the following error when you try to deploy a new Virtual Machine to the Hyper-V host.

“Windows network virtualization is not enabled on a host NIC available for placement”

Windows network virtualization is not enabled on a host NIC available for placement

This happens if:

  • You create a Logical Network with Network Virtualization enabled
  • You add the Logical Network to a Host Adapter or a Logical Switch on the Hyper-V host
  • You create an isolated VM Network
  • You deploy a new Virtual Machine to this VM Network.

Resolution:

  • Enable the Windows Network Virtualization Filter driver on the Hyper-V host

Windows Network Virtualization Filter driver

If you are running this on a NIC team, you have to enable the Windows Network Virtualization Filter driver on the team's network adapter instead, as shown in the PowerShell sketch below.
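
A minimal sketch of the fix in PowerShell, assuming the physical adapter is called "Ethernet 1" and the team interface "TenantTeam" (replace the names with your own):

    # Check on which adapters the Windows Network Virtualization filter is bound
    Get-NetAdapterBinding -ComponentID "ms_netwnv"

    # Enable the filter driver on the physical NIC used by the logical switch
    Enable-NetAdapterBinding -Name "Ethernet 1" -ComponentID "ms_netwnv"

    # If the host uses a NIC team, enable it on the team interface instead
    Enable-NetAdapterBinding -Name "TenantTeam" -ComponentID "ms_netwnv"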