
Sort Network Adapter via PowerShell

Sort Windows Network Adapter by PCI Slot via PowerShell

If you work with Windows, Windows Server or Hyper-V, you know that before Windows Server 2012, Windows named network adapters randomly. This was a huge deal if you were trying to automate the deployment of servers with multiple network adapters, and of course Hyper-V servers normally have multiple network adapters. In Windows Server 2012 Microsoft addressed this in different ways. First, there is CDN (Consistent Device Naming), which allows hardware vendors to expose consistent adapter names so the OS can pick them up; second, there is Hyper-V Converged Fabric, which makes our lives easier simply by requiring fewer network adapters.

Well, a lot of vendors have not integrated CDN, or you may have some older servers without CDN support. Back in May 2012, before the release of Windows Server 2012, I wrote a little Windows PowerShell script to sort network adapters in Windows Server 2008 R2 and Hyper-V Server 2008 R2 by using WMI (Configure Hyper-V Host Network Adapters Like A Boss). Now, for a Cisco UCS project, I rewrote some parts of the script to use the Windows PowerShell cmdlets in Windows Server 2012, Windows Server 2012 R2 and Hyper-V.

First, let's have a look at how you can get the PCI slot information for network adapters; luckily there is now a PowerShell cmdlet for this.
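The cmdlet in question is Get-NetAdapterHardwareInfo, which returns the PCI bus, device, function and slot of each physical network adapter:

    # List PCI bus, device, function and slot information for the physical network adapters
    Get-NetAdapterHardwareInfo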

Now let's see how you can sort the network adapters via Windows PowerShell.
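A minimal sketch, sorting by the Bus, Device and Function properties returned by Get-NetAdapterHardwareInfo:

    # Sort the physical network adapters by their PCI location (bus, then device, then function)
    Get-NetAdapterHardwareInfo | Sort-Object -Property Bus, Device, Function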

This will get you an output like this:

Sort Network Adapter via PowerShell

Let's do a little loop to automatically name them:
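Something along these lines (a sketch; the "NIC" prefix is just an example):

    # Rename the adapters to NIC1, NIC2, ... in PCI order
    $i = 1
    Get-NetAdapterHardwareInfo | Sort-Object -Property Bus, Device, Function | ForEach-Object {
        Rename-NetAdapter -Name $_.Name -NewName ("NIC" + $i)
        $i++
    }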

So this renames all the network adapters to NIC1, NIC2, NIC3,…

So let's wrap this in a PowerShell function:
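Here is a minimal sketch of such a function; the parameter names are illustrative and do not necessarily match the published script one-to-one:

    function Sort-NetworkAdapter {
        # Sketch: rename physical NICs based on their PCI location.
        # $Prefix and $StartingNumber are example parameter names.
        param (
            [string]$Prefix = "NIC",
            [int]$StartingNumber = 1
        )
        $i = $StartingNumber
        Get-NetAdapterHardwareInfo | Sort-Object -Property Bus, Device, Function | ForEach-Object {
            Rename-NetAdapter -Name $_.Name -NewName ($Prefix + $i)
            $i++
        }
    }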

Now you can run this by using Sort-NetworkAdapter, for example:
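    # Rename all physical NICs to NIC1, NIC2, ... (using the sketched parameters above)
    Sort-NetworkAdapter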

or
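    # Or with a different prefix, e.g. vNIC1, vNIC2, ... (parameter name from the sketch above)
    Sort-NetworkAdapter -Prefix vNIC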

You can also get this script from the Microsoft TechNet Gallery or Script Center.



ConnectX-3 Pro NVGRE Offloading RDMA

Hyper-V Network Virtualization: NVGRE Offloading

At the moment I spend a lot of time working with Hyper-V Network Virtualization in Hyper-V, System Center Virtual Machine Manager and the new Network Virtualization Gateway. I am also creating some architecture design references for hosting providers which are going to use Hyper-V Network Virtualization and SMB as storage. If you are going for any kind of network virtualization (Hyper-V Network Virtualization or VXLAN), you want to make sure you can offload the NVGRE or VXLAN encapsulated traffic to the network adapter.

Well the great news here is that the Mellanox ConnectX-3 Pro not only offers RDMA (RoCE), which is used for SMB Direct, the adapter also offers hardware offloads for NVGRE and VXLAN encapsulated traffic. This is great and should improve the performance of Network Virtualization dramatically.
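On Windows Server 2012 R2 you can check and enable this offload with the NetAdapter module cmdlets; a quick sketch (the adapter name is an example):

    # Show which adapters support/enable offloading of NVGRE (and VXLAN) encapsulated traffic
    Get-NetAdapterEncapsulatedPacketTaskOffload

    # Enable the offload on a specific adapter
    Enable-NetAdapterEncapsulatedPacketTaskOffload -Name "Ethernet 1"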

NVGRE Offloading

More information on the Mellanox ConnectX-3 Pro:

ConnectX-3 Pro 10/40/56GbE adapter cards, with hardware offload engines for Overlay Networks (“Tunneling”), provide the highest performing and most flexible interconnect solution for PCI Express Gen3 servers used in public and private clouds, enterprise data centers, and high performance computing.

Public and private cloud clustered databases, parallel processing, transactional services, and high-performance embedded I/O applications will achieve significant performance improvements resulting in reduced completion time and lower cost per operation.

Mellanox ConnectX-3 Pro

Benefits:

  • 10/40/56Gb/s connectivity for servers and storage
  • World-class cluster, network, and storage performance
  • Cutting edge performance in virtualized overlay networks (VXLAN and NVGRE)
  • Guaranteed bandwidth and low-latency services
  • I/O consolidation
  • Virtualization acceleration
  • Power efficient
  • Scales to tens-of-thousands of nodes

Key features:

  • 1us MPI ping latency
  • Up to 40/56GbE per port
  • Single- and Dual-Port options available
  • PCI Express 3.0 (up to 8GT/s)
  • CPU offload of transport operations
  • Application offload
  • Precision Clock Synchronization
  • HW Offloads for NVGRE and VXLAN encapsulated traffic
  • End-to-end QoS and congestion control
  • Hardware-based I/O virtualization
  • RoHS-R6

Virtualized Overlay Networks — Infrastructure as a Service (IaaS) cloud demands that data centers host and serve multiple tenants, each with their own isolated network domain over a shared network infrastructure. To achieve maximum efficiency, data center operators are creating overlay networks that carry traffic from individual Virtual Machines (VMs) in encapsulated formats such as NVGRE and VXLAN over a logical “tunnel,” thereby decoupling the workload’s location from its network address. Overlay Network architecture introduces an additional layer of packet processing at the hypervisor level, adding and removing protocol headers for the encapsulated traffic. The new encapsulation prevents many of the traditional “offloading” capabilities (e.g. checksum, TSO) from being performed at the NIC. ConnectX-3 Pro effectively addresses the increasing demand for an overlay network, enabling superior performance by introducing advanced NVGRE and VXLAN hardware offload engines that enable the traditional offloads to be performed on the encapsulated traffic. With ConnectX-3 Pro, data center operators can decouple the overlay network layer from the physical NIC performance, thus achieving native performance in the new network architecture.

I/O Virtualization — ConnectX-3 Pro SR-IOV technology provides dedicated adapter resources and guaranteed isolation and protection for virtual machines (VMs) within the server. I/O virtualization with ConnectX-3 Pro gives data center managers better server utilization while reducing cost, power, and cable complexity.

RDMA over Converged Ethernet — ConnectX-3 Pro utilizing IBTA RoCE technology delivers similarly low latency and high performance over Ethernet networks. Leveraging Data Center Bridging capabilities, RoCE provides efficient low latency RDMA services over Layer 2 Ethernet. With link-level interoperability in existing Ethernet infrastructure, Network Administrators can leverage existing data center fabric management solutions.

Sockets Acceleration — Applications utilizing TCP/UDP/IP transport can achieve industry-leading throughput over 10/40/56GbE. The hardware-based stateless offload engines in ConnectX-3 Pro reduce the CPU overhead of IP packet transport. Socket acceleration software further increases performance for latency-sensitive applications.

Storage Acceleration — A consolidated compute and storage network achieves significant cost-performance advantages over multi-fabric networks. Standard block and file access protocols can leverage Ethernet or RDMA for high-performance storage access.

Software Support – All Mellanox adapter cards are supported by Windows, Linux distributions, VMware, FreeBSD, Ubuntu, and Citrix XenServer. ConnectX-3 Pro adapters support OpenFabrics-based RDMA protocols and software and are compatible with configuration and management tools from OEMs and operating system vendors.

 



Hyper-V Converged Fabric with System Center 2012 SP1 – Virtual Machine Manager

System Center Logo

This blog post is part of a series of blog posts about System Center 2012 Virtual Machine Manager that I am writing together with Michel Luescher (consultant at Microsoft Switzerland).

Hyper-V Converged Fabric

Last year I already wrote a blog post about Windows Server 2012 Hyper-V Converged Fabric, or Converged Networking. Put simply, Hyper-V Converged Fabric allows you to use the same network adapters for different types of traffic. In Windows Server 2008 R2 Hyper-V we didn’t really have this capability, because network teaming relied on third-party software and Hyper-V itself didn’t offer a mature QoS solution. In other words, we had to go with what I would now call a traditional Hyper-V host design.

Traditional Design

traditional Hyper-V host

Each dedicated Hyper-V network, such as CSV communication or the Live Migration network, used its own dedicated physical network interface. These network interfaces could also be teamed with third-party software, for example from HP, Broadcom or Intel. This design is still a valid design in Windows Server 2012, but there are other configurations which are a lot more flexible.

In Windows Server 2012 you can get much more out of your network configuration. First of all, NIC Teaming is now integrated and therefore supported out of the box in Windows Server 2012. Another cool feature is the use of virtual network adapters in the Management OS (a.k.a. Parent Partition). This allows you to build a Hyper-V host with all the necessary networks (Management, Live Migration, Cluster,…) by teaming just two or more physical adapters for a virtual switch and then creating additional virtual network adapters (vNICs) for the Hyper-V Management OS.
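To give you an idea, such a setup boils down to a handful of PowerShell commands; a simplified sketch (team, switch and adapter names are examples, the detailed configuration follows further below):

    # Team two physical adapters, build a virtual switch on top of the team
    # and add vNICs for the Management OS
    New-NetLbfoTeam -Name "Team01" -TeamMembers "NIC1","NIC2"
    New-VMSwitch -Name "VMSwitch01" -NetAdapterName "Team01" -MinimumBandwidthMode Weight -AllowManagementOS $false
    Add-VMNetworkAdapter -ManagementOS -Name "Management" -SwitchName "VMSwitch01"
    Add-VMNetworkAdapter -ManagementOS -Name "LiveMigration" -SwitchName "VMSwitch01"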



Windows Server 2012 Hyper-V Converged Fabric

Windows Server 2012 RC Logo

In Windows Server 2008 R2 we had some really simple configurations and best practices for Hyper-V networking. The problem was that these configurations were not very flexible, for two main reasons: first, NIC teaming wasn’t officially supported by Microsoft, and second, there was no way to create virtual network interfaces without a third-party solution.

Here is an example of a Hyper-V 2008 R2 host design which was used in a cluster setup.

Traditional Design

traditional Hyper-V Host

Each dedicated Hyper-V network, such as CSV/Cluster communication or the Live Migration network, used its own physical network interface. The different network interfaces could also be teamed with third-party software from HP, Broadcom or Intel. This design is still a valid design in Windows Server 2012, but there are other configurations which are a lot more flexible.

Microsoft MVPs Aidan Finn and Hans Vredevoort already did some early work with Windows Server 2012 Converged Fabric, and you should definitely read their blog posts.

In Windows Server 2012 you can get much more out of your network configuration. First of all, NIC Teaming is now integrated and supported in Windows Server 2012, and another cool feature is the use of virtual network adapters in the Management OS (Host OS or Parent Partition). This allows you to create, for example, one of the following designs.

Virtual Switch and Dedicated Management Interfaces

Hyper-V Converged Fabric

This scenario has two teamed 10GbE adapters for Cluster and VM traffic.

Virtual Switch and Dedicated Teamed Management Interfaces

Hyper-V Converged Fabric

The same scenario with a teamed management interface.

Dedicated Virtual Switch for Management and VM Traffic

Hyper-V Converged Fabric

One Virtual Switch for Management and Cluster traffic and a dedicated switch for VM traffic.

One Virtual Switch for everything

Hyper-V Converged Fabric

This is my favorite design at the moment: two 10GbE adapters in one team for Virtual Machine, Cluster and Management traffic. It is a very flexible design and allows the two 10GbE adapters to be used very dynamically.

This design will also be very interesting if you use SMB 3.0 as storage for Hyper-V Virtual Machines.

FileServer and Hyper-V Cluster

 

At the moment there is not a lot of official information about which designs will be supported and which will not. You can find some information about supported designs in the TechEd North America session WSV329, Architecting Private Clouds Using Windows Server 2012, by Yigal Edery and Joshua Adams.

Configuration

Now that you have seen these designs, you may want to create such a configuration and want to know how to do this. Not everything can be done via the GUI; you have to use your Windows PowerShell skills. In this scenario I use the design with four 10GbE network adapters: two for iSCSI and two for my network connections.

  • Install the Hyper-V Role
  • Create NIC Teams
  • Create a Hyper-V Virtual Switch
  • Add new Virtual Network Adapters to the Management OS
  • Set VLANs of the Virtual Network Adapters
  • Set QoS Policies of the Virtual Network Adapters
  • Configure IP Addresses of the Virtual Network Adapters

Install Hyper-V Role

Before you can use the features of the Virtual Switch and start creating Virtual Network Adapters in the Management OS (Parent Partition), you have to install the Hyper-V role. You can do this via Server Manager or via Windows PowerShell.
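Via PowerShell this is a one-liner (the server will reboot):

    # Install the Hyper-V role including the management tools and restart the host
    Install-WindowsFeature -Name Hyper-V -IncludeManagementTools -Restart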

Create NIC Teams

Most of the time you will create a NIC team for fault tolerance and load balancing. A team can be created via Server Manager or PowerShell; of course I prefer Windows PowerShell. For a team which will not only be used for Hyper-V Virtual Machines but also for Management OS traffic, I use TransportPorts as the load balancing algorithm. If you use the team only for Virtual Machine traffic, there is an algorithm called Hyper-V Port. The teaming mode of course depends on your configuration.
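For example (a sketch; the team name, member names and teaming mode are placeholders for your environment):

    # Create a team of two 10GbE adapters; TransportPorts is used because the team
    # also carries Management OS traffic
    New-NetLbfoTeam -Name "Team01" -TeamMembers "NIC1","NIC2" -TeamingMode SwitchIndependent -LoadBalancingAlgorithm TransportPorts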

NIC Teaming

 

Create the Virtual Switch

After the team is created you have to create a new Virtual Switch. We also set the DefaultFlowMinimumBandwidthWeight to 20.
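A sketch of this step (switch and team names are examples):

    # Create the virtual switch on top of the team with weight-based bandwidth management
    New-VMSwitch -Name "VMSwitch01" -NetAdapterName "Team01" -MinimumBandwidthMode Weight -AllowManagementOS $false

    # Set the default bandwidth weight for all flows without an explicit weight
    Set-VMSwitch -Name "VMSwitch01" -DefaultFlowMinimumBandwidthWeight 20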

VM Switch

 

After you have created the Hyper-V Virtual Switch (or VM Switch), you will also find it in Hyper-V Manager.

Hyper-V Virtual Switch

Create Virtual Network Adapters for the Management OS

After you have created your Hyper-V Virtual Switch you can start adding VM Network Adapters to this Virtual Switch. We also configure the VLAN IDs and the QoS policy settings.
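A sketch of these steps (the adapter names, VLAN IDs and bandwidth weights are examples; adjust them to your design):

    # Add virtual network adapters for the Management OS
    Add-VMNetworkAdapter -ManagementOS -Name "Management" -SwitchName "VMSwitch01"
    Add-VMNetworkAdapter -ManagementOS -Name "LiveMigration" -SwitchName "VMSwitch01"
    Add-VMNetworkAdapter -ManagementOS -Name "CSV" -SwitchName "VMSwitch01"

    # Set the VLAN IDs of the virtual network adapters
    Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "Management" -Access -VlanId 10
    Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "LiveMigration" -Access -VlanId 20
    Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "CSV" -Access -VlanId 30

    # Assign QoS minimum bandwidth weights to the virtual network adapters
    Set-VMNetworkAdapter -ManagementOS -Name "Management" -MinimumBandwidthWeight 10
    Set-VMNetworkAdapter -ManagementOS -Name "LiveMigration" -MinimumBandwidthWeight 30
    Set-VMNetworkAdapter -ManagementOS -Name "CSV" -MinimumBandwidthWeight 20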

VMNetworkAdapter ManagementOS

 

Your new configuration will now look like this:

Network Connections

As you can see, the name of a new Hyper-V Virtual Ethernet Adapter is vEthernet (NetworkAdapterName). This will be important for automation tasks or for configuring IP addresses via Windows PowerShell.

Set IP Addresses

Some months ago I wrote two blog posts: the first was about how to configure your Hyper-V host network adapters like a boss, and the second was about how to replace the netsh command with Windows PowerShell. Using Windows PowerShell to configure IP addresses will save you a lot of time.
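For the vNICs created above, it looks something like this (interface aliases, addresses and prefix lengths are examples):

    # Configure the IP address and DNS server of the Management vNIC
    New-NetIPAddress -InterfaceAlias "vEthernet (Management)" -IPAddress 192.168.1.11 -PrefixLength 24 -DefaultGateway 192.168.1.1
    Set-DnsClientServerAddress -InterfaceAlias "vEthernet (Management)" -ServerAddresses 192.168.1.2

    # The other vNICs (Live Migration, CSV, ...) are configured the same way
    New-NetIPAddress -InterfaceAlias "vEthernet (LiveMigration)" -IPAddress 192.168.2.11 -PrefixLength 24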

 

There is still a lot more to come about Windows Server 2012 Hyper-V Converged Fabric in the future, but I hope this post gives you a quick insight into some of the new features of Windows Server 2012 and Hyper-V.



Windows Server 2012 NIC Naming

Windows Server 2012 RC Logo

Some weeks ago I wrote a blog post about how you can configure network adapters on a Hyper-V host via PowerShell. I mentioned that the NICs in Windows Server 2008 R2 are always named differently. Now I have some great news: in the Windows Server 2012 Release Candidate this has changed.

Windows Server 2012 NICs Server Manager

Some hours ago I installed one of my Cisco UCS C200 servers with the Windows Server 2012 Release Candidate, and I noticed the new naming of the network adapters.

Windows Server 2012 NICs

I then ran my Get-NICInformation.ps1 PowerShell script to get some more information about this.

Windows Server 2012 NICs PCI Slot order

It looks like the new naming is done by PCI slot order, because I don’t think Cisco supports Consistent Device Naming yet. Anyway, this is great news for all the Hyper-V guys out there.

If you wonder about the order in my case: Ethernet 1 Port 1 to Port 4 are the four ports of the quad-port Intel NIC, and Ethernet 2 Port 1 and Port 2 are the built-in ports.

Two more things: first, the PowerShell script which I used to configure the network adapters from an XML file still works fine. Second, to check this and make the screenshots for this blog post I had to install my Hyper-V hosts twice, so please share this post 😉

 



Configure Hyper-V Host Network Adapters Like A Boss

Hyper-V R2 SP1

If you work a lot with Hyper-V and Hyper-V clustering, you know that something that takes a lot of time is configuring the Hyper-V host network adapters. First, because most of the time you have a lot of NICs built into your host for the different Hyper-V and cluster networks, and second, because Windows names the NICs in a random way, which makes it hard to find out which network card is the right one. Maybe the first NIC on your Hyper-V Host01 is called “Local Area Connection 2” and on your second Hyper-V host with the same hardware configuration the “same” NIC is called “Local Area Connection 3”. One possibility to find out which network card is the right one is to check the MAC address of the network adapter. But for this you still have to know which MAC address belongs to which network adapter port.

Another way to do it is to plug in the network cables one by one, so you can see which port is active and then rename the network adapter. Sometimes this is the only solution, but it takes a lot of time to do this on every host. And if you build clusters of up to 16 hosts, you really don’t want to do that.

Now there is a solution: you can sort your NICs by PCI bus and PCI slot. Maarten Wijsman wrote a blog post on the Hyper-V.nu blog about how you can do this. With this knowledge you can start to automate this very easily.

networkcable

I have created two Windows PowerShell scripts which make my life a lot easier.

First I configured the first Hyper-V host and renamed all the network adapters. If you have a GUI server you can do that via the GUI, or if you have Windows Server Core or Hyper-V Server you can do this via netsh.
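On a Core installation the rename could look like this (the interface names are examples):

    # Rename a network interface on Server Core / Hyper-V Server with netsh
    netsh interface set interface name="Local Area Connection 2" newname="Management"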

Once that is done, I use my Windows PowerShell script called Get-NICInformation.ps1 to get the information about the network adapters.

get-nicinformation

This gives me a lot of information about the NICs in my first host. But the important part is the order of the NICs. In my example I know that the order is this:

  • Management
  • VMNet
  • CSV
  • LiveMigration
  • iSCSI01
  • iSCSI02

Since my other hosts have the same hardware, they will have the same PCI bus order.

For the next step I go to my second host. There I have my other Windows PowerShell script (Set-IPAddressfromXML) and an XML file (networkconfig.xml).

dir

I edit the networkconfig.xml file with the correct network information. Important here are the id="" parameters: they define the order of the NICs. With Get-NICInformation I can see that the Management interface is the first one, so it gets id="1"; VMNET is the second one, so it gets id="2"; and so on. You also set the correct IP address information for the second host; most of the time you just have to change the last number.

You can also configure non-static IP addresses (DHCP). In my case I did this for the VMNET adapter, which will be used by the Hyper-V Virtual Switch and does not need an IP address.

networkconfigxml

After you have done this, you can simply run the Set-IPAddressfromXML script. It will use the information from the networkconfig.xml file, rename all network adapters and set the correct IP addresses.

set-ipaddressfromxml

 

I can now copy Set-IPAddressfromXML.ps1 and networkconfig.xml to each Hyper-V host, edit the IP addresses in the XML file, run the PowerShell script and I am done.

Let's recap:

  1. Rename the NICs on the first host
  2. Run Get-NICInformation.ps1 on the first host and check the NIC order
  3. Edit networkconfig.xml on the second host with the right order of the NICs
  4. Run Set-IPAddressfromXML.ps1
  5. Do this for all Hyper-V hosts.

I hope this will make life easier 🙂

You can download the scripts from my SkyDrive.

Some other things:

  • I have tested this with Windows Server 2008 R2, Hyper-V Server 2008 R2, Windows Server 8 beta and Hyper-V Server 8 beta.
  • It works on both generations because it is not done with PowerShell v3; maybe I will update it to make it even better.
  • I do not support this script, and you run it at your own risk.


Windows Server 2012 – CDN (Consistent Device Naming)

Windows Server 8

There is a new feature coming with Windows Server 8 called Consistent Device Naming (CDN) which should make life in the datacenter a lot easier.

CDN

It allows hardware vendors to consistently name NICs in the BIOS, which means that Windows Server 8 can read this information and name the NICs the same way.

That means the NIC name printed on the chassis can be the same as the NIC name in the OS.

CDN Consistent Device Naming

If you have ever worked with Hyper-V clusters, you are going to love this feature.