Tag: Network


Hyper-V Network Virtualization NVGRE: No connection between VMs on different Hyper-V Hosts

I have been working on some projects with Hyper-V Network Virtualization and NVGRE, and today I ran into an issue with Encapsulated Task Offloading on some HP Broadcom network adapters.

 

Issue

I have Hyper-V hosts running with 10GbE Broadcom network adapters (HP Ethernet 10Gb 2-port 530FLR-SFP+ Adapter) with driver version 7.8.52.0 (released in 2014). I created a new VM network based on Hyper-V Network Virtualization using NVGRE. VM1 is running on Host1 and VM2 is running on Host2. You can ping VM2 from VM1, but no other connection is possible, such as SMB, RDP, HTTP or DNS. If you are using an NVGRE gateway, you cannot even resolve DNS inside those VMs. If VM1 and VM2 are running on the same Hyper-V host, everything between those VMs works fine.
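To reproduce the symptom quickly you can, for example, test ICMP and a TCP port from VM1 with Test-NetConnection (a small sketch; the IP address of VM2 is just an example):

# Ping (ICMP) to VM2 succeeds ...
Test-NetConnection -ComputerName 10.0.1.20

# ... but TCP connections such as SMB (445) or RDP (3389) fail
Test-NetConnection -ComputerName 10.0.1.20 -Port 445
Test-NetConnection -ComputerName 10.0.1.20 -Port 3389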

Advanced Driver Settings

If you are using Server Core, which you should by the way, you can use the following command to check the advanced driver settings:

 
Get-NetAdapterAdvancedProperty -Name <NICNAME>

PowerShell NetAdapter Advanced Property
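Instead of scrolling through all advanced properties, you can also query the encapsulation offload setting directly (a short sketch using the in-box NetAdapter cmdlet):

Get-NetAdapterEncapsulatedPacketTaskOffload -Name <NICNAME>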

 

Resolution

The Broadcom network adapters have a feature called Encapsulated Task Offloading, which is enabled by default. If you disable Encapsulated Task Offloading, everything works fine. You can disable it by using the following PowerShell cmdlet:

 
Set-NetAdapterEncapsulatedPacketTaskOffload -EncapsulatedPacketTaskOffloadEnabled $false -Name <NICNAME>

After that, connections between the VMs started working immediately; no reboot was needed.
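If a host has more than one of these adapters, you could loop over them; a rough sketch (the *Broadcom* filter on the interface description is just an assumption for these NICs):

# Disable encapsulated task offloading on every matching adapter
Get-NetAdapter | Where-Object InterfaceDescription -like "*Broadcom*" | ForEach-Object {
    Set-NetAdapterEncapsulatedPacketTaskOffload -Name $_.Name -EncapsulatedPacketTaskOffloadEnabled $false
}

# Check the resulting state on all adapters
Get-NetAdapterEncapsulatedPacketTaskOffload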




Free Microsoft Cloud OS webinar series in March and April

In March and April I will present, together with Microsoft and itnetx, a webinar series about the Microsoft Cloud OS. The webinars will be free and will cover an overview of the Microsoft Cloud OS. The Microsoft Cloud OS is the story behind the latest releases of Windows Server 2012 R2 Hyper-V, System Center, Windows Azure Pack and Windows Azure. The webinar series is split into three sessions and covers how you can plan, build and operate a Microsoft Cloud and how you can bring the Private and Public Cloud together to make use of a Hybrid Cloud model.


Webinar 1 - Microsoft Cloud OS: Overview

10:00
Presenters: Markus Erlacher, Marcel Zehner
Registration

Webinar 2 - Microsoft Cloud OS: Planning & Architecture

25 March 2014, 09:00-10:00
Presenter: Thomas Maurer
Registration

Webinar 3 - Microsoft Cloud OS: Operation

2 April 2014, 09:00-10:00
Presenters: Thomas Maurer, Philipp Witschi
Registration

All three webinars will be free and will be held in German.




Sort Windows Network Adapter by PCI Slot via PowerShell

If you work with Windows, Windows Server or Hyper-V, you know that before Windows Server 2012, Windows named the network adapters randomly. This was a huge deal if you were trying to automate the deployment of servers with multiple network adapters, and of course Hyper-V servers normally have multiple network adapters. In Windows Server 2012 Microsoft addressed this in two different ways: first with CDN (Consistent Device Naming), which allows hardware vendors to expose the adapter names so the operating system can pick them up, and second with Hyper-V converged fabric, which basically makes our lives easier by requiring fewer network adapters.

Well, a lot of vendors have not integrated CDN, or you may have some older servers without CDN support. Back in May 2012, before the release of Windows Server 2012, I wrote a little Windows PowerShell script to sort network adapters in Windows Server 2008 R2 and Hyper-V Server 2008 R2 by using WMI (Configure Hyper-V Host Network Adapters Like A Boss). Now, for a Cisco UCS project, I rewrote some parts of the script to use Windows PowerShell for Windows Server 2012, Windows Server 2012 R2 and Hyper-V.

First, let's have a look at how you can get the PCI slot information for network adapters; luckily, there is now a PowerShell cmdlet for this.

 
Get-NetAdapterHardwareInfo
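If you only care about the PCI location fields, you can also pick out just those properties; a small sketch (Bus, Device and Function are properties returned by the cmdlet):

Get-NetAdapterHardwareInfo | Select-Object Name, Bus, Device, Function | Format-Table -AutoSize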

Now let's see how you can sort the network adapters via Windows PowerShell.

 
Get-NetAdapterHardwareInfo | Sort-Object Bus,Function

This will give you an output like this:

Sort Network Adapter via PowerShell

Let's write a little loop to automatically name them:

$prefix = "NIC"
$netAdapters = Get-NetAdapterHardwareInfo | Sort-Object Bus,Function
$i = 0
 
foreach ($netAdapter in $netAdapters){
 
$interface = $netadapter | Get-NetAdapter
$old = $interface.Name
$newName = $prefix + $i
$interface | Rename-NetAdapter -NewName $newName
$i++
Write-Host "Rename" $old "to:" $newName
 
}

So this renames all the network adapters to NIC0, NIC1, NIC2, …

Now let's wrap this in a PowerShell function:

# ---------------------------------------------------------------------------------------------- #
# Powershell Sort-NetworkAdapter $Rev: 748 $
# (c) 2014 Thomas Maurer. All rights reserved.
# created by Thomas Maurer
# www.thomasmaurer.ch
# last Update by $Author: tmaurer $ on $Date: 2014-01-04 14:07:36 +0100 $
# ---------------------------------------------------------------------------------------------- #
 
function Sort-NetworkAdapter {
<#
.SYNOPSIS
Sorts and renames network adapters by PCI slot
.DESCRIPTION
Sorts the network adapters by PCI slot (bus and function) and renames them using a prefix and an incrementing number
.EXAMPLE
Sort-NetworkAdapter -prefix vnic -StartingNumber 0
This renames all NICs to vnic0, vnic1, vnic2, ...
.EXAMPLE
Sort-NetworkAdapter -prefix nic -StartingNumber 1
This renames all NICs to nic1, nic2, nic3, ...
.PARAMETER prefix
The prefix of the network adapter name
.PARAMETER StartingNumber
The number of the first network adapter
#>
    [CmdletBinding(SupportsShouldProcess=$True,ConfirmImpact='Low')]
    param
    (
        [Parameter(Mandatory=$True,
            ValueFromPipeline=$True,
            ValueFromPipelineByPropertyName=$True,
            HelpMessage='Which prefix do you want to use?')]
        [ValidateLength(1,20)]
        [string]$prefix,

        [Parameter(Mandatory=$False,
            ValueFromPipeline=$True,
            ValueFromPipelineByPropertyName=$True,
            HelpMessage='Which starting number do you want to use?')]
        [int]$startingNumber = 1
    )

    begin {
        Write-Verbose "Get network adapters and sort them by PCI slot"
        $netAdapters = Get-NetAdapterHardwareInfo | Sort-Object Bus,Function
    }

    process {
        Write-Verbose "Rename network adapters"

        foreach ($netAdapter in $netAdapters) {
            # Get the matching network adapter object and build the new name
            $interface = $netAdapter | Get-NetAdapter
            $old = $interface.Name
            $newName = $prefix + $startingNumber

            # Rename the adapter (honors -WhatIf and -Confirm)
            if ($PSCmdlet.ShouldProcess($old, "Rename to $newName")) {
                $interface | Rename-NetAdapter -NewName $newName
            }

            $startingNumber++
            Write-Host "Rename" $old "to:" $newName
        }
    }
}

Now you can run this by using Sort-NetworkAdapter, for example:

Sort-NetworkAdapter -prefix NIC

or

Sort-NetworkAdapter -prefix NIC -StartingNumber 0
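Since the function declares SupportsShouldProcess and wraps the rename in ShouldProcess, you can also preview the renames first without changing anything:

Sort-NetworkAdapter -prefix NIC -WhatIf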

You can also get this script from the Microsoft TechNet Gallery or the Script Center.




TechNet Switzerland Event: From VMware to Hyper-V

On Tuesday, December 03, I will present together with Markus Erlacher, former Microsoft Switzerland TSP and now Managing Director at itnetx gmbh, at a free Microsoft Switzerland TechNet event. The topic this time will be why and how you migrate from VMware to a Microsoft Hyper-V and System Center environment. The event will cover an overview of Windows Server 2012 R2 Hyper-V and System Center 2012 R2 and all the virtualization features you need in your environment. In the afternoon session we will also cover how you can migrate from VMware to Hyper-V, so you can quickly enjoy the new Private Cloud solutions from Microsoft.

The event is free and will take place at the Microsoft Conference Center in Wallisellen, Zürich. To join the event, register on the Microsoft Event Website. The event will be in German and will not be streamed to the web.

Agenda

Tuesday, December 03

08:30 – Coffee
09:00 – Session 1 – Hyper-V Overview (Virtual Machines, Hyper-V Manager, Virtual Switch, VHDX format)
10:30 – Coffee Break
10:45 – Session 2 – Hyper-V Advanced Features (Hyper-V Networking and Storage, Hyper-V over SMB, Network Virtualization)
12:00 – Lunch
13:00 – Session 3 – Management (VM and Fabric Management with System Center Virtual Machine Manager, PowerShell and more…)
14:30 – Coffee Break
14:45 – Session 4 – VMware Migration (Migration from VMware to Hyper-V, Tools, Best practices, automation, real world example)
16:15 – End

More Information and registration

More information and registration on the Microsoft Event Website.




TechDays 2013 – Fabric Management with Virtual Machine Manager Session Online

One day after presenting at the TechNet Conference in Berlin, Germany, I was also speaking at TechDays 2013 in Basel, Switzerland. Microsoft has now published my session online on Channel 9:

The session is in German and shows how you can use System Center 2012 R2 Virtual Machine Manager as your datacenter management tool: how to manage your fabric (storage, network and compute), pool resources, create tenants and service templates, and use the self-service portals such as App Controller and Windows Azure Pack.




Hyper-V Network Virtualization: NVGRE Offloading

At the moment I am spending a lot of time working with Hyper-V Network Virtualization in Hyper-V, System Center Virtual Machine Manager and the new Network Virtualization Gateway. I am also creating some architecture design references for hosting providers which are going to use Hyper-V Network Virtualization and SMB as storage. If you are going for any kind of network virtualization (Hyper-V Network Virtualization with NVGRE, or VXLAN), you want to make sure you can offload the encapsulated traffic to the network adapter.

Well, the great news here is that the Mellanox ConnectX-3 Pro not only offers RDMA (RoCE), which is used for SMB Direct, but also provides hardware offloads for NVGRE and VXLAN encapsulated traffic. This is great and should improve the performance of network virtualization dramatically.

NVGRE Offloading
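If you want to check on a Hyper-V host whether an adapter exposes these capabilities, the in-box NetAdapter cmdlets are a quick way to do it (a short sketch; the reported capabilities depend on the installed driver):

# RDMA (RoCE) capability and state
Get-NetAdapterRdma -Name <NICNAME>

# NVGRE/VXLAN encapsulated packet task offload state
Get-NetAdapterEncapsulatedPacketTaskOffload -Name <NICNAME>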

More information on the Mellanox ConnectX-3 Pro:

ConnectX-3 Pro 10/40/56GbE adapter cards with hardware offload engines to Overlay Networks (“Tunneling”), provide the highest performing and most flexible interconnect solution for PCI Express Gen3 servers used in public and private clouds, enterprise data centers, and high performance computing.

Public and private cloud clustered databases, parallel processing, transactional services, and high-performance embedded I/O applications will achieve significant performance improvements resulting in reduced completion time and lower cost per operation.

Mellanox ConnectX-3 Pro

Benefits:

  • 10/40/56Gb/s connectivity for servers and storage
  • World-class cluster, network, and storage performance
  • Cutting edge performance in virtualized overlay networks (VXLAN and NVGRE)
  • Guaranteed bandwidth and low-latency services
  • I/O consolidation
  • Virtualization acceleration
  • Power efficient
  • Scales to tens-of-thousands of nodes

Key features:

  • 1us MPI ping latency
  • Up to 40/56GbE per port
  • Single- and Dual-Port options available
  • PCI Express 3.0 (up to 8GT/s)
  • CPU offload of transport operations
  • Application offload
  • Precision Clock Synchronization
  • HW Offloads for NVGRE and VXLAN encapsulated traffic
  • End-to-end QoS and congestion control
  • Hardware-based I/O virtualization
  • RoHS-R6

Virtualized Overlay Networks — Infrastructure as a Service (IaaS) cloud demands that data centers host and serve multiple tenants, each with their own isolated network domain over a shared network infrastructure. To achieve maximum efficiency, data center operators are creating overlay networks that carry traffic from individual Virtual Machines (VMs) in encapsulated formats such as NVGRE and VXLAN over a logical “tunnel,” thereby decoupling the workload’s location from its network address. Overlay Network architecture introduces an additional layer of packet processing at the hypervisor level, adding and removing protocol headers for the encapsulated traffic. The new encapsulation prevents many of the traditional “offloading” capabilities (e.g. checksum, TSO) from being performed at the NIC. ConnectX-3 Pro effectively addresses the increasing demand for an overlay network, enabling superior performance by introducing advanced NVGRE and VXLAN hardware offload engines that enable the traditional offloads to be performed on the encapsulated traffic. With ConnectX-3 Pro, data center operators can decouple the overlay network layer from the physical NIC performance, thus achieving native performance in the new network architecture.

I/O Virtualization — ConnectX-3 Pro SR-IOV technology provides dedicated adapter resources and guaranteed isolation and protection for virtual machines (VMs) within the server. I/O virtualization with ConnectX-3 Pro gives data center managers better server utilization while reducing cost, power, and cable complexity.

RDMA over Converged Ethernet — ConnectX-3 Pro utilizing IBTA RoCE technology delivers similar low-latency and high- performance over Ethernet networks. Leveraging Data Center Bridging capabilities, RoCE provides efficient low latency RDMA services over Layer 2 Ethernet. With link-level interoperability in existing Ethernet infrastructure, Network Administrators can leverage existing data center fabric management solutions.

Sockets Acceleration — Applications utilizing TCP/UDP/IP transport can achieve industry- leading throughput over 10/40/56GbE. The hardware-based stateless offload engines in ConnectX-3 Pro reduce the CPU overhead of IP packet transport. Socket acceleration software further increases performance for latency sensitive applications.

Storage Acceleration — A consolidated compute and storage network achieves significant cost-performance advantages over multi-fabric networks. Standard block and file access protocols can leverage Ethernet or RDMA for high-performance storage access.

Software Support – All Mellanox adapter cards are supported by Windows, Linux distributions, VMware, FreeBSD, Ubuntu, and Citrix XenServer. ConnectX-3 Pro adapters support OpenFabrics-based RDMA protocols and software and are compatible with configuration and management tools from OEMs and operating system vendors.

 




Connect IPAM with System Center 2012 R2 Virtual Machine Manager

In System Center 2012 SP1 Virtual Machine Manager you already had an option to feed information into IPAM (IP Address Management, which was introduced with Windows Server 2012). In the R2 releases of Windows Server 2012 R2 and System Center 2012 R2, Microsoft enhanced the connection between IPAM and Virtual Machine Manager (SCVMM). This was a really important step, because not a lot of people have a real IP address management solution like IPAM; most are still using some crazy Excel sheets to manage IP addresses. But if you are thinking about your Private Cloud, or you are a Cloud Service Provider, this just doesn't work. IP addresses change rapidly these days, and especially when you do IaaS (Infrastructure as a Service) and don't have access inside the VM, because you don't control it, you need an automated system.

IPAM in Windows Server 2012 and Windows Server 2012 R2 is just perfect for that: it integrates with Active Directory, DNS, DHCP and more. With System Center 2012 R2, Virtual Machine Manager gets a perfect connection to IPAM. And if you have worked with Virtual Machine Manager 2012 or 2012 SP1, you know that SCVMM knows about all your networks and even your customer networks. VMM is definitely the central management for your cloud environment and offers an end-to-end solution.

To connect and integrate IPAM with SCVMM 2012 R2, open the Virtual Machine Manager console, navigate to Fabric and add a new Network Service.

Assign a name to the network service.

Virtual Machine Manager add Network Service

Choose Microsoft Windows Server IP Address Management

Choose Microsoft Windows Server IP Address Management

Enter the credentials for the connection between Virtual Machine Manager and IPAM, and enter the connection string, which is basically the FQDN of the IPAM server.

Specify network service connection string

You can also validate the network service configuration provider, which will test the connection to the IPAM server.

Validate the network service configuration provider

After you have connected IPAM, the network definitions, VM Networks, Logical Networks, IP Pools and so on will show up in IPAM.

System Center 2012 R2 Virtual Machine Manager in IPAM
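The same connection can also be scripted with the VMM PowerShell module. The following is only a rough sketch: the Run As account name, host group and connection string are placeholders, and the parameters are the ones I recall from Add-SCNetworkService, so check them with Get-Help before using this.

# Assumed: a Run As account for the IPAM connection already exists in VMM
$runAsAccount = Get-SCRunAsAccount -Name "IPAM Integration"
$provider = Get-SCConfigurationProvider -Name "Microsoft Windows Server IP Address Management"
$hostGroup = Get-SCVMHostGroup -Name "All Hosts"

# The connection string is basically the FQDN of the IPAM server
Add-SCNetworkService -Name "IPAM" -RunAsAccount $runAsAccount -ConfigurationProvider $provider -VMHostGroup $hostGroup -ConnectionString "ipam01.domain.local"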

I hope more people will see the value of IPAM and of its integration with System Center 2012 R2 Virtual Machine Manager.