Yesterday I posted about Cisco UCS supporting RDMA (SMB Direct) with firmware version 2.2(4b)B. Walter Dey, a former Cisco Distinguished Engineer, not only got in touch about the RDMA feature, he also showed me that Cisco UCS now supports Consistent Device Naming (CDN), which was introduced with Windows Server 2012. CDN allows Ethernet interfaces to be named in a consistent manner, so interface names stay persistent when the adapter or other configuration changes. To use CDN in Cisco UCS you need to run firmware version 2.2(4b)B. This makes it a lot easier to identify the network interfaces used with Windows Server 2012 R2 and Hyper-V.
As you may know, we use SMB as the storage protocol for several Hyper-V deployments with Scale-Out File Server and Storage Spaces, which adds a lot of value to a Hyper-V deployment. To boost storage network performance, Microsoft uses RDMA, also known as SMB Direct.
RDMA over Converged Ethernet (RoCE) allows direct memory access over an Ethernet network. RoCE is a link layer protocol, and hence it allows communication between any two hosts in the same Ethernet broadcast domain. RoCE delivers superior performance compared to traditional network socket implementations because of lower latency, lower CPU utilization and higher utilization of network bandwidth. Windows Server 2012 and later use RDMA to accelerate and improve the performance of SMB file sharing traffic and Live Migration. If you want to know more about RDMA or SMB Direct, check out my blog post: Hyper-V over SMB: SMB Direct
With Cisco UCS Manager Release 2.2(4), Cisco finally supports RoCE for SMB Direct. Cisco UCS Manager sends the additional configuration information to the adapter while creating or modifying an Ethernet adapter policy.
Guidelines and Limitations for SMB Direct with RoCE
- SMB Direct with RoCE is supported only on Windows Server 2012 R2.
- SMB Direct with RoCE is supported only with Cisco UCS VIC 1340 and 1380 adapters.
- Cisco UCS Manager does not support more than 4 RoCE-enabled vNICs per adapter.
- Cisco UCS Manager does not support RoCE with NVGRE, VXLAN, NetFlow, VMQ, or usNIC.
- You cannot use Windows Server NIC Teaming together with RDMA-enabled adapters in Windows Server 2012 and Windows Server 2012 R2, or you will lose the RDMA feature on these adapters.
- Maximum number of queue pairs per adapter is 8192.
- Maximum number of memory regions per adapter is 524288.
- If you do not disable RoCE before downgrading Cisco UCS Manager from Release 2.2(4), the downgrade will fail.
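Once RoCE is configured on the vNICs, you can verify from within Windows Server 2012 R2 that the operating system actually sees the adapters as RDMA-capable. A quick sketch using the built-in PowerShell cmdlets:

```powershell
# List network adapters and whether RDMA (Network Direct) is enabled on them
Get-NetAdapterRdma

# Check that SMB sees the interfaces as RDMA-capable
Get-SmbClientNetworkInterface | Where-Object { $_.RdmaCapable -eq $true }
```

If the second command returns no interfaces, SMB Direct will silently fall back to TCP, so this is worth checking before you benchmark.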
Check out my post about Hyper-V over SMB.
I know this is nothing new, but since I had to mention the whitepaper on NIC Teaming, the use of SMB Multichannel and the configuration with System Center Virtual Machine Manager in a couple of meetings, I want to make sure you have an overview on my blog.
Windows Server NIC Teaming was introduced in Windows Server 2012 (Codename Windows Server 8). NIC teaming, also known as Load Balancing/Failover (LBFO), allows multiple network adapters to be placed into a team for the purposes of bandwidth aggregation, and/or traffic failover to maintain connectivity in the event of a network component failure.
NIC Teaming Recommendation
For most designs the default and recommended configuration is NIC Teaming with Switch Independent mode and Dynamic load balancing; in scenarios where you have the right switches, you can use LACP with Dynamic load balancing instead.
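As a sketch, creating such a team with the built-in PowerShell cmdlets looks like this (the team and adapter names are placeholders for your own):

```powershell
# Switch Independent + Dynamic: the default recommendation
New-NetLbfoTeam -Name "HostTeam" -TeamMembers "NIC1","NIC2" `
    -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic

# LACP + Dynamic: only if your physical switches support and are configured for LACP
# New-NetLbfoTeam -Name "HostTeam" -TeamMembers "NIC1","NIC2" `
#     -TeamingMode Lacp -LoadBalancingAlgorithm Dynamic
```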
Download Windows Server 2012 R2 NIC Teaming (LBFO) Deployment and Management Whitepaper
This guide describes how to deploy and manage NIC Teaming with Windows Server 2012 R2.
You can find the Whitepaper on Windows Server 2012 R2 NIC Teaming (LBFO) Deployment and Management in the Microsoft Download Center.
If you use Hyper-V over SMB, you can use SMB Multichannel as an even better way to distribute SMB 3.0 traffic across different network adapters, or you can use a mix of both NIC Teaming and SMB Multichannel. Check out my blog post about Hyper-V over SMB: SMB Multichannel, SMB Direct (RDMA) and Scale-Out File Server and Storage Spaces.
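To see whether SMB Multichannel is actually spreading traffic across your interfaces, the SMB cmdlets give you a quick view. A sketch, run on the Hyper-V host while SMB traffic is flowing:

```powershell
# Active SMB Multichannel connections per server, including
# whether each side of the connection is RSS- or RDMA-capable
Get-SmbMultichannelConnection |
    Select-Object ServerName, ClientRSSCapable, ClientRdmaCapable, ServerRdmaCapable
```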
Configuration with System Center Virtual Machine Manager
Some months back I also wrote some blog post about configuration of Hyper-V Converged Networking and System Center Virtual Machine Manager. This guide will help you to understand how you deploy NIC Teaming with System Center Virtual Machine Manager using the Logical Switch on Hyper-V hosts.
Sometimes I just need my blog as a reminder or a database to find something a few months later, and this is exactly one of those blog posts. Microsoft has a TechNet article where they describe the best practices for Linux VMs running on Hyper-V 2012 or Hyper-V 2012 R2. The article is a list of recommendations for running Linux virtual machines on Hyper-V.
Right now they have 4 recommendations on the list (source: Microsoft TechNet):
- Use static MAC addresses with failover clustering.
- Use Hyper-V-specific network adapters, not the legacy network adapter.
- Use I/O scheduler NOOP for better disk I/O performance.
- Add “numa=off” if the Linux virtual machine has more than 7 virtual processors or more than 30 GB RAM.
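The last two items are kernel boot parameters. As a sketch of how you would set them on a Debian/Ubuntu-style guest (file locations and the update command vary by distribution, and a reboot is required afterwards):

```shell
# Append the NOOP scheduler and numa=off to the kernel command line
# (Debian/Ubuntu shown; other distributions edit grub.conf or use grub2-mkconfig)
sudo sed -i 's/^GRUB_CMDLINE_LINUX="/&elevator=noop numa=off /' /etc/default/grub
sudo update-grub   # then reboot the VM
```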
A long time ago I wrote a blog post on how you can use System Center Virtual Machine Manager Bare-Metal Deployment to deploy new Hyper-V hosts. Normally this works fine, but if you have newer hardware, your Windows Server image may not include the network adapter drivers. That isn't a huge problem, since you can mount the VHD or VHDX file for the Windows Server Hyper-V image and insert the drivers. But if you forget to update the WinPE file from Virtual Machine Manager, your deployment will fail: since the WinPE image has no network drivers included, it won't be able to connect to the VMM Library or any other server.
You will end up with the following error, and your deployment will time out on the following screen:
“Synchronizing Time with Server”
If you check the IP configuration with ipconfig, you will see that no network adapters are available. This means you have to update your SCVMM WinPE image.
First of all, you have to copy the SCVMM WinPE image. You can find this WIM file on your WDS (Windows Deployment Services) PXE server in the following location: E:\RemoteInstall\DCMgr\Boot\Windows\Images (your setup probably uses another drive letter).
I copied this file to the C:\temp folder on my System Center Virtual Machine Manager server. I also copied the extracted drivers to the C:\Drivers folder.
After you have done this, you can use Greg Casanza's (Microsoft) SCVMM Windows PE driver injection script, which will add the drivers to the WinPE image (boot.wim) and publish the new boot.wim to all your WDS servers. I also rewrote the script, which originally pulled drivers from the VMM Library, to use drivers from a folder.
# Paths: mount directory, copied boot.wim and the folder with the extracted drivers
$mountdir = "c:\mount"
$winpeimage = "c:\temp\boot.wim"
$winpeimagetemp = $winpeimage + ".tmp"
$path = "C:\Drivers"
# Work on a copy of the original boot.wim
copy $winpeimage $winpeimagetemp
# Mount the image, inject the drivers from the folder and commit the changes
dism /mount-wim /wimfile:$winpeimagetemp /index:1 /mountdir:$mountdir
dism /image:$mountdir /add-driver /driver:$path
Dism /Unmount-Wim /MountDir:$mountdir /Commit
# Publish the updated WinPE image to all WDS servers
publish-scwindowspe -path $winpeimagetemp
This will add the drivers to the Boot.wim file and publish it to the WDS servers.
After this is done the Boot.wim will work with your new drivers.
This is nothing new to most of you PowerShell guys out there, but there are still a lot of IT Pros who do not know about this. Sometimes we have to do some remote troubleshooting without having access to the system itself. One thing you can do is let the customer send you some screenshots, but that doesn't really show everything, and you may have to contact the customer a hundred times to get the right information. A better solution is to let the customer run a PowerShell command or script and send you the output. But even a text file or a screenshot of the PowerShell output is not ideal: a TXT file full of text is hard to sort through, and some information may be missing, because the text output does not include all the properties of the PowerShell object.
I have started to use a simple method to export PowerShell objects to an XML file and import the objects on another system. This can be done with the PowerShell cmdlets Export-Clixml and Import-Clixml.
What I do is tell the customer to run the following command to generate an XML file with the PowerShell objects, for example about his disks.
Get-Disk | Export-Clixml C:\temp\Servername_disks.xml
After I get this XML file, I can import it on my local system and work with it as if I were in front of the customer's system.
$disks = Import-Clixml C:\mylocaltemp\Servername_disks.xml
As I said, this is nothing new, but it can save you and your customer some time. Of course, this works with other objects, not just disks 😉 For example, you can export cluster configurations, Hyper-V Virtual Switch configurations and much more.
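One thing worth knowing: Import-Clixml returns deserialized objects, so all the property values survive the round trip, but live methods do not. For filtering and sorting that is exactly what you need. A sketch with the disk example from above:

```powershell
# Import the customer's exported disk objects
$disks = Import-Clixml C:\mylocaltemp\Servername_disks.xml

# Deserialized objects keep their properties, so normal pipeline
# filtering and sorting works as if you were on the customer's system
$disks | Where-Object { $_.HealthStatus -ne 'Healthy' } |
    Sort-Object Size -Descending |
    Format-Table Number, FriendlyName, Size, PartitionStyle
```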
In Windows Server 2012 Microsoft introduced the CSV Cache for Windows Server 2012 Hyper-V and Scale-Out File Server clusters. The CSV Block Cache is basically a RAM cache which allows you to cache read IOPS in the memory of the Hyper-V or Scale-Out File Server cluster nodes. In Windows Server 2012 you had to set the size of the CSV Block Cache and enable it on every CSV volume. In Windows Server 2012 R2 the CSV Block Cache is enabled by default for every CSV volume, but the size of the cache is set to zero, which means the only thing you have to do is set the size of the cache.
# Get CSV Block Cache Size
(Get-Cluster).BlockCacheSize
# Set CSV Block Cache Size to 512MB
(Get-Cluster).BlockCacheSize = 512
Microsoft recommends using 512 MB of cache on a Hyper-V host. On a Scale-Out File Server node, things are a little different: in Windows Server 2012, Microsoft allowed a cache size of up to 20% of the server's RAM; in Windows Server 2012 R2 this was changed, so you can now finally use up to 80% of the RAM of a Scale-Out File Server node.
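For a Scale-Out File Server node you could, as a sketch, derive the value from the installed memory instead of hard-coding it (BlockCacheSize is specified in MB):

```powershell
# Calculate 80% of physical RAM in MB and use it as the
# CSV Block Cache size on a Scale-Out File Server node
$totalMB = (Get-WmiObject Win32_ComputerSystem).TotalPhysicalMemory / 1MB
(Get-Cluster).BlockCacheSize = [int]($totalMB * 0.8)
```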
Back in the days of Windows Server 2012 I made a little benchmark of CSV Cache on my Hyper-V hosts.
My name is Thomas Maurer. Microsoft MVP for Hyper-V. I work as a Cloud Architect for itnetx gmbh, a consulting and engineering company located in Bern, Switzerland. I am focused on Microsoft technologies, especially Microsoft cloud solutions based on Microsoft System Center, Microsoft Virtualization and Windows Azure.