
Tag: Windows Server 2012


Move multiple Hyper-V Virtual Machines with Live Storage Migration via Windows PowerShell

Well, I was working on a Private Cloud deployment where we had some temporary storage for our Hyper-V Virtual Machines, and after we got the right storage ready, which was, by the way, a Windows Server Scale-Out File Server cluster running Storage Spaces on DataON JBOD chassis, we had to migrate the storage of all virtual machines running on our Hyper-V hosts. Since Windows Server 2012 offers Live Storage Migration, which allows us to move a Virtual Machine to a different storage location without downtime, that was the obvious choice. But if you have to move around 20 virtual machines, you think twice about whether you want to do that via the Hyper-V Manager GUI or via Windows PowerShell.

Here is a pretty simple PowerShell foreach loop which moves the storage of all virtual machines running on the Hyper-V host.
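A minimal sketch of such a loop; \\SOFS\VMStore01 is a placeholder for your actual storage path:

    # Move the storage of every VM on this host to the new location without downtime.
    $destination = "\\SOFS\VMStore01"

    foreach ($vm in Get-VM) {
        # Give every virtual machine its own subfolder on the destination share.
        Move-VMStorage -VMName $vm.Name -DestinationStoragePath "$destination\$($vm.Name)"
    }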




Cisco UCS supports Consistent Device Naming (CDN)

Yesterday I posted about Cisco UCS supporting RDMA (SMB Direct) with firmware version 2.2(4b)B. Walter Dey, former Distinguished Engineer at Cisco, not only informed me about the RDMA feature, he also showed me that Cisco UCS now supports Consistent Device Naming, which was introduced with Windows Server 2012. Consistent Device Naming (CDN) allows Ethernet interfaces to be named in a consistent manner, so interface names stay persistent when the adapter or other parts of the configuration change. To use CDN in Cisco UCS you need to run firmware version 2.2(4b)B. This makes it a lot easier to identify network interfaces used with Windows Server 2012 R2 and Hyper-V.
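To check which names the adapters actually got on the host, a quick look with the standard inbox cmdlets is enough (nothing UCS-specific here):

    # List the adapters with their CDN-provided names and hardware descriptions.
    Get-NetAdapter | Sort-Object Name | Format-Table Name, InterfaceDescription, MacAddress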




Cisco UCS supports RoCE for Microsoft SMB Direct

As you may know, we use SMB as the storage protocol for several Hyper-V deployments using Scale-Out File Server and Storage Spaces, which adds a lot of value to your Hyper-V deployments. To boost performance, Microsoft uses RDMA, also called SMB Direct, to accelerate storage network performance.

RDMA over Converged Ethernet (RoCE) allows direct memory access over an Ethernet network. RoCE is a link layer protocol and hence allows communication between any two hosts in the same Ethernet broadcast domain. RoCE delivers superior performance compared to traditional network socket implementations because of lower latency, lower CPU utilization and higher utilization of network bandwidth. Windows Server 2012 and later use RDMA to accelerate and improve the performance of SMB file sharing traffic and Live Migration. If you need to know more about RDMA or SMB Direct, check out my blog post: Hyper-V over SMB: SMB Direct

With Cisco UCS Manager Release 2.2(4), Cisco finally supports RoCE for SMB Direct. UCS Manager sends the additional configuration information to the adapter when you create or modify an Ethernet adapter policy.

Guidelines and Limitations for SMB Direct with RoCE

  • SMB Direct with RoCE is supported only on Windows Server 2012 R2.
  • SMB Direct with RoCE is supported only with Cisco UCS VIC 1340 and 1380 adapters.
  • Cisco UCS Manager does not support more than 4 RoCE-enabled vNICs per adapter.
  • Cisco UCS Manager does not support RoCE with NVGRE, VXLAN, NetFlow, VMQ, or usNIC.
  • You cannot use Windows Server NIC Teaming together with RDMA-enabled adapters in Windows Server 2012 and Windows Server 2012 R2, or you will lose the RDMA capability on these adapters (see the quick check after this list).
  • Maximum number of queue pairs per adapter is 8192.
  • Maximum number of memory regions per adapter is 524288.
  • If you do not disable RoCE before downgrading Cisco UCS Manager from Release 2.2(4), the downgrade will fail.
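To verify on the Windows Server side that RDMA is enabled and that SMB sees the adapters as RDMA-capable, the inbox cmdlets help (a quick sanity check, not a full validation):

    # Show which network adapters have RDMA enabled.
    Get-NetAdapterRdma

    # Show whether the SMB client sees the interfaces as RDMA-capable.
    Get-SmbClientNetworkInterface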

Check out my posts about Hyper-V over SMB.




Overview on Windows Server and Hyper-V 2012 R2 NIC Teaming and SMB Multichannel

I know this is nothing new, but since I had to mention the whitepaper on NIC Teaming, the use of SMB Multichannel, and the configuration with System Center Virtual Machine Manager in a couple of meetings, I want to make sure you can find an overview on my blog.

NIC Teaming

Windows Server NIC Teaming was introduced in Windows Server 2012 (codename Windows Server 8). NIC Teaming, also known as Load Balancing/Failover (LBFO), allows multiple network adapters to be placed into a team for the purposes of bandwidth aggregation and/or traffic failover, to maintain connectivity in the event of a network component failure.

NIC Teaming Recommendation

The default and recommended design is NIC Teaming in Switch Independent mode with the Dynamic load balancing algorithm; in some scenarios, where you have the right switches, you can use LACP with Dynamic instead.
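As a minimal sketch with the inbox LBFO cmdlets (the team and adapter names are placeholders):

    # Create a Switch Independent team with the Dynamic load balancing algorithm.
    New-NetLbfoTeam -Name "HostTeam" -TeamMembers "NIC1","NIC2" `
        -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic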

Download Windows Server 2012 R2 NIC Teaming (LBFO) Deployment and Management Whitepaper

This guide describes how to deploy and manage NIC Teaming with Windows Server 2012 R2.

You can find the Whitepaper on Windows Server 2012 R2 NIC Teaming (LBFO) Deployment and Management in the Microsoft Download Center.

SMB Multichannel

Hyper-V over SMB Multichannel

If you use Hyper-V over SMB, you can use SMB Multichannel as an even better way to distribute SMB 3.0 traffic across different network adapters, or you can use a mix of both NIC Teaming and SMB Multichannel. Check out my blog post about Hyper-V over SMB: SMB Multichannel, SMB Direct (RDMA) and Scale-Out File Server and Storage Spaces.
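To see SMB Multichannel working, you can list the active connections it spreads across the interfaces:

    # Show the active SMB Multichannel connections per server and interface.
    Get-SmbMultichannelConnection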

Configuration with System Center Virtual Machine Manager

Logical Switch

Some months back I also wrote some blog posts about the configuration of Hyper-V Converged Networking with System Center Virtual Machine Manager. This guide helps you understand how to deploy NIC Teaming with System Center Virtual Machine Manager using the Logical Switch on Hyper-V hosts.




Best Practices for running Linux on Hyper-V

Sometimes I just need my blog as a reminder or a database to find something again in a few months, and this is exactly one of those blog posts. Microsoft has a TechNet article describing the best practices for Linux VMs running on Hyper-V 2012 or Hyper-V 2012 R2. The article is a list of recommendations for running Linux virtual machines on Hyper-V.

Right now they have four recommendations on the list (source: Microsoft TechNet):

  • Use static MAC addresses with failover clustering (see the sketch after this list).
  • Use Hyper-V-specific network adapters, not the legacy network adapter.
  • Use I/O scheduler NOOP for better disk I/O performance.
  • Add “numa=off” if the Linux virtual machine has more than 7 virtual processors or more than 30 GB RAM.
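For the first recommendation, setting a static MAC address is a one-liner on the Hyper-V host; the VM name and MAC address below are placeholders:

    # Assign a static MAC address to the Linux VM's network adapter.
    Set-VMNetworkAdapter -VMName "LinuxVM01" -StaticMacAddress "00155D010203"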





Add drivers to SCVMM Bare-Metal WinPE Image

A long time ago I wrote a blog post on how you can use System Center Virtual Machine Manager Bare-Metal Deployment to deploy new Hyper-V hosts. Normally this works fine, but if you have newer hardware, your Windows Server image may not include the network adapter drivers. This isn't a huge problem, since you can mount the VHD or VHDX file of the Windows Server Hyper-V image and insert the drivers. But if you forget to also update the WinPE file used by Virtual Machine Manager, your deployment will fail: since the WinPE image has no network drivers included, it won't be able to connect to the VMM Library or any other server.

You will end up with the following error, and your deployment will time out on the following screen:

“Synchronizing Time with Server”

[Screenshot: SCVMM bare-metal deployment stuck at “Synchronizing Time with Server”]

If you check the IP configuration with ipconfig you will see that there are no network adapters available. This means you have to update your SCVMM WinPE image.

First of all, copy the SCVMM WinPE image. You can find this WIM file on your WDS (Windows Deployment Services) PXE server in the following location: E:\RemoteInstall\DCMgr\Boot\Windows\Images (your setup probably uses another drive letter).

[Screenshot: Boot.wim in the WDS DCMgr Boot folder]

I copied this file to the C:\temp folder on my System Center Virtual Machine Manager server. I also copied the extracted drivers to the C:\Drivers folder.

After you have done this, you can use Greg Casanza's (Microsoft) SCVMM Windows PE driver injection script, which adds the drivers to the WinPE image (Boot.wim) and publishes the new Boot.wim to all your WDS servers. I also changed the script, which originally pulled drivers from the VMM Library, to use drivers from a folder instead.
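The injection step itself looks roughly like this with the inbox DISM cmdlets and the VMM cmdlet Publish-SCWindowsPE, assuming the C:\temp\Boot.wim and C:\Drivers paths from above (a sketch of the idea, not Greg's actual script):

    # Mount the boot image, inject all drivers from the folder, and save the changes.
    New-Item -ItemType Directory -Path "C:\Mount" -Force | Out-Null
    Mount-WindowsImage -ImagePath "C:\temp\Boot.wim" -Index 1 -Path "C:\Mount"
    Add-WindowsDriver -Path "C:\Mount" -Driver "C:\Drivers" -Recurse
    Dismount-WindowsImage -Path "C:\Mount" -Save

    # Publish the updated WinPE image to all WDS servers via VMM.
    Publish-SCWindowsPE -Path "C:\temp\Boot.wim"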

[Screenshot: running the WinPE driver injection script]

This will add the drivers to the Boot.wim file and publish it to the WDS servers.

[Screenshot: WDS server with the updated Boot.wim]

After this is done, the Boot.wim will work with your new drivers.




Save PowerShell Object to file for Remote Troubleshooting

This is nothing new to most of you PowerShell guys out there, but there are still a lot of IT pros who don't know about it. Sometimes we have to do some remote troubleshooting without having access to the system itself. You can ask the customer to send you some screenshots, but that doesn't really show everything, and maybe you have to contact the customer a hundred times to get the right information. A better solution is to have the customer run a PowerShell command or script and send you the output. But even a text file or a screenshot of the PowerShell output is not ideal: if you get a lot of text in a TXT file it is hard to sort, and some information may be missing, because the text output does not include all the properties of the PowerShell object.

I have started to use a simple method: export the PowerShell objects to an XML file and import them again on another system. This can be done with the PowerShell cmdlets Export-Clixml and Import-Clixml.

What I do is tell the customer to run the following command, which generates an XML file with the PowerShell objects, in this example the disks on his system.
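For example (the output path is just a placeholder):

    # Serialize the disk objects to an XML file the customer can send back.
    Get-Disk | Export-Clixml -Path C:\temp\disks.xml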

After I get this XML file, I can import it on my local system and work with the objects as if I were sitting in front of the customer's system.
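The import side looks like this. Note that Import-Clixml returns deserialized objects: you get all the property values, but no live methods on the objects:

    # Rehydrate the objects and query them as usual.
    $disks = Import-Clixml -Path C:\temp\disks.xml
    $disks | Format-Table FriendlyName, PartitionStyle, Size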

[Screenshot: Export-Clixml and Import-Clixml in action]

As I said, this is nothing new, but it can save you and your customer some time. Of course this works with other objects, not just disks 😉 For example, you can get cluster configurations, Hyper-V Virtual Switch configurations and much more.