

Containers PowerShell

First steps with Windows Containers

At Microsoft Ignite 2015 back in Chicago, Microsoft announced Windows Containers. With the release of Technical Preview 3 (TP3) for Windows Server 2016 we are finally able to start using and testing Windows Containers. But first let us have a quick look at what containers actually are.

The concept of containers is nothing new; in the Linux world containers are a well-known concept. Wikipedia describes Linux Containers as follows: LXC (Linux Containers) is an operating-system-level virtualization environment for running multiple isolated Linux systems (containers) on a single Linux control host. Containers provide operating-system-level virtualization through a virtual environment that has its own process and network space, instead of creating a full-fledged virtual machine. With Windows Server 2016 more or less the same concept comes to the Windows world. This makes containers much more light-weight, faster and less resource-consuming than Virtual Machines, which makes them perfect for some scenarios, especially dev/test scenarios or worker roles.

Container Ecosystem

If we have a look at the concept of containers, there are several pieces in the container ecosystem:

Container Ecosystem

First you have the Container Run-Time, which builds the boundaries between the different containers and the operating system. To make deployment easier, faster and more efficient you build Container Images, which include the application frameworks as well as the applications on top of the OS used for the container. To use, store and share Container Images you can use an Image Repository.

The question most people will ask is: how are containers different from Virtual Machines?

Physical Server

Physical Host

In the beginning we installed an operating system on physical hardware and installed applications directly in that operating system.

Virtual Machines

Virtual Machines

With virtual machines we simulated virtual hardware on top of the operating system of the physical server. We installed an operating system inside the virtual machine on top of the virtual hardware and installed applications inside the VM. In this case, each virtual machine has its own operating system.



With containers we use an operating-system-level virtualization environment which creates boundaries between different applications. This is so efficient that you can run multiple applications side by side without affecting each other. Since this is operating-system-level virtualization, you can not only use it directly on the operating system running on the physical hardware, you can also use it inside a virtual machine. This is, by the way, how I expect most container deployments to look.

Windows Containers vs. Hyper-V Containers

Hyper-V Containers

Microsoft will provide two different types of Container Run-Times. One is Windows Containers and the other one will be Hyper-V Containers (not Hyper-V Virtual Machines). In some cases it may not be compliant for certain applications to share the same operating system. In this case Hyper-V Containers add an extra security boundary. Hyper-V Containers are basically Windows Containers running in a Hyper-V partition, so you gain everything you get with Windows Containers, but with another layer of isolation. The great thing here is that both Container Run-Times use the exact same image format. This means if an image is created in a Windows Container Run-Time it also works as a Hyper-V Container and vice versa.

Hyper-V Containers Nested Virtualization

The other great side effect of Hyper-V Containers is that, in order to run Hyper-V Containers inside a Virtual Machine, we need nested virtualization, which will be included in Windows Server 2016 Hyper-V. By the way, Hyper-V Containers are not part of Technical Preview 3.

(Pictures from the Microsoft Ignite 2015 presentation by Taylor Brown and Arno Mihm, Program Managers for Containers)

Deploy Windows Containers

With the release of Technical Preview 3 of Windows Server 2016, Microsoft made Windows Containers available to the public. To get started you can download and install Windows Server 2016 inside a Virtual Machine or even on bare metal. If the virtual machine has an internet connection you can use the following command to download the configuration script, which will prepare your container host.
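
A minimal sketch of that download step using Invoke-WebRequest; the aka.ms shortlink is the one Microsoft documented for TP3, so verify it is still valid before you rely on it:

  # Download the container host setup script (shortlink as documented for TP3)
  Invoke-WebRequest -Uri https://aka.ms/setupcontainers -OutFile C:\ContainerSetup.ps1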

Install Windows Container Host

After that you can run the C:\ContainerSetup.ps1 script, which will prepare your container host. This can take some time depending on your internet connection and hardware.

The VM will restart several times and when it is finished you can start using Windows Containers inside this Virtual Machine.

Managing Windows Containers

Containers PowerShell Module

After you have logged in to the Virtual Machine you can start managing Containers using PowerShell:

Containers PowerShell
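
To get a first overview of the cmdlets the preview module ships with, a quick check (module name as used in TP3, subject to change in later releases):

  # List the cmdlets of the preview Containers module
  Get-Command -Module Containers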

Get the Container Images; by default you will get a WindowsServerCore image. You can also create your own images based on this image.
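
A sketch of that step with the TP3 Containers module (preview cmdlet, name may change later):

  # List the container OS images available on the host
  Get-ContainerImage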

Create a new Container

Start the container

Connect to the Container using Enter-PSSession
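
Putting the create, start and connect steps together, here is a minimal sketch based on the TP3 Containers module; the container name, image name and virtual switch name are examples and the preview cmdlets may change in later builds:

  # Create a new container from the WindowsServerCore image (names are examples)
  $container = New-Container -Name "TP3Demo" -ContainerImageName "WindowsServerCore" -SwitchName "Virtual Switch"
  # Start the container
  Start-Container $container
  # Open an interactive PowerShell session inside the running container
  Enter-PSSession -ContainerId $container.ContainerId -RunAsAdministrator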

Of course you can also use the docker command to manage your containers.

Windows Containers Docker
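
As a sketch, the same workflow with the docker client included in the TP3 preview; the image name windowsservercore is an assumption based on the preview image, so check what docker images actually lists on your host:

  # List the images docker knows about
  docker images
  # Run an interactive command prompt in a new container (container name and image are examples)
  docker run -it --name tp3demo windowsservercore cmd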

Deploy a Container Host in Microsoft Azure

If you don’t want to go through the whole installation process you can also use a Template in Microsoft Azure to deploy a new Container Host Virtual Machine.

Microsoft Azure Windows Server Container Preview

If you need some more information on Windows Containers check out the Microsoft Resources on MSDN about Windows Server Containers.


Nutanix Coding Challenge Total Recode

Judge at the Nutanix Coding Challenge

Nutanix just announced the PowerShell Coding Challenge for your Nutanix environment. The challenge is to build a script which solves a real-world problem in one of these use cases: provisioning/orchestration, reporting, data protection, disaster recovery and runbook automation.

Do you have what it takes to write the best script for a Nutanix environment? Find out by participating in the inaugural Total Recode challenge. This global contest gives you a platform for showcasing your best talent. May the most creative, badass coding guru win!

Want to get more familiar with the Nutanix product and test your script? Check out the Nutanix Prism APIs and our recently announced Community Edition software.

Nutanix Coding Challenge Prizes

You can win great prizes:

  • Best Overall
    DJI Inspire 1 Drone (valued up to $4,000) or $4,000 cash prize
  • Most Impactful
    Home Lab ($2,500 value) or $2,500 cash prize
  • Most Creative
    $2,000 cash prize

And I am proud to be a judge in this contest together with other great minds:


Nutanix Coding Challenge Judges

If you want to know more or join the challenge, check out the Nutanix Coding Challenge: Total Recode website.

E2EVC Copenhagen

Speaking at E2EVC 2015 Berlin

Last year I spoke at the Experts 2 Experts Virtualization Conference (E2EVC) in Barcelona and Brussels. And I am proud to announce that I will speak at E2EVC 2015 in Berlin next week, from 12-14 June. Together with Michael Ruefli (Microsoft MVP for Cloud and Datacenter Management) I will speak about the latest announcements from the Microsoft Cloud and Datacenter evolution, covering topics like Azure Stack, Nano Server, Windows Server 2016, System Center 2016, Hyper-V and Microsoft Azure.

E2EVC Virtualization Conference is a non-commercial virtualization community event. The main goal of E2EVC is to bring the best virtualization experts together to exchange knowledge and to establish new connections. E2EVC is a weekend crammed with presentations, master classes and discussions delivered by both virtualization vendor product teams and independent experts. I am happy to be part of the community and to listen to other industry-leading experts. Hopefully I will see you in Berlin.



Cisco UCS Hardware

Cisco UCS supports RoCE for Microsoft SMB Direct

As you may know, we use SMB as the storage protocol for several Hyper-V deployments using Scale-Out File Server and Storage Spaces, which adds a lot of value to these Hyper-V deployments. To boost performance, Microsoft uses RDMA (SMB Direct) to accelerate storage network performance.

RDMA over Converged Ethernet (RoCE) allows direct memory access over an Ethernet network. RoCE is a link layer protocol, and hence it allows communication between any two hosts in the same Ethernet broadcast domain. RoCE delivers superior performance compared to traditional network socket implementations because of lower latency, lower CPU utilization and higher utilization of network bandwidth. Windows Server 2012 and later versions use RDMA for accelerating and improving the performance of SMB file sharing traffic and Live Migration. If you need to know more about RDMA or SMB Direct, check out my blog post: Hyper-V over SMB: SMB Direct.

With Cisco UCS Manager Release 2.2(4), Cisco finally supports RoCE for SMB Direct. When RoCE is enabled, UCS Manager sends additional configuration information to the adapter while creating or modifying an Ethernet adapter policy.
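
Once the vNICs are configured, you can check on the Windows side whether the adapters report RDMA capability and whether the SMB client sees them; a quick sketch using the built-in cmdlets (read-only, nothing is changed):

  # Show which network adapters report RDMA (SMB Direct) capability
  Get-NetAdapterRdma
  # Show the interfaces the SMB client can use, including their RDMA capability
  Get-SmbClientNetworkInterface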

Guidelines and Limitations for SMB Direct with RoCE

  • SMB Direct with RoCE is supported only on Windows Server 2012 R2.
  • SMB Direct with RoCE is supported only with Cisco UCS VIC 1340 and 1380 adapters.
  • Cisco UCS Manager does not support more than 4 RoCE-enabled vNICs per adapter.
  • Cisco UCS Manager does not support RoCE with NVGRE, VXLAN, NetFlow, VMQ, or usNIC.
  • You cannot use Windows Server NIC Teaming together with RDMA-enabled adapters in Windows Server 2012 and Windows Server 2012 R2, or you will lose the RDMA functionality on these adapters.
  • Maximum number of queue pairs per adapter is 8192.
  • Maximum number of memory regions per adapter is 524288.
  • If you do not disable RoCE before downgrading Cisco UCS Manager from Release 2.2(4), the downgrade will fail.

Check out my post about Hyper-V over SMB:

Cisco UCS C200 M2 with Windows Server 2008 R2 and Windows Server 8 #HyperV

Cisco UCS and Hyper-V Enable Stateless Offloads with NVGRE

As I already mentioned I did several Hyper-V and Microsoft Windows Server projects with Cisco UCS. With Cisco UCS you can now configure stateless offloads for NVGRE traffic which is needed for Hyper-V Network Virtualization.

Cisco UCS Manager supports stateless offloads with NVGRE only with Cisco UCS VIC 1340 and/or Cisco UCS VIC 1380 adapters that are installed on servers running Windows Server 2012 R2 operating systems.

To use this you have to create an Ethernet Adapter Policy and configure it to enable stateless offloads with NVGRE. In the Resources area, set the following values:

  • Transmit Queues = 1
  • Receive Queues = n (up to 8)
  • Completion Queues = # of Transmit Queues + # of Receive Queues
  • Interrupts = # Completion Queues + 2

And in the Options area set the following settings:

  • Network Virtualization using Generic Routing Encapsulation = Enabled
  • Interrupt Mode = Msi-X

Also make sure you have installed the required eNIC driver version or later.
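
After the policy is applied, you can verify from the Windows Server 2012 R2 host that the adapter exposes the NVGRE task offload; a small sketch with the built-in cmdlet:

  # Shows whether encapsulated packet task offload (NVGRE) is supported and enabled per adapter
  Get-NetAdapterEncapsulatedPacketTaskOffload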

For more information, see http://www.cisco.com/c/en/us/td/docs/unified_computing/ucs/sw/vic_drivers/install/Windows/b_Cisco_VIC_Drivers_for_Windows_Installation_Guide.html.


Hyper-V vNext is going to support nested Virtualization

I already wrote a blog post about some of the new features coming with the next version of Hyper-V. This week was the Microsoft Build conference, where Microsoft talked a lot about new stuff for developers and the future of Windows, Azure, Office and so on. Today I found a very interesting email in my inbox from Ronald Beekelaar (Microsoft MVP for Hyper-V), who had attended a session at the Build conference where Taylor Brown and Mathew John were talking about Windows Containers: What, Why and How. In this session there was a quick side note that Windows Server vNext Hyper-V will support nested virtualization.

Until today a Hyper-V server could only run Virtual Machines when it was running on physical hardware. This is no problem in production, but when you wanted to do demos or training you needed a lot of hardware to show what is possible with Hyper-V. With nested virtualization you can run Hyper-V inside a virtual machine and build, for example, a demo and lab environment on your notebook, including Hyper-V clusters and so on.
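
For reference, this is roughly how exposing the virtualization extensions to a VM ended up looking in later Windows Server 2016 previews; the cmdlet parameter is not part of the build discussed here and the VM name is just an example:

  # Expose virtualization extensions to a (powered-off) VM so it can run Hyper-V itself
  Set-VMProcessor -VMName "HV-Nested01" -ExposeVirtualizationExtensions $true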

While for some of you this might not be a big deal, it is a huge deal for everyone who does demos or training on Hyper-V.


You can watch the session here on Microsoft Channel9.

NIC Teaming

Overview on Windows Server and Hyper-V 2012 R2 NIC Teaming and SMB Multichannel

I know this is nothing new, but since I had to mention the whitepaper on NIC Teaming, the use of SMB Multichannel and the configuration with System Center Virtual Machine Manager in a couple of meetings, I want to make sure you have an overview on my blog.

NIC Teaming

Windows Server NIC Teaming was introduced in Windows Server 2012 (Codename Windows Server 8). NIC teaming, also known as Load Balancing/Failover (LBFO), allows multiple network adapters to be placed into a team for the purposes of bandwidth aggregation, and/or traffic failover to maintain connectivity in the event of a network component failure.

NIC Teaming Recommendation

For the design, the default and recommended configuration is NIC Teaming with Switch Independent mode and Dynamic load balancing; in some scenarios where you have the right switches you can also use LACP with Dynamic.
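
As a sketch of that recommended configuration, creating such a team in PowerShell on Windows Server 2012 R2 could look like this (team name and member adapter names are examples):

  # Switch Independent teaming with the Dynamic load balancing algorithm
  New-NetLbfoTeam -Name "Team1" -TeamMembers "NIC1","NIC2" -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic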

Download Windows Server 2012 R2 NIC Teaming (LBFO) Deployment and Management Whitepaper

This guide describes how to deploy and manage NIC Teaming with Windows Server 2012 R2.

You can find the Whitepaper on Windows Server 2012 R2 NIC Teaming (LBFO) Deployment and Management in the Microsoft Download Center.

SMB Multichannel

Hyper-V over SMB Multichannel

If you use Hyper-V over SMB you can use SMB Multichannel as an even better way to distribute SMB 3.0 traffic across different network adapters, or you can use a mix of both NIC Teaming and SMB Multichannel. Check out my blog posts about Hyper-V over SMB: SMB Multichannel, SMB Direct (RDMA) and Scale-Out File Server and Storage Spaces.
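
To see whether SMB Multichannel is actually in use on a Hyper-V host, a quick read-only check with the built-in SMB cmdlets:

  # Confirm SMB Multichannel is enabled on the client
  Get-SmbClientConfiguration | Select-Object EnableMultiChannel
  # Show the connections SMB Multichannel has established per interface
  Get-SmbMultichannelConnection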

Configuration with System Center Virtual Machine Manager

Logical Switch

Some months back I also wrote some blog posts about the configuration of Hyper-V Converged Networking with System Center Virtual Machine Manager. This guide will help you understand how to deploy NIC Teaming with System Center Virtual Machine Manager using the Logical Switch on Hyper-V hosts.