Tag: Nested Virtualization



10 hidden Hyper-V features you should know about!

Microsoft has added some amazing new features and improvements to Hyper-V over the past few years. A lot of them you can use in Windows Server 2016 Hyper-V today, but there are also many features hidden in the user interface, and they are also included in Windows 10 Pro and Enterprise. I think this list should give you a good idea about some of them.

Nested Virtualization


Hyper-V Nested Virtualization allows you to run Hyper-V inside a Hyper-V Virtual Machine. This is great for testing, demo and training scenarios, and it works on Windows Server 2016 and Windows 10 Pro and Enterprise. Microsoft Azure will also offer new Virtual Machine sizes which support the Nested Virtualization feature in the Azure public cloud. Nested Virtualization is not just great if you want to run virtual machines inside a virtual machine; you can also run Hyper-V Containers inside a Hyper-V or Azure Virtual Machine, and I think this will be the largest use case in the future. Hyper-V Containers are a feature that brings the isolation of a Virtual Machine to a fast, light and small-footprint container. To enable Nested Virtualization you have the following requirements:

  • At least 4 GB RAM available for the virtualized Hyper-V host.
  • To run at least Windows Server 2016 or Windows 10 build 10565 (and higher) on both the physical Hyper-V host and the virtualized host. Running the same build in both the physical and virtualized environments generally improves performance.
  • A processor with Intel VT-x (nested virtualization is available only for Intel processors at this time).
  • Both levels must run Hyper-V; other hypervisors will not work.

To configure a Virtual Machine for Nested Virtualization, follow these steps:

  • Disable Dynamic Memory on the Virtual Machine
  • Enable Virtualization Extensions on the vCPU
  • Enable MAC Address Spoofing
  • Set the memory of the Virtual Machine to a minimum of 4 GB RAM

To enable the Virtualization Extensions on the vCPU you can run the following PowerShell command.
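A minimal sketch, assuming a VM named "NestedVM" (the VM must be turned off):

    # Expose the host CPU's virtualization extensions to the VM
    Set-VMProcessor -VMName "NestedVM" -ExposeVirtualizationExtensions $true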

PowerShell Direct


Hyper-V PowerShell Direct is also one of the great new features in Windows 10 and Windows Server 2016 Hyper-V. PowerShell Direct allows you to connect to a Virtual Machine using PowerShell without connecting over the network. Instead of the network, PowerShell Direct uses the Hyper-V VMBus to connect from the Hyper-V host to the virtual machine. This is handy if you are doing some automation or if you don't have network access to the virtual machine. In terms of security, you still need to provide credentials to access the virtual machine.

To use PowerShell Direct you have the following requirements:

  • The virtual machine must be running locally on the Hyper-V host and must be started.
  • You must be logged into the host computer as a Hyper-V administrator.
  • You must supply valid user credentials for the virtual machine.
  • The host operating system must run Windows 10, Windows Server 2016, or a higher version.
  • The virtual machine must run Windows 10, Windows Server 2016, or a higher version.

To use PowerShell Direct, just use the Enter-PSSession or Invoke-Command cmdlets with the -VMName, -VMId, or -VM parameter.
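For example, assuming a VM named "NestedVM":

    # Open an interactive session in the guest (prompts for guest credentials)
    Enter-PSSession -VMName "NestedVM" -Credential (Get-Credential)

    # Or run a single command without an interactive session
    Invoke-Command -VMName "NestedVM" -Credential (Get-Credential) -ScriptBlock { Get-Service }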

Hyper-V Virtual Switch using NAT


If you are running Hyper-V on your workstation or laptop, you know that networking can be kind of a problem. With the Hyper-V Virtual Switch using NAT, you can now create an internal network for your virtual machines and still allow them, for example, to have internet access, as if your virtual machines were running behind a router. To use this feature you have the following requirements:

  • Windows 10 and Windows Server 2016 build 14295 or later
  • Enabled Hyper-V role

To enable it, you first create an internal switch using PowerShell, then set the IP address on the virtual NIC in the Management OS, and then set the NAT configuration:
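A minimal sketch, assuming a switch named "NATSwitch" and the 172.21.21.0/24 network:

    # Create the internal switch
    New-VMSwitch -Name "NATSwitch" -SwitchType Internal

    # Assign the NAT gateway address to the virtual NIC in the Management OS
    New-NetIPAddress -IPAddress 172.21.21.1 -PrefixLength 24 -InterfaceAlias "vEthernet (NATSwitch)"

    # Create the NAT network itself
    New-NetNat -Name "NATNetwork" -InternalIPInterfaceAddressPrefix 172.21.21.0/24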

To create NAT forwarding rules you can, for example, use the following command:
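A sketch, assuming the "NATNetwork" NAT network from above and a VM with the IP address 172.21.21.2:

    # Forward port 80 on the host to port 80 of the VM
    Add-NetNatStaticMapping -NatName "NATNetwork" -Protocol TCP -ExternalIPAddress 0.0.0.0 -ExternalPort 80 -InternalIPAddress 172.21.21.2 -InternalPort 80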

Virtual Battery for Virtual Machines


With one of the Windows 10 Insider builds, and later with the release of the Windows 10 Fall Creators Update, Microsoft enabled a Virtual Battery feature for Hyper-V Virtual Machines. This allows Hyper-V VMs to see the battery status of the host. This is great when you are running Hyper-V on a notebook, or if you have a UPS battery on your server.

Hyper-V VMConnect – Enhanced Session Mode


Interacting with Virtual Machines can be difficult and time-consuming using the default VM console, since you cannot copy and paste or connect devices. VMConnect lets you use a computer's local resources in a virtual machine, like a removable USB flash drive or a printer, and in addition, Enhanced Session Mode also lets you resize the VMConnect window and use copy and paste. This makes it almost as if you were using the Remote Desktop Client to connect to the Virtual Machine, but without a network connection; instead it makes use of the VMBus.

The Enhanced Session Mode feature was introduced with Windows Server 2012 R2 and Windows 8.1. Enhanced session mode basically provides your Virtual Machine Connection with RDP (Remote Desktop Protocol) capabilities over the Hyper-V VMBus, including the following:

  • Display Configuration
  • Audio redirection
  • Printer redirection
  • Full clipboard support (improved over limited prior-generation clipboard support)
  • Smart Card support
  • USB Device redirection
  • Drive redirection
  • Redirection for supported Plug and Play devices

Requirements for the Enhanced Session Mode are:

  • The Hyper-V host must have Enhanced session mode policy and Enhanced session mode settings turned on
  • The computer on which you use VMConnect must run Windows 10, Windows 8.1, Windows Server 2016, or Windows Server 2012 R2 or higher
  • The virtual machine must have Remote Desktop Services enabled, and run Windows 8.1 (or higher) or Windows Server 2012 R2 (or higher) as the guest operating system.

You can simply use it by pressing the Enhanced Session button in VMConnect (if you meet all the requirements). On the Windows 10 client this is enabled by default on the host. On Windows Server you have to enable it first in Hyper-V Manager under Hyper-V Settings.

Hyper-V Manager Zoom Level


In the Windows 10 Creators Update, Microsoft introduced a new feature in the VMConnect console. It allows you to control the zoom level of the Virtual Machine console, which is especially handy if you have a high-DPI screen.

Virtual TPM Chip


If you are running Windows 10 or Windows Server 2016 or higher, you can make use of a feature called Shielded Virtual Machines. This allows you to protect your virtual machines from being accessed from the outside. With this feature Microsoft added different levels of security enhancements. One of them is the possibility to add a Virtual TPM chip to the virtual machine. With that enabled, you can use BitLocker or another encryption technology to encrypt your virtual machine disks from inside the VM.


You can enable the Virtual TPM chip using Hyper-V Manager or PowerShell. The virtual machine needs to be shut down.
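A minimal PowerShell sketch, assuming a VM named "NestedVM" (a key protector has to exist before the vTPM can be enabled):

    # Create a local key protector for the VM (fine for test machines;
    # production setups should use a Host Guardian Service instead)
    Set-VMKeyProtector -VMName "NestedVM" -NewLocalKeyProtector

    # Enable the virtual TPM chip
    Enable-VMTPM -VMName "NestedVM"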

Just to be clear: if you really need full protection, have a look at Shielded Virtual Machines with the Host Guardian Service (HGS).

VM Resource Metering


With Windows Server 2012, Microsoft introduced a new feature in Hyper-V called VM Resource Metering which allows you to measure the usage of a virtual machine. It tracks CPU, memory, disk and network usage. This is a great feature, especially if you need to do chargeback, or maybe even for troubleshooting.

You can enable VM Resource Metering using PowerShell:
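A sketch, again assuming a VM named "NestedVM":

    # Start collecting resource usage data for the VM
    Enable-VMResourceMetering -VMName "NestedVM"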

To measure the virtual machine, you can use the following command:
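Continuing with the same assumed VM name:

    # Report the collected CPU, memory, disk and network usage
    Measure-VM -Name "NestedVM"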

Export and Share Hyper-V Virtual Machines


Another feature a lot of people do not know about is that you can export Hyper-V Virtual Machines to copy them to another computer or server. The great thing about this: it can even be done while the virtual machine is running, and you can even export the state of the virtual machine with it. You can use the UI to do this, or you can just use PowerShell with the Export-VM cmdlet.
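A sketch, assuming a VM named "NestedVM" and a hypothetical export folder D:\Export:

    # Export the VM (configuration, disks and, if running, its state) to the folder
    Export-VM -Name "NestedVM" -Path "D:\Export"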

In the Windows 10 Fall Creators Update, Microsoft also added a button to share the Virtual Machine. This does not only export the virtual machine, it also creates a compressed VM export file (.vmcz).

Hyper-V Containers


In Windows 10 and Windows Server 2016 you can run Windows Containers using Docker. While on Windows Server you can choose between running a Windows Server Container or a Hyper-V Container, on Windows 10 you will always run a Hyper-V Container. Hyper-V Containers and Windows Containers are fully compatible with each other, which means you can start a Windows Container in a Hyper-V Container runtime and the other way around, but the Hyper-V Container gives you an extra layer of isolation between your containers and your operating system. This does not just make running containers much more secure; since the Windows 10 Fall Creators Update and Windows Server RS3 (Redstone 3), it also allows you to run Linux Containers on a Windows Container host, which will make Windows the best platform to run Windows Containers and Linux Containers side by side.
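As a sketch, this is how you pick the runtime with Docker, assuming the microsoft/nanoserver base image of that era:

    # Run as a Hyper-V Container (extra isolation through a utility VM)
    docker run -it --isolation=hyperv microsoft/nanoserver cmd

    # Run as a Windows Server Container (process isolation, Windows Server only)
    docker run -it --isolation=process microsoft/nanoserver cmd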

I hope this short list was helpful and showed you some features you didn't know were there in Hyper-V. Some of these features are still in preview and might not be available in production versions of Hyper-V. Leave your favorite secret Hyper-V features in the comments!




How to setup Nested Virtualization in Microsoft Azure

At the Microsoft Build Conference this year, Microsoft announced Nested Virtualization for Azure Virtual Machines, and last week Microsoft announced the availability of these Azure VMs, which support Nested Virtualization. Nested Virtualization basically allows you to run a hypervisor inside a Virtual Machine running on a hypervisor, which means you can run Hyper-V within a Hyper-V Virtual Machine or within an Azure Virtual Machine, kind of like Inception for Virtual Machines.


You can use Nested Virtualization since Windows Server 2016 and the corresponding release of Windows 10. For more details, check out my blog post: Nested Virtualization in Windows Server 2016 and Windows 10

With the release of the Azure Dv3 and Ev3 VM sizes:

  • D2-64 v3 instances are the latest generation of General Purpose instances. D2-64 v3 instances are based on the 2.3 GHz Intel Xeon® E5-2673 v4 (Broadwell) processor and can achieve 3.5 GHz with Intel Turbo Boost Technology 2.0. D2-64 v3 instances offer a combination of CPU, memory, and local disk suitable for most production workloads.
  • E2-64 v3 instances are the latest generation of Memory Optimized instances. E2-64 v3 instances are based on the 2.3 GHz Intel Xeon® E5-2673 v4 (Broadwell) processor and can achieve 3.5 GHz with Intel Turbo Boost Technology 2.0. E2-64 v3 instances are ideal for memory-intensive enterprise applications.

With the upgrade to the new Intel Broadwell processors, Microsoft enabled Nested Virtualization, which allows a couple of different scenarios when you create a Virtual Machine running Windows Server 2016.

  • You can run Hyper-V Containers (Windows Containers with additional isolation) inside an Azure VM. With future releases we will also be able to run Linux Containers in Hyper-V Containers running on a Windows Server OS.
  • You can quickly spin up and shut down new demo and test environments, and you only pay when you use them (pay-per-use).

How to Setup Nested Virtualization in Azure

Deploy Azure VM

To set up Nested Virtualization inside an Azure Virtual Machine, you first need to create a new Virtual Machine using one of the new instance sizes, like Ev3 or Dv3, and Windows Server 2016. I also recommend installing all the latest Windows Server patches on the system.

Optional: Optimize Azure VM Storage

This step is optional, but if you want better performance and more storage for your nested Virtual Machines to run on, it makes sense.


In my case I attached two additional data disks to the Azure VM; of course you can choose more disks or different sizes. You will now see two new data disks inside your Azure Virtual Machine. Do not format them, because we are going to create a new Storage Spaces pool and a simple virtual disk, so we get the performance of both disks at the same time. In the past this was called disk striping.


With that you can create a new Storage Spaces storage pool and a new virtual disk inside the VM using the storage layout "Simple", which basically configures it as striping.
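A minimal sketch, assuming the hypothetical pool and disk names "VMPool" and "VMDisk":

    # Gather all data disks that can be pooled
    $disks = Get-PhysicalDisk -CanPool $true

    # Create the storage pool from those disks
    New-StoragePool -FriendlyName "VMPool" -StorageSubSystemFriendlyName "Windows Storage*" -PhysicalDisks $disks

    # Create a simple (striped) virtual disk across the pool
    New-VirtualDisk -StoragePoolFriendlyName "VMPool" -FriendlyName "VMDisk" -ResiliencySettingName Simple -UseMaximumSize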


I also formatted the disk and set the drive letter to V:; this will be the volume where I will place my nested virtual machines.
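Something like this, continuing from the virtual disk created above:

    # Initialize, partition and format the new virtual disk as volume V:
    Get-VirtualDisk -FriendlyName "VMDisk" | Get-Disk |
        Initialize-Disk -PassThru |
        New-Partition -DriveLetter V -UseMaximumSize |
        Format-Volume -FileSystem NTFS -NewFileSystemLabel "VMs"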

Install Hyper-V inside the Azure VM


The next step is to install the Hyper-V role in your Azure Virtual Machine. You can use PowerShell to do this, since this is a regular Windows Server 2016. The following command will install Hyper-V and restart the virtual machine:
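    # Install the Hyper-V role plus management tools, then reboot
    Install-WindowsFeature -Name Hyper-V -IncludeManagementTools -Restart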


After the installation you have Hyper-V installed and enabled inside your Azure Virtual Machine. Now you need to configure the networking for the Hyper-V virtual machines; for this we will use NAT networking.

Configure Networking for the Nested Environment


To allow the nested virtual machines to access the internet, we need to set up Hyper-V networking in the right way. For this we use the Hyper-V internal VM Switch and NAT networking. I described this here: Set up a Hyper-V Virtual Switch using a NAT Network

Create a new Hyper-V Virtual Switch

First, create an internal Hyper-V VM Switch:
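A sketch, assuming the switch name "NATSwitch":

    # Create the internal switch for the nested VMs
    New-VMSwitch -Name "NATSwitch" -SwitchType Internal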

Configure the NAT Gateway IP Address

The internal Hyper-V VM Switch creates a virtual network adapter on the host (the Azure Virtual Machine); this network adapter will be used for the NAT gateway. Configure the NAT gateway IP address using the New-NetIPAddress cmdlet:
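Continuing with the assumed switch name, and using the 172.21.21.0/24 network referenced below:

    # Assign the NAT gateway address to the switch's virtual adapter
    New-NetIPAddress -IPAddress 172.21.21.1 -PrefixLength 24 -InterfaceAlias "vEthernet (NATSwitch)"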

Configure the NAT rule
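A sketch, assuming the NAT network name "NATNetwork":

    # Create the NAT network for the 172.21.21.0/24 range
    New-NetNat -Name "NATNetwork" -InternalIPInterfaceAddressPrefix 172.21.21.0/24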

After that you have finally created your NAT network, and you can now use that network to connect your virtual machines using IP addresses from 172.21.21.2 to 172.21.21.254.

Now you can assign these IP addresses to the nested virtual machines. You can also set up a DHCP server in one of the nested VMs to assign IP addresses automatically to new VMs.

Optional: Create NAT forwards inside Nested Virtual Machines

To forward specific ports from the Host to the guest VMs you can use the following commands.

This example creates a mapping between port 80 of the host to port 80 of a Virtual Machine with an IP address of 172.21.21.2.
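A sketch, assuming the "NATNetwork" NAT network from above:

    # Map host port 80 to port 80 on the nested VM at 172.21.21.2
    Add-NetNatStaticMapping -NatName "NATNetwork" -Protocol TCP -ExternalIPAddress 0.0.0.0 -ExternalPort 80 -InternalIPAddress 172.21.21.2 -InternalPort 80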

This example creates a mapping between port 82 of the Virtual Machine host to port 80 of a Virtual Machine with an IP address of 172.21.21.3.
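And the same pattern for the second mapping:

    # Map host port 82 to port 80 on the nested VM at 172.21.21.3
    Add-NetNatStaticMapping -NatName "NATNetwork" -Protocol TCP -ExternalIPAddress 0.0.0.0 -ExternalPort 82 -InternalIPAddress 172.21.21.3 -InternalPort 80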

Optional: Configure default Virtual Machine path

Since I have created an extra volume for my nested virtual machines, I configure it as the default path for Virtual Machines and Virtual Hard Disks.
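A sketch, assuming the V: volume from earlier and hypothetical folder names:

    # Set the default locations for new VM configurations and virtual hard disks
    Set-VMHost -VirtualMachinePath "V:\VMs" -VirtualHardDiskPath "V:\VHDs"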

Create Nested Virtual Machines inside the Azure VM


Now you can basically start to create Virtual Machines inside the Azure VM. You can, for example, use an existing VHD/VHDX or create a new VM from an ISO file, just as you would on a physical Hyper-V host.

Some crazy stuff to do

There is a lot more you could do; not all of it makes sense for everyone, but it could help in some cases.

  • Run the Azure Stack Development Kit – yes, Microsoft released the Azure Stack Development Kit, and you could use a large enough Azure virtual machine and run it in there.
  • Configure Hyper-V Replica and replicate Hyper-V VMs to your Azure VM running Hyper-V.
  • Nest a Nested Virtual Machine in an Azure VM – you could enable nesting on a VM running inside the Azure VM, so you get a VM inside a VM inside a VM. Just follow my blog post to create a nested Virtual Machine: Nested Virtualization in Windows Server 2016 and Windows 10

In my opinion, Nested Virtualization is mostly helpful if you run Hyper-V Containers, but it also works great if you want to run some Virtual Machines inside an Azure VM, for example to run a lab or to test something.




Hyper-V Container and Nested Virtualization in Microsoft Azure Virtual Machines

Last week Microsoft announced some pretty cool new Azure stuff, like the Azure Cloud Shell, Azure PowerShell 4.0, Azure Cosmos DB and much more. In the session about Azure Compute, Microsoft introduced a bunch of new features, like new VM sizes, new experiences and new integration technology, as well as updates to Azure Service Fabric, Azure Container Service and Azure Functions. One announcement which really got my interest was the new Dv3 and Ev3 Virtual Machine sizes, which enable customers to use virtualization inside their Windows Server Virtual Machines on Azure, powered by Nested Virtualization in Windows Server 2016 Hyper-V. With that, Dv3 and Ev3 Azure Virtual Machines are Nested Virtualization enabled. This means you can now run Nested Virtualization in Microsoft Azure Virtual Machines.

Update: The new Azure Dv3 and Ev3 VM sizes are now available, and you can now use Nested Virtualization in Azure.

Azure Nested Virtualization and Hyper-V Containers

You can now run Hyper-V in Azure Virtual Machines, and even more important, you can now run Hyper-V Containers inside Azure Virtual Machines. With the announcement that Windows Server 2016 will support Hyper-V Containers running Linux and Windows Server, this is great news. You will be able to create container hosts in Azure running Windows Server and create Windows and Linux Containers on the same container host.

Azure VM Sizes

By the way, if you want to run Hyper-V Containers in Azure today, and you don't want to wait until the Dv3 and Ev3 series are available, you can run them inside Azure Service Fabric. So yes, Microsoft now allows you to run Hyper-V Containers in Azure Service Fabric.

Azure Nested Virtualization Demo

As you could see in the demo, they are offering quite large Virtual Machines with a lot of RAM, running on Intel's Xeon E7 CPUs.




Switch a Windows Server Container to a Hyper-V Container

With Technical Preview 4 of Windows Server 2016, Microsoft made the new Hyper-V Containers available. With that you can now use Windows Server Containers and Hyper-V Containers. To run Hyper-V Containers you have to make sure Hyper-V Nested Virtualization is active for your Container Host VM.

If you create a new container, it will be a Windows Server Container by default; if you want a Hyper-V Container instead, you have to switch the RuntimeType to Hyper-V.

With the following command you can see which RuntimeType the Container has:
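A sketch, assuming the preview Containers PowerShell module from Technical Preview 4 and a container named "TestContainer":

    # Show the runtime type of the container
    Get-Container -Name "TestContainer" | Select-Object Name, RuntimeType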

To change the RuntimeType to Hyper-V Container you can use the following command:
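    # Switch the container to the Hyper-V runtime (assuming the TP4 module's
    # RuntimeType values of Default and HyperV; the container must be stopped)
    Set-Container -Name "TestContainer" -RuntimeType HyperV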

To switch it back to a Windows Server Container you can use the following command:
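    # Switch back to the default Windows Server Container runtime
    Set-Container -Name "TestContainer" -RuntimeType Default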





Nested Virtualization in Windows Server 2016 and Windows 10

I already wrote a blog post about Nested Virtualization in Windows 10 some weeks ago. With Technical Preview 4 of Windows Server 2016, Microsoft also introduced Nested Virtualization in Windows Server Hyper-V. Nested Virtualization allows you to run a hypervisor inside a Virtual Machine running on a hypervisor. This is great for demo and lab environments, and also if you want to run virtual Hyper-V servers in Microsoft Azure IaaS Virtual Machines (we will see if Microsoft will support this in Azure in the future).

Requirements

  • At least 4 GB RAM available for the virtualized Hyper-V host.
  • To run at least Windows Server 2016 Technical Preview 4 or Windows 10 build 10565 on both the physical Hyper-V host and the virtualized host. Running the same build in both the physical and virtualized environments generally improves performance.
  • A processor with Intel VT-x (nested virtualization is available only for Intel processors at this time).
  • Both levels must run Hyper-V; other hypervisors will not work.

How to set it up

To enable Nested Virtualization in Hyper-V, Microsoft created a script you can use, which I already documented in my first blog post about Nested Virtualization. But of course you can also do this manually with the following steps:

  • Disable Dynamic Memory on the Virtual Machine
  • Enable Virtualization Extensions on the vCPU
  • Enable MAC Address Spoofing
  • Set the memory of the Virtual Machine to a minimum of 4 GB RAM

To set the Virtualization Extension for the vCPU you can use PowerShell:
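A minimal sketch, assuming a VM named "NestedVM" (the VM must be off):

    # Expose the host CPU's virtualization extensions to the VM
    Set-VMProcessor -VMName "NestedVM" -ExposeVirtualizationExtensions $true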

Limitations

Nested Virtualization comes with some limitations:

  • Once nested virtualization is enabled in a virtual machine, the following features are no longer compatible with that VM.
    These actions will either fail, or cause the virtual machine not to start if it is hosting other virtual machines:

    • Dynamic memory must be OFF. This will prevent the VM from booting.
    • Runtime memory resize will fail.
    • Applying checkpoints to a running VM will fail.
    • Live migration will fail — in other words, a VM which hosts other VMs cannot be live migrated.
    • Save/restore will fail.
  • Hosts with Device Guard enabled cannot expose virtualization extensions to guests.
  • Hosts with Virtualization Based Security (VBS) enabled cannot expose virtualization extensions to guests. You must first disable VBS in order to preview nested virtualization.

For more information check out the Microsoft page about Hyper-V Nested Virtualization.




Hyper-V Nested Virtualization in Windows 10 Build 10565

This week Microsoft released a new Windows 10 Insider Preview build to the Windows Insiders. It brings a couple of new features to the OS, and Ben Armstrong (Hyper-V Program Manager at Microsoft) mentions in a blog post that it also brings a preview of Nested Virtualization to Hyper-V in Windows 10. Nested Virtualization allows you to run Hyper-V inside a VM. This is perfect for lab and training scenarios, so you can run multiple Hyper-V servers without the need for a lot of physical hardware.

So how can you enable Nested Virtualization in this early preview build? Theo Thompson describes this in a blog post:

Step 1: Create a VM

Step 2: Run the enablement script

Given the configuration requirements (e.g. dynamic memory must be off), we’ve tried to make things easier by providing a PowerShell script.

This script will check your configuration, change anything which is incorrect (with permission), and enable nested virtualization for a VM. Note that the VM must be off.
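If you prefer to do it by hand, the core settings the script applies look roughly like this, assuming a VM named "NestedVM":

    # Nested virtualization requires static memory (at least 4 GB)
    Set-VM -VMName "NestedVM" -StaticMemory -MemoryStartupBytes 4GB

    # Expose the virtualization extensions to the VM
    Set-VMProcessor -VMName "NestedVM" -ExposeVirtualizationExtensions $true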

Step 3: Install Hyper-V in the guest

From here, you can install Hyper-V in the guest VM.
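For example, inside a Windows 10 guest, a sketch using the optional features cmdlet:

    # Enable the Hyper-V feature in the guest (a reboot may be required)
    Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V -All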

Step 4: Enable networking (optional)

Once nested virtualization is enabled in a VM, MAC spoofing must be enabled for networking to work in its guests. Run the following PowerShell (as administrator) on the host machine:
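A sketch, assuming the guest VM is named "NestedVM":

    # Allow the nested VMs' MAC addresses to pass through the virtual switch
    Set-VMNetworkAdapter -VMName "NestedVM" -MacAddressSpoofing On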

Step 5: Create nested VMs

This is still a very early preview, which means this feature still has a lot of known issues:

  • Both hypervisors need to be the latest versions of Hyper-V; other hypervisors will not work. Windows Server 2012 R2, or even builds prior to 10565, will not work.
  • Once nested virtualization is enabled in a VM, the following features are no longer compatible with that VM. These actions will either fail, or cause the VM not to start:
    • Dynamic memory must be OFF. This will prevent the VM from booting.
    • Runtime memory resize will fail.
    • Applying checkpoints to a running VM will fail.
    • Live migration will fail.
    • Save/restore will fail.
  • Once nested virtualization is enabled in a VM, MAC spoofing must be enabled for networking to work in its guests.
  • Hosts with Virtualization Based Security (VBS) enabled cannot expose virtualization extensions to guests. You must first disable VBS in order to preview nested virtualization.
  • This feature is currently Intel-only. Intel VT-x is required.
  • Beware: nested virtualization requires a good amount of memory. I managed to run a VM in a VM with 4 GB of host RAM, but things were tight.