Category: Private Cloud



Cisco UCS supports Consistent Device Naming (CDN)

Yesterday I posted about Cisco UCS supporting RDMA (SMB Direct) with firmware version 2.2(4b)B. Walter Dey, former Cisco Distinguished Engineer, not only informed me about the RDMA feature, he also showed me that Cisco UCS now supports Consistent Device Naming, which was introduced with Windows Server 2012. Consistent Device Naming (CDN) allows Ethernet interfaces to be named in a consistent manner, so interface names stay persistent when the adapter or other configuration changes. To use CDN in Cisco UCS you need to run firmware version 2.2(4b)B. This makes it a lot easier to identify network interfaces used with Windows Server 2012 R2 and Hyper-V.
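
Once CDN is enabled on the vNICs, a quick way to verify what Windows actually sees is PowerShell. A minimal sketch (the adapter names depend entirely on how you named the vNICs in UCS Manager):

    # List the interface names Windows received through Consistent Device Naming
    Get-NetAdapter | Sort-Object Name |
        Format-Table Name, InterfaceDescription, MacAddress -AutoSize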



Cisco UCS Hardware

Cisco UCS supports RoCE for Microsoft SMB Direct

As you may know, we use SMB as the storage protocol for several Hyper-V deployments with Scale-Out File Server and Storage Spaces, which adds a lot of value to Hyper-V deployments. To boost performance, Microsoft uses RDMA (SMB Direct) to accelerate storage network performance.

RDMA over Converged Ethernet (RoCE) allows direct memory access over an Ethernet network. RoCE is a link layer protocol, and hence it allows communication between any two hosts in the same Ethernet broadcast domain. RoCE delivers superior performance compared to traditional network socket implementations because of lower latency, lower CPU utilization and higher utilization of network bandwidth. Windows Server 2012 and later versions use RDMA for accelerating and improving the performance of SMB file sharing traffic and Live Migration. If you need to know more about RDMA or SMB Direct, check out my blog post: Hyper-V over SMB: SMB Direct
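
To check whether your adapters and the SMB client actually see RDMA, you can use the built-in cmdlets. A minimal sketch:

    # Show which adapters have RDMA enabled and operational
    Get-NetAdapterRdma

    # Show whether the SMB client considers the interfaces RDMA capable
    Get-SmbClientNetworkInterface | Format-Table FriendlyName, RdmaCapable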

With Cisco UCS Manager Release 2.2(4), Cisco finally supports RoCE for SMB Direct. UCS Manager sends the additional configuration information to the adapter when you create or modify an Ethernet adapter policy.

Guidelines and Limitations for SMB Direct with RoCE

  • SMB Direct with RoCE is supported only on Windows Server 2012 R2.
  • SMB Direct with RoCE is supported only with Cisco UCS VIC 1340 and 1380 adapters.
  • Cisco UCS Manager does not support more than 4 RoCE-enabled vNICs per adapter.
  • Cisco UCS Manager does not support RoCE with NVGRE, VXLAN, NetFlow, VMQ, or usNIC.
  • You cannot use Windows Server NIC Teaming together with RDMA-enabled adapters in Windows Server 2012 and Windows Server 2012 R2, or you will lose the RDMA capability on these adapters.
  • Maximum number of queue pairs per adapter is 8192.
  • Maximum number of memory regions per adapter is 524288.
  • If you do not disable RoCE before downgrading Cisco UCS Manager from Release 2.2(4), the downgrade will fail.

Check out my post about Hyper-V over SMB:



Cisco UCS C200 M2 with Windows Server 2008 R2 and Windows Server 8 #HyperV

Cisco UCS and Hyper-V Enable Stateless Offloads with NVGRE

As I already mentioned, I have done several Hyper-V and Microsoft Windows Server projects with Cisco UCS. With Cisco UCS you can now configure stateless offloads for NVGRE traffic, which is needed for Hyper-V Network Virtualization.

Cisco UCS Manager supports stateless offloads with NVGRE only with Cisco UCS VIC 1340 and/or Cisco UCS VIC 1380 adapters that are installed in servers running the Windows Server 2012 R2 operating system.

To use this, you have to create an Ethernet adapter policy that enables stateless offloads with NVGRE, and set the following values in the Resources area:

  • Transmit Queues = 1
  • Receive Queues = n (up to 8)
  • Completion Queues = # of Transmit Queues + # of Receive Queues
  • Interrupts = # Completion Queues + 2

And in the Options area, set the following:

  • Network Virtualization using Generic Routing Encapsulation = Enabled
  • Interrupt Mode = Msi-X

Also make sure you have installed eNIC driver version 3.0.0.8 or later.
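
Once the adapter policy is applied and the driver is installed, you can verify from within Windows that the NVGRE task offload is active. A minimal sketch:

    # Check that encapsulated packet task offload (used for NVGRE) is enabled on the vNICs
    Get-NetAdapterEncapsulatedPacketTaskOffload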

For more information, see http://www.cisco.com/c/en/us/td/docs/unified_computing/ucs/sw/vic_drivers/install/Windows/b_Cisco_VIC_Drivers_for_Windows_Installation_Guide.html.




What’s new in Windows Server 2016 Hyper-V

Back in October, Microsoft released the first public Windows Server Technical Preview for the next release of Windows Server. At TechEd Europe, Ben Armstrong, Principal Program Manager on the Hyper-V team at Microsoft, talked about a couple of new features coming in the next release of Windows Server Hyper-V. This week at Microsoft Ignite, Microsoft officially announced Windows Server 2016, System Center 2016 and Hyper-V Server 2016, along with some other great products and releases. This is a quick overview of the new features in Windows Server 2016 Hyper-V; it is still not a complete list of all the features coming in the next version.

Nano Server


Microsoft announced Nano Server, the next cloud platform server, which will allow you to run Hyper-V and will of course also run inside Hyper-V virtual machines.

Windows and Hyper-V Containers

Hyper-V Windows Containers

ReFS Accelerated VHDX Operations

If you create a fixed-size VHDX file in Windows Server 2012 R2 and older, this could take a while, since all the zero blocks had to be written down to the storage. If you had ODX, you could leverage the storage to do this for you. In Windows Server 2016, if you create a fixed-size VHDX on a ReFS volume, you can let ReFS do the work instead, which allows you to create a fixed VHDX almost instantly. This also works with merging VHDX files, which is again great for backup operations and checkpoints.
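
You can see the difference simply by timing the creation of a fixed disk on a ReFS volume. A minimal sketch with a hypothetical path (R: is assumed to be a ReFS volume):

    # On ReFS the zero blocks are not physically written, so this returns almost instantly
    Measure-Command {
        New-VHD -Path "R:\VMs\fixed-disk.vhdx" -Fixed -SizeBytes 100GB
    }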

Nested Virtualization

With Windows Server 2016 Hyper-V, Microsoft allows you to run Hyper-V servers inside Hyper-V virtual machines. This is an absolutely great feature if you want to train people or demo Hyper-V to others. You can check out my blog post about Nested Hyper-V if you want to know more about it.
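
Based on what Microsoft has shown so far, enabling nested virtualization is a per-VM setting. A minimal sketch, assuming a powered-off VM named "HV-Lab" (dynamic memory must be off, and MAC address spoofing is needed if nested VMs should reach the network):

    # Expose the virtualization extensions of the physical CPU to the VM
    Set-VMProcessor -VMName "HV-Lab" -ExposeVirtualizationExtensions $true

    # Allow the nested VMs to talk to the physical network
    Set-VMNetworkAdapter -VMName "HV-Lab" -MacAddressSpoofing On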

New Shared VHDX format


With Windows Server 2012 R2, Microsoft introduced a new feature to Hyper-V called Shared VHDX, which allowed you to attach a virtual disk (VHDX) to multiple VMs to build guest clusters. But Windows Server 2012 R2 was missing some features, such as host-based backup of Shared VHDX files or online resizing of Shared VHDX files. In Windows Server 2016 Hyper-V, Microsoft is working hard to bring the missing pieces together:

  • Host Based Backup of Shared VHDX files
  • Online Resize of Shared VHDX
  • Some usability change in the UI
  • Shared VHDX files are now a new type of VHD called .vhds files (see the sketch below).
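
A minimal sketch of creating and attaching a VHD Set, with hypothetical paths and VM names:

    # Create a new VHD Set file for a guest cluster
    New-VHD -Path "C:\ClusterStorage\Volume1\SQL\Data.vhds" -SizeBytes 100GB -Dynamic

    # Attach it to both guest cluster nodes as a shared drive
    Add-VMHardDiskDrive -VMName "SQL01" -Path "C:\ClusterStorage\Volume1\SQL\Data.vhds" -SupportPersistentReservations
    Add-VMHardDiskDrive -VMName "SQL02" -Path "C:\ClusterStorage\Volume1\SQL\Data.vhds" -SupportPersistentReservations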

Host Resource Protection

This feature, coming from Microsoft Azure, protects your Hyper-V host resources from virtual machines that try to attack the fabric, for example through generating high CPU workloads and other attacks. Host Resource Protection dynamically identifies virtual machines that are not "playing well" and reduces their resource allocation.
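
Host Resource Protection is exposed as a per-VM processor setting. A minimal sketch, assuming a VM named "VM01":

    # Turn on Host Resource Protection for a single virtual machine
    Set-VMProcessor -VMName "VM01" -EnableHostResourceProtection $true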

Virtual TPM

Windows Server 2016 Hyper-V allows you to add a virtual TPM chip to your virtual machine, which allows you to encrypt your VM using Windows Server BitLocker. This will be a great feature to protect virtual machine content, especially from admins and when you host virtual machines in the cloud.
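
Enabling the virtual TPM is a two-step operation in PowerShell. A minimal sketch, assuming a Generation 2 VM named "VM01" in a lab without a Host Guardian Service (a local key protector is not meant for production):

    # Protect the VM with a local key protector (lab only) and enable the virtual TPM
    Set-VMKeyProtector -VMName "VM01" -NewLocalKeyProtector
    Enable-VMTPM -VMName "VM01"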

Linux Secure Boot

Windows Server 2012 R2 introduced Generation 2 virtual machines, which allowed you to use the Secure Boot feature for virtual machines running Windows 8 or Windows Server 2012 and higher. With Windows Server 2016 Hyper-V, Microsoft allows you to use Secure Boot for Linux virtual machines as well.

Shielded VMs


Shielded virtual machines can only run in fabrics that are designated as owners of those virtual machines. Shielded virtual machines need to be encrypted, by BitLocker or other solutions, to ensure that only the designated owners can run them. This requires other parts of the Microsoft Windows Server stack, such as the Host Guardian Service and/or a TPM.

Better Stretched Hyper-V Cluster using built-in Storage Replica

With Windows Server 2016, Microsoft introduces a new feature called Storage Replica, which allows you to replicate storage in Windows Server 2016. This can also be used for file servers or for virtual machine storage. You are now able to replicate CSVs within the same cluster or to another cluster using this Windows Server feature.
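
Setting up a replication partnership is done with the new Storage Replica cmdlets. A minimal sketch with hypothetical server, volume and replication group names:

    # Replicate volume D: from SRV01 to SRV02, using E: as the log volume on both sides
    New-SRPartnership -SourceComputerName "SRV01" -SourceRGName "RG01" `
        -SourceVolumeName "D:" -SourceLogVolumeName "E:" `
        -DestinationComputerName "SRV02" -DestinationRGName "RG02" `
        -DestinationVolumeName "D:" -DestinationLogVolumeName "E:"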

Virtual Machine Storage Resiliency


In Windows Server 2012 R2 and earlier, Microsoft Failover Clustering was more or less designed for application protection and had a fairly basic design for Hyper-V virtual machines, which in most cases works great and does exactly what it should. But in some scenarios, like quick network outages, this could cause more trouble than it should. With Virtual Machine Storage Resiliency in Windows Server 2016 Hyper-V, storage fabric outages no longer mean that virtual machines crash; virtual machines are paused and resumed automatically in response to storage fabric problems. In Windows Server 2012 R2, if a VM lost its storage connection for more than 60 seconds, the VM crashed. In Windows Server 2016 Hyper-V, the virtual machine is paused until the storage comes back. If the storage does not come back within a set time, you can still let it fail over.
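
The pause behavior can be tuned per virtual machine. A minimal sketch, assuming a VM named "VM01" (the timeout is given in minutes):

    # Pause the VM on storage loss and fail it over if the storage is gone for 30 minutes
    Set-VM -Name "VM01" -AutomaticCriticalErrorAction Pause `
        -AutomaticCriticalErrorActionTimeout 30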

Virtual Machine Cluster Resiliency

Just like Storage Resiliency, Cluster Resiliency helps during quick node or network failures. With Virtual Machine Cluster Resiliency in Windows Server 2016 Hyper-V, VMs continue to run even when a node falls out of the cluster membership. This brings resiliency to transient failures, and nodes which repeat this too often are quarantined. So if a node falls out of the cluster, virtual machines running on this node are kept alive until the node rejoins the cluster; if the cluster node does not come back within four minutes, the virtual machines fail over to another Hyper-V node. And if a node goes into quarantine too many times, the failover cluster automatically live migrates the virtual machines off that node when it comes back.
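
The four-minute window and the quarantine behavior are controlled through cluster common properties. A minimal sketch (the values shown are the assumed defaults):

    # How long a node may stay isolated before its VMs fail over (seconds)
    (Get-Cluster).ResiliencyDefaultPeriod = 240

    # How many isolations are tolerated before a node is quarantined
    (Get-Cluster).QuarantineThreshold = 3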

PowerShell Direct


This feature is totally awesome. In Windows Server 2012 R2 Hyper-V, Microsoft allowed you to copy files into virtual machines without network connectivity by using the VMBus. With Windows Server 2016, you can now use PowerShell Direct to run PowerShell commands inside the virtual machine over the VMBus.
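
PowerShell Direct uses the normal remoting cmdlets, just with -VMName instead of -ComputerName, and it needs guest credentials. A minimal sketch, assuming a VM named "VM01":

    $cred = Get-Credential

    # Run a command inside the guest over the VMBus, no network required
    Invoke-Command -VMName "VM01" -Credential $cred -ScriptBlock { Get-Service }

    # Or open an interactive session
    Enter-PSSession -VMName "VM01" -Credential $cred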

Virtual Machine Configuration Changes


In Windows Server 2016 Hyper-V, Microsoft will change the virtual machine configuration files. Today, Hyper-V VM configuration files use the XML file format; you could open the file and check and edit the virtual machine configuration inside it, even though this was never supported. With more and more workloads running virtualized and in a dynamic cloud fashion, scale and performance get even more critical. In the next version of Hyper-V, Microsoft changes the VM configuration from the XML file to a binary file format. The new binary format brings more efficient performance at large scale. Microsoft also includes resilient logging of changes to the configuration files, which should protect virtual machines from corruption.

New file extensions:

  • .VMCX (Virtual Machine Configuration) – replaces the .xml file
  • .VMRS (Virtual Machine Runtime State) – replaces .bin and .vsv file

Production VM Checkpoints (Snapshots)


Virtual machine checkpoints, or in older versions virtual machine snapshots, were a great way to save the state of a virtual machine, make some changes, and simply revert back to the time you took the checkpoint if something failed. This was not really supported in production, since a lot of applications couldn't handle that process. Microsoft has now changed that behavior and fully supports checkpoints in production environments. Production Checkpoints use VSS instead of the saved state to create the checkpoint, which means restoring a checkpoint is just like restoring a system from a backup. For the user everything works as before, and there is no difference in how you take the checkpoint. Production Checkpoints are enabled by default, but you can change back to the old behavior if you need to. Still, using checkpoints brings some other challenges, like the growing .avhdx file, which still apply.
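
The checkpoint type is a per-VM setting. A minimal sketch, assuming a VM named "VM01":

    # Use production checkpoints (VSS-based) for this VM
    Set-VM -Name "VM01" -CheckpointType Production

    # Taking the checkpoint itself is unchanged
    Checkpoint-VM -Name "VM01" -SnapshotName "Before update"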

Hyper-V Replica support for Hot Add of VHDX

Hyper-V Replica was one of the greatest new features in Windows Server 2012 Hyper-V. In Windows Server 2012 and Windows Server 2012 R2 Hyper-V, if you hot-added a VHDX file to a virtual machine, replication failed. In Hyper-V 2016, when you add a new virtual hard disk to a virtual machine that is being replicated, it is automatically added to the not-replicated set. Replication continues to run, and you can then update this set online via PowerShell; the VM will automatically resynchronize and everything works as expected.
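
Updating the replicated disk set online is a one-liner. A minimal sketch that marks all currently attached disks of a hypothetical VM "VM01" for replication:

    # Include all attached virtual hard disks in the replication set
    Set-VMReplication -VMName "VM01" -ReplicatedDisks (Get-VMHardDiskDrive -VMName "VM01")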

Hot add / remove of Virtual Machine Memory


In Windows Server 2012 R2 Hyper-V you could decrease the Minimum Memory and increase the Maximum Memory of a Virtual Machine using Dynamic Memory while the VM was running. In Windows Server 2016 Hyper-V, you can now increase and decrease the Memory assigned to virtual machines while they are running, even if they are using static memory.
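
A minimal sketch, assuming a running VM named "VM01" that uses static memory:

    # Resize the assigned memory while the VM is running
    Set-VMMemory -VMName "VM01" -StartupBytes 8GB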

Hot add / remove of virtual network adapters


This was maybe the feature VMware fanboys all over the world used against Hyper-V. However, I didn't really see a lot of customers doing this, but it is great that you can now hot add and remove network adapters from virtual machines.
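
A minimal sketch, assuming a running Generation 2 VM named "VM01" and a virtual switch named "vSwitch":

    # Hot add a network adapter to the running VM
    Add-VMNetworkAdapter -VMName "VM01" -SwitchName "vSwitch" -Name "Backup"

    # And hot remove it again
    Remove-VMNetworkAdapter -VMName "VM01" -Name "Backup"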

Virtual Network Adapter Identification


For me, this feature is even more important than hot add or remove of virtual network adapters. When dealing with automation, you are always happy when you can identify different network adapters. For the Hyper-V hosts we have solutions such as Consistent Device Naming (CDN), sorting by PCI slot using PowerShell and other options to identify network adapters, but we didn't really have a great solution for virtual machines. With Network Adapter Identification this changes: you can name individual virtual network adapters in the virtual machine settings and see the same name inside the guest virtual machine.
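
A minimal sketch, assuming a VM named "VM01" with an adapter called "Storage" (device naming has to be switched on per adapter):

    # Name the adapter on the host and propagate the name into the guest
    Set-VMNetworkAdapter -VMName "VM01" -Name "Storage" -DeviceNaming On

    # Inside the guest, the name shows up as an advanced property of the adapter
    Get-NetAdapterAdvancedProperty -DisplayName "Hyper-V Network Adapter Name"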

PowerShell on the Hyper-V Host

PowerShell in the guest

Hyper-V Manager Improvements

Finally, this is something which is not a problem in most environments, since we know how things work. But a lot of people who are Hyper-V beginners coming from VMware or other platforms have some simple troubles with Hyper-V Manager. In the next version there are a couple of great improvements which make things a lot easier.

  • Hyper-V Manager now connects via WinRM instead of WMI
  • Support for alternate credentials (requires that CredSSP is enabled on the server and the client; see the sketch after this list)
  • Connect to Hyper-V hosts via IP address
  • Manage Windows Server 2012 Hyper-V, Windows Server 2012 R2 Hyper-V and the next version of Hyper-V from the latest console
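
For the alternate credentials support, CredSSP has to be enabled on both sides. A minimal sketch with a hypothetical host name:

    # On the management client: allow delegating credentials to the Hyper-V host
    Enable-WSManCredSSP -Role Client -DelegateComputer "hv01.contoso.com"

    # On the Hyper-V host: accept delegated credentials
    Enable-WSManCredSSP -Role Server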

Power Management improvements

SleepStudy Report Connected Standby Transitions

Microsoft updated the hypervisor power management model to support new modes of power management, and this is one of the reasons I run the Windows 10 Technical Preview on my Surface Pro 3. The Surface Pro 3 is a device which can use Connected Standby, but if you install Hyper-V on Windows 8.1, Connected Standby stops working. In the next version of Hyper-V, Connected Standby will work.

Rolling Cluster Upgrade


With this new feature you are finally able to upgrade a Hyper-V cluster from Windows Server 2012 R2 Hyper-V to the next version of Hyper-V with no new hardware, no downtime and the ability to roll back safely if needed. In Windows Server 2012 R2, you had to create a new Hyper-V cluster while the old cluster was still running and migrate the virtual machines via the Cluster Migration Wizard or Live Migration. You can now have Windows Server 2012 R2 Hyper-V hosts and the next version of Hyper-V running in the same cluster. To make this scenario possible, the Hyper-V team had to make some changes to the virtual machine upgrade process.
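
The mixed mode ends when you raise the cluster functional level, after which there is no way back to Windows Server 2012 R2 nodes. A minimal sketch:

    # Check the current functional level of the cluster
    Get-Cluster | Select-Object Name, ClusterFunctionalLevel

    # Once all nodes run the new version, commit the upgrade
    Update-ClusterFunctionalLevel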

New Virtual Machine Upgrade Process


To support rolling cluster upgrades, Microsoft had to make some changes to the virtual machine upgrade process. In the current versions of Hyper-V, virtual machines were automatically upgraded from the old to the new version, which meant that once you moved a virtual machine to a new Hyper-V host, you couldn't move it back again. In a mixed cluster environment this does not work. In the next version of Hyper-V, virtual machines will not be upgraded automatically; upgrading a virtual machine is a manual operation that is separate from upgrading the Hyper-V host. This allows you to move virtual machines back to an earlier version of Hyper-V until they have been manually upgraded.
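
A minimal sketch of checking and manually upgrading the configuration version of a hypothetical VM "VM01" (the upgrade is one-way):

    # Show the configuration version of all VMs on the host
    Get-VM | Select-Object Name, Version

    # Upgrade a single VM once it no longer needs to move back to an older host
    Update-VMVersion -Name "VM01"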

New way how VM Drivers (integration services) get updated

Since Windows Server 2012 R2 Hyper-V, VM drivers (integration services) were updated with each new host release, and the VM driver version was required to match the host version. When new Hyper-V integration services shipped, you had to update the Hyper-V host, and from there you could upgrade the VM drivers inside the virtual machine. With Windows Server 2016 Hyper-V, Microsoft delivers VM driver updates over Windows Update. This also means that the VM integration services no longer have to match the host version; you simply need the latest version of the integration services that has been released.
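
You can still check the integration services state from the host. A minimal sketch, assuming a VM named "VM01":

    # Show the integration services version reported by the guest
    Get-VM -Name "VM01" | Select-Object Name, IntegrationServicesVersion

    # Show the state of the individual integration services
    Get-VMIntegrationService -VMName "VM01"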

Secure Boot Support for Linux

Microsoft is pushing hard to bring more and more support for Linux operating systems, such as dynamic memory and other features. With Hyper-V vNext, Microsoft brings Secure Boot support for Linux, which works with Ubuntu 14.04 (and later) and SUSE Linux Enterprise Server 12.

PowerShell to enable Secure Boot Support for Linux:
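
A minimal sketch, assuming a Generation 2 VM named "Ubuntu01":

    # Enable Secure Boot with the UEFI CA template that Linux distributions are signed against
    Set-VMFirmware -VMName "Ubuntu01" -EnableSecureBoot On `
        -SecureBootTemplate "MicrosoftUEFICertificateAuthority"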

Distributed Storage QoS


In Windows Server 2012 R2 Hyper-V we got the possibility to limit the maximum IOPS for an individual virtual hard disk, which was a great feature. Everything worked great when you were running the virtual machine on a single Hyper-V host, but when you were running multiple Hyper-V hosts with multiple virtual machines against the same storage, a Hyper-V host didn't know that it had to compete with other servers for storage IOPS and bandwidth. For example, minimum IOPS settings only worked on standalone Hyper-V servers. With the next release of Hyper-V and Windows Server, Microsoft adds a lot of new capabilities here. Together with the Scale-Out File Server and Storage Spaces, Microsoft now allows you to define IOPS reservations for important virtual hard disks, as well as an IOPS reserve and limit that is shared by a group of virtual machines or virtual hard disks. This intelligence, built by Microsoft Research, enables a couple of interesting scenarios, especially in service provider environments and large-scale enterprises.
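
The policies are created on the storage cluster (the Scale-Out File Server) and then referenced from the virtual hard disks. A minimal sketch with a hypothetical "Gold" policy and VM name:

    # On the storage cluster: create an aggregated policy shared by a group of disks
    $gold = New-StorageQosPolicy -Name "Gold" -PolicyType Aggregated `
        -MinimumIops 200 -MaximumIops 2000

    # On the Hyper-V host: bind the VM's disks to the policy
    Get-VMHardDiskDrive -VMName "VM01" |
        Set-VMHardDiskDrive -QoSPolicyID $gold.PolicyId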

Virtual Machine Compute Resiliency


Microsoft invested heavily in VM resiliency, especially resiliency to hardware failures. One example is the VM Compute Resiliency feature, which allows virtual machines to keep running on a host even if the cluster node is not reachable by the other nodes in the cluster. In Windows Server 2012 R2, if the cluster service couldn't reach a node for 30 seconds, the cluster would fail over all the virtual machines to another node. If the same thing happens in Windows Server vNext Hyper-V, the node goes into isolated mode for the next four minutes (default setting); if the node comes back within those four minutes, all the virtual machines will still be running. If it doesn't come back within four minutes, the VMs fail over to another node. If a node keeps flapping between isolated mode and running, the cluster service sets the node to quarantined and moves all the virtual machines from that node to another node. This should help keep your workloads running even if there are some hardware or network failures.

Evolving Hyper-V Backup

If you are working in IT, you know that backup is always an issue, and things didn't really get easier with virtual machines running on storage systems. With the next release of Hyper-V Server, Microsoft will deliver a completely new architecture to improve the reliability, scale and performance of virtual machine backups. There are three big changes in the backup architecture:

  • Decoupling backing up virtual machines from backing up the underlying storage.
  • No longer dependent on hardware snapshots for core backup functionality, but still able to take advantage of hardware capabilities when they are present.
  • Built-in change tracking for backup of virtual machines

RemoteFX

Microsoft also made some improvements to RemoteFX in Windows Server 2016, which now includes support for the OpenGL 4.4 and OpenCL 1.1 APIs. It also allows you to use larger dedicated VRAM, and VRAM is now finally configurable.
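
A minimal sketch, assuming a VM named "VM01" that already has a RemoteFX 3D video adapter (the exact parameter set may differ between preview builds):

    # Give the RemoteFX adapter 1 GB of dedicated VRAM
    Set-VMRemoteFx3dVideoAdapter -VMName "VM01" -VRAMSizeBytes 1GB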

Hyper-V Cluster Management

This is maybe something you will never use yourself, but there is another great improvement in terms of automation and development. If you have ever used WMI against a Hyper-V cluster, you always had to run it against every Hyper-V host in the cluster to get all the information. In the next version of Hyper-V, you can finally run WMI against the Hyper-V cluster, and it will be handled as if it were a single Hyper-V host, so you get the information from all hosts in the cluster.
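
A minimal sketch, assuming the new cluster-wide namespace (root\HyperVCluster\v2) that mirrors the per-host virtualization namespace:

    # Query all VMs across the whole cluster with a single call
    Get-CimInstance -Namespace "root\HyperVCluster\v2" -ClassName "Msvm_ComputerSystem" |
        Select-Object ElementName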

This was a quick overview of just some of the features and improvements coming in the next release of Windows Server 2016 Hyper-V, which will be released in 2016. There will be much more coming until Microsoft officially releases the next version of Hyper-V, and of course some of the things I wrote about will be improved as well.

If you want to know more about the next version of Hyper-V, check out Ben Armstrong's TechEd Europe session or visit some of our TechNet events.

 




Hyper-V vNext is going to support nested Virtualization

I already wrote a blog post about some of the new features coming with the next version of Hyper-V. This week was the Microsoft Build conference, where Microsoft talked a lot about new stuff for developers and the future of Windows, Azure, Office and so on. Today I found a very interesting email in my inbox from Ronald Beekelaar (Microsoft MVP for Hyper-V), who had the chance to attend a Build session in which Taylor Brown and Mathew John talked about Windows Containers: What, Why and How. In this session there was a quick side note that Windows Server vNext Hyper-V will support nested virtualization.

Until today, a Hyper-V server could only run virtual machines when it was running on physical hardware. This was no problem in production, but when you wanted to do some demos or training, you needed a lot of hardware to show what is possible with Hyper-V. With nested virtualization you can run Hyper-V inside a virtual machine and build, for example, a demo and lab environment on your notebook, creating Hyper-V clusters and so on.

While for some of you this might not be a big deal, it is a huge deal for everyone who does demos or trainings on Hyper-V.

 

You can watch the session here on Microsoft Channel9.



System Center 2012 R2 Update Rollup 6

Summary: Update Rollup 6 for System Center 2012 R2 and Azure Pack now available

Microsoft just released System Center 2012 R2 Update Rollup 6, which includes a lot of new features and fixes. With Update Rollup 6, Microsoft now supports SQL Server 2014 as the database for the System Center and Windows Azure Pack components.

Components that are fixed in this update rollup

  • Data Protection Manager (KB3030574)
    • Option to keep online backup data while deleting a protection group
    • Support for SQL Server 2014 as DPMDB
  • Operations Manager (KB3051169)
  • Service Manager (KB3039363)
  • Service Provider Foundation (KB3050307)
  • Service Reporting (KB3050321)
  • Virtual Machine Manager (KB3050317)
    • Add Azure Subscription feature
    • Improved E2A ASR protection scenario
    • Option to use Generation 2 VMs in Services and VMRoles
    • Total Networking Usage Exposure rules in Management Pack
    • Option to overcommit Cloud and Host Group capacity for Replica VMs
  • Windows Azure Pack (KB3051166)
    • Adds support for Webjobs in Windows Azure Pack Websites.
    • Adds support for Deployment Slots in Windows Azure Pack Websites.
    • Adds support for Virtual Machine Checkpoint.
    • Adds support to maintain Data Consistency between the SQL Resource Provider configured properties for resources with the actual provisioned resources on the SQL Server Hosting server.
    • Compatibility with the next version of Windows Server
    • Fixes several SQL Server Resource Provider issues
  • Windows Azure Pack Web Sites (KB3051142)

I already posted some information on Update Rollup 6 for Windows Azure Pack and System Center Virtual Machine Manager. I also want to highlight a change which was made in several different components of System Center and Windows Azure Pack. With UR6, the Management Pack now lets you track total networking usage. This change introduces two rules that target Hyper-V hosts:

  • Total Incoming VNic Network traffic collection rule
  • Total Outgoing VNic Network traffic collection rule

These rules measure the total incoming and total outgoing traffic in kilobytes per vNIC per virtual machine using the following method:

For each VM:

  1. Enable Hyper-V Metering if it is not enabled.
  2. Run Measure-VM.
  3. Collect metering data for every remote address of "0.0.0.0/0" or "::/0" per vNIC.
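
A minimal sketch of these steps with the underlying Hyper-V cmdlets, assuming a VM named "VM01":

    # Step 1: enable resource metering if it is not already on
    Enable-VMResourceMetering -VMName "VM01"

    # Steps 2 and 3: run Measure-VM and read the per-vNIC network metering data
    (Measure-VM -VMName "VM01").NetworkMeteredTrafficReport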

By default, these rules run every hour. Users may opt to override this setting by overriding the IntervalSeconds property. These rules should not be run more frequently than every five minutes (300 seconds).

Behavior in previous versions: VMM did not measure data consumption. It measured only throughput.



Windows Azure Pack Architecture Overview

What’s new Windows Azure Pack Update Rollup 6

Microsoft just released Update Rollup 6 for Windows Azure Pack on April 28. Microsoft fixed some bugs and also added some highly requested features from User Voice.

  • Tenants can now create a checkpoint of a Virtual Machine and restore it at will when needed.
  • VMM Users can now deploy and manage Generation 2 VMs through VM Roles using WAP and the corresponding UR6 SPF Resource Provider
  • Added support to maintain Data Consistency between the SQL Resource Provider configured properties for resources with the actual provisioned resources on the SQL Server Hosting machine(s).
  • Added support for Webjobs in Windows Azure Pack Websites. This functionality offers creation of Webjobs to be executed manually or continuously in the background.
  • Tenants can now use deployment slots associated to their websites. Web app content and configurations elements can be swapped between two deployment slots, including the production slot.
  • Administrators can take advantage of DSC to deploy the update across a distributed environment.
  • Windows Azure Pack Websites can now take advantage of the HttpPlatformHandler to host Java and other runtimes.
  • Updates to Management Pack
    • Synthetic Transactions
    • Resource Governor Error Monitors
    • Monitor Certificate Validation Disabled
  • High Priority Bug Fixes