Tag: Virtualization


Hyper-V Enhanced Session Mode

10 hidden Hyper-V features you should know about!

Microsoft added some amazing new features and improvements to Hyper-V over the past few years. A lot of them you can use in Windows Server 2016 Hyper-V today, but there are also many features hidden from the user interface, and most of them are also included in Windows 10 Pro and Enterprise. I think this list should give you a good idea about some of them.

Nested Virtualization

Hyper-V Nested Virtualization

Hyper-V Nested Virtualization allows you to run Hyper-V inside a Hyper-V Virtual Machine. This is great for testing, demo and training scenarios, and it works on Windows Server 2016 and Windows 10 Pro and Enterprise. Microsoft Azure will also offer new Virtual Machine sizes which bring the Nested Virtualization feature to the Azure public cloud. Nested Virtualization is not just great if you want to run virtual machines inside a virtual machine; it also lets you run Hyper-V Containers inside a Hyper-V or Azure Virtual Machine, which I think will be the largest use case in the future. Hyper-V Containers are a feature which brings the isolation of a Virtual Machine to a fast, light and small-footprint container. To enable Nested Virtualization you have the following requirements:

  • At least 4 GB RAM available for the virtualized Hyper-V host.
  • To run at least Windows Server 2016 or Windows 10 build 10565 (and higher) on both the physical Hyper-V host and the virtualized host. Running the same build in both the physical and virtualized environments generally improves performance.
  • A processor with Intel VT-x (nested virtualization is available only for Intel processors at this time).
  • Other Hypervisors will not work

To configure the Virtual Machine for Nested Virtualization, follow these steps:

  • Disable Dynamic Memory on the Virtual Machine
  • Enable Virtualization Extensions on the vCPU
  • Enable MAC Address Spoofing
  • Set the memory of the Virtual Machine to a minimum of 4 GB RAM

To enable the Virtualization Extensions on the vCPU, you can run the following PowerShell command:
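A minimal sketch of the full configuration, assuming a VM named "NestedHost" (the name is an example); the Set-VMProcessor line is the one that exposes the virtualization extensions:

    # Dynamic Memory must be off and the VM needs at least 4 GB of startup memory
    Set-VMMemory -VMName "NestedHost" -DynamicMemoryEnabled $false -StartupBytes 4GB

    # Expose the virtualization extensions to the virtual processor (run while the VM is off)
    Set-VMProcessor -VMName "NestedHost" -ExposeVirtualizationExtensions $true

    # Enable MAC address spoofing so the nested VMs can reach the network
    Set-VMNetworkAdapter -VMName "NestedHost" -MacAddressSpoofing On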

PowerShell Direct

PowerShell Direct Enter-PSSession

Hyper-V PowerShell Direct is also one of the great new features in Windows 10 and Windows Server 2016 Hyper-V. PowerShell Direct allows you to connect to a Virtual Machine using PowerShell without connecting over the network. Instead of the network, PowerShell Direct uses the Hyper-V VMBus to connect from the Hyper-V host to the virtual machine. This is handy if you are doing some automation or you don’t have network access to the virtual machine. In terms of security, you will still need to provide credentials to access the virtual machine.

To use PowerShell Direct you have the following requirements:

  • The virtual machine must be running locally on the Hyper-V host and must be started.
  • You must be logged into the host computer as a Hyper-V administrator.
  • You must supply valid user credentials for the virtual machine.
  • The host operating system must run Windows 10, Windows Server 2016, or a higher version.
  • The virtual machine must run Windows 10, Windows Server 2016, or a higher version.

To use PowerShell Direct, just use the Enter-PSSession or Invoke-Command cmdlet with the -VMName, -VMId or -VM parameter.
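A quick sketch, assuming a running VM named "VM01" (the name and the script block are just examples):

    # Interactive session into the VM over the VMBus (prompts for guest credentials)
    Enter-PSSession -VMName "VM01"

    # Run a command inside the VM without opening an interactive session
    Invoke-Command -VMName "VM01" -Credential (Get-Credential) -ScriptBlock { Get-Service }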

Hyper-V Virtual Switch using NAT

Hyper-V Virtual Switch NAT Configuration

If you are running Hyper-V on your workstation or laptop, you know that networking can be a bit of a problem. With the Hyper-V Virtual Switch using NAT, you can now create an internal network for your virtual machines and still allow them, for example, to have internet access, as if your virtual machines were running behind a router. To use this feature you have the following requirements:

  • Windows 10 and Windows Server 2016 build 14295 or later
  • Enabled Hyper-V role

To enable it, first create an internal switch using PowerShell, then set the IP address on the virtual NIC in the management OS, and finally set the NAT configuration:
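A minimal sketch of those three steps; the switch name, NAT name and the 172.21.21.0/24 range are examples:

    # 1. Create an internal virtual switch
    New-VMSwitch -Name "NATSwitch" -SwitchType Internal

    # 2. Assign the gateway IP address to the vNIC the switch created in the management OS
    New-NetIPAddress -IPAddress 172.21.21.1 -PrefixLength 24 -InterfaceAlias "vEthernet (NATSwitch)"

    # 3. Create the NAT network
    New-NetNat -Name "NATNetwork" -InternalIPInterfaceAddressPrefix 172.21.21.0/24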

To create NAT forwarding rules you can for example use the following command:
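For example, a sketch that forwards port 80 on the host to port 80 of a VM at 172.21.21.2 (all values are examples, assuming the NAT network created above):

    Add-NetNatStaticMapping -NatName "NATNetwork" -Protocol TCP -ExternalIPAddress 0.0.0.0 -ExternalPort 80 -InternalIPAddress 172.21.21.2 -InternalPort 80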

Virtual Battery for Virtual Machines

Hyper-V VM battery

With Windows 10 Insider build 16215 and later, and with the release of the Windows 10 Fall Creators Update, Microsoft enabled a Virtual Battery feature for Hyper-V Virtual Machines. This allows Hyper-V VMs to see the battery status of the host. This is great when you are running Hyper-V on a notebook, or if you have a UPS battery on your server.

Hyper-V VMConnect – Enhanced Session Mode

Hyper-V Enhanced Session Mode

Interacting with Virtual Machines can be difficult and time consuming using the default VM console, since you cannot copy and paste or connect devices. VMConnect lets you use a computer's local resources in a virtual machine, like a removable USB flash drive or a printer, and in addition, Enhanced Session Mode lets you resize the VMConnect window and use copy and paste. This makes it almost as if you were using the Remote Desktop Client to connect to the Virtual Machine, but without a network connection; instead it uses the VMBus.

The Enhanced Session Mode feature was introduced with Windows Server 2012 R2 and Windows 8.1. Enhanced session mode basically provides your Virtual Machine Connection with RDP (Remote Desktop Protocol) capabilities over the Hyper-V VMBus, including the following:

  • Display Configuration
  • Audio redirection
  • Printer redirection
  • Full clipboard support (improved over limited prior-generation clipboard support)
  • Smart Card support
  • USB Device redirection
  • Drive redirection
  • Redirection for supported Plug and Play devices

Requirements for the Enhanced Session Mode are:

  • The Hyper-V host must have Enhanced session mode policy and Enhanced session mode settings turned on
  • The computer on which you use VMConnect must run Windows 10, Windows 8.1, Windows Server 2016, or Windows Server 2012 R2 or higher
  • The virtual machine must have Remote Desktop Services enabled, and run Windows 8.1 (or higher) and Windows Server 2012 R2 (or higher) as the guest operating system.

You can simply use it by pressing the enhanced session button (if you meet all the requirements). On the Windows 10 client this is enabled by default on the host. On Windows Server you have to enable it first in Hyper-V Manager under Hyper-V Settings.
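On the server you can also turn on the host-side enhanced session mode policy with PowerShell; a minimal sketch (the per-user "use enhanced session mode" setting stays in Hyper-V Manager):

    # Allow enhanced session mode connections on this Hyper-V host
    Set-VMHost -EnableEnhancedSessionMode $true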

Hyper-V Manager Zoom Level

Hyper-V VMConnect Zoom Level

In the Windows 10 Creators Update, Microsoft introduced a new feature in the VMConnect console. It allows you to control the zoom level of the Virtual Machine console, which is especially handy if you have a high-DPI screen.

Virtual TPM Chip

Hyper-V Virtual TPM

If you are running Windows 10 or Windows Server 2016 or higher, you can make use of a feature called Shielded Virtual Machines. This allows you to protect your virtual machines from being accessed from the outside. With this feature Microsoft added different levels of security enhancements. One of them is the possibility to add a Virtual TPM chip to the virtual machine. With that enabled, you can use BitLocker or another encryption technology to encrypt your virtual machine disks from inside the VM.

Enable Hyper-V vTPM PowerShell

You can enable the Virtual TPM chip using the Hyper-V Manager or PowerShell. The virtual machine needs to be shut down.
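A minimal sketch using PowerShell, assuming a VM named "VM01" and a local key protector (fine for test setups; production scenarios use the Host Guardian Service mentioned below):

    # The VM must be powered off; a key protector is required before the vTPM can be enabled
    Set-VMKeyProtector -VMName "VM01" -NewLocalKeyProtector
    Enable-VMTPM -VMName "VM01"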

Just to be clear: if you really need full protection, have a look at Shielded Virtual Machines with the Host Guardian Service (HGS).

VM Resource Metering

Hyper-V VM Resource Metering

With Windows Server 2012 Hyper-V, Microsoft introduced a feature called VM Resource Metering which allows you to measure the usage of a virtual machine. This lets you track CPU, memory, disk and network usage. This is a great feature, especially if you need to do chargeback or maybe even for troubleshooting.

You can enable VM Resource Metering using PowerShell:
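A sketch for a single VM (the VM name is an example):

    # Start collecting resource usage data for the VM
    Get-VM -Name "VM01" | Enable-VMResourceMetering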

To measure the virtual machine, you can use the following command:
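For example, with the same example VM as above:

    # Report CPU, memory, disk and network usage collected since metering was enabled
    Get-VM -Name "VM01" | Measure-VM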

Export and Share Hyper-V Virtual Machines

Export and Share Hyper-V Virtual Machine

Another feature a lot of people do not know about is that you can export Hyper-V Virtual Machines to copy them to another computer or server. The great thing about this: it can even be done while the virtual machine is running, and you can even export the state of the virtual machine with it. You can use the UI to do this, or you can just run PowerShell using the Export-VM cmdlet.
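A minimal sketch (VM name and target path are examples):

    # Export the VM, including its configuration, disks and saved state, to a folder
    Export-VM -Name "VM01" -Path "D:\Exports"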

In the Windows 10 Fall Creators Update, Microsoft also added a button to share the Virtual Machine. This does not only export the virtual machine, it also creates a compressed VM export file (.vmcz).

Hyper-V Containers

Hyper-V Windows Containers

In Windows 10 and Windows Server 2016 you can run Windows Containers using Docker. While on Windows Server you can choose between running a Windows Container or a Hyper-V Container, on Windows 10 you will always run Hyper-V Containers. Hyper-V Containers and Windows Containers are fully compatible with each other, which means you can start a Windows Container image in a Hyper-V Container runtime and the other way around, but the Hyper-V Container gives you an extra layer of isolation between your containers and your operating system. This does not just make running containers much more secure; since the Windows 10 Fall Creators Update and Windows Server RS3 (Redstone 3), it also allows you to run Linux Containers on a Windows Container host, which will make Windows the best platform to run Windows Containers and Linux Containers side by side.
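As a quick illustration, on Windows Server you can pick Hyper-V isolation per container with Docker's --isolation flag; the image name below is just an example of a Windows base image from that time:

    # Run a Windows Server Core container with Hyper-V isolation instead of process isolation
    docker run --rm --isolation=hyperv microsoft/windowsservercore cmd /c echo Hello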

I hope this short list was helpful and showed you some features you didn't know were there in Hyper-V. Some of these features are still in preview and might not be available in production versions of Hyper-V. Leave your favorite secret Hyper-V features in the comments!



Windows Server Software-Defined Datacenter Solutions

I am sure you have already heard about the great new improvements in Windows Server 2016, which launched almost a year ago. Especially features like Hyper-V, Storage Spaces Direct, Storage Replica and the Software-Defined Networking part got some great updates and new features. Windows Server delivers a great foundation for your Software-Defined Datacenter.

  • Compute – Hyper-V delivers a highly scalable, resilient and secure virtualization platform.
  • Storage – Storage Spaces Direct (S2D), Storage Replica and ReFS file system improvements deliver an affordable, high-performance software-defined storage solution.
  • Network – The new Windows Server Software-Defined Networking v2 stack delivers a high-performance and large-scale networking solution for your datacenter.

However, deploying a Software-Defined Datacenter can be challenging and expensive. The Microsoft Software-Defined Datacenter certification allows you to simplify deployment and operations with certified partner solutions. I have worked on a couple of deployments, and building complex solutions can be expensive and time consuming. The Microsoft Software-Defined Datacenter certification gives you a pre-validated solution, which results in faster deployment times, accelerated time to value, a more reliable solution and optimized performance.

Windows Server Software-Defined Solutions WSSD

Microsoft is working with different partners like DataOn, Dell EMC, Fujitsu, HPE, Lenovo, Quanta (QCT) and SuperMicro to deliver these solutions. Partners offer an array of Windows Server Software-Defined (WSSD) solutions that work with Windows Server 2016 to deliver high-performance storage or hyper-converged infrastructure. Hyper-converged solutions bring together compute, storage, and networking on industry-standard servers and components, which means organizations can gain improved datacenter intelligence and control while avoiding the costs of specialized high-end hardware.

Three types of Windows Server Software-Defined (WSSD) solutions

These partners offer three types of Windows Server Software-Defined (WSSD) solutions:

  • Software Defined Storage (SDS) – Enterprise-grade shared storage solution built on server node clusters replaces traditional SAN/NAS at a much lower cost. Organizations can quickly add storage capacity as needs grow over time. Support for all-flash NVMe drives delivers unrivaled performance.
  • Hyper-Converged Infrastructure (HCI) Standard – Highly virtualized compute and storage are combined in the same server node cluster, making them easier to deploy, manage, and scale. By eliminating traditional IT compute, storage, and networking silos, you can simplify your infrastructure.
  • Hyper-Converged Infrastructure (HCI) Premium – Comprehensive “software-defined datacenter in a box” adds Software-Defined Networking and Security Assurance features to HCI Standard. This makes it easy to scale compute, storage, and networking up and down to meet demand just like public cloud services.

Windows Server Software-Defined solution features comparison

These three types offer different features depending on your needs.

Windows Server Software-Defined Solution

If you are thinking about building your next software-defined datacenter or private cloud, I recommend that you have a look at these solutions. Find a partner at www.microsoft.com/wssd

Download a white paper about Microsoft hyper-converged technologies

Read a datasheet about the Windows Server Software Defined partner program

(Image Credits: www.microsoft.com/wssd)



Install Hyper-V on Windows Server using PowerShell

Install Hyper-V on Windows Server using PowerShell

If you want to install Hyper-V on Windows Server, you can use the PowerShell command shown after the list below to install the Hyper-V role. If you want to run Hyper-V, make sure your server meets the following requirements.

  • 64-bit processor with Second Level Address Translation (SLAT)
  • CPU support for VM Monitor Mode Extension (VT-c on Intel CPUs)
  • Processors with Intel Virtualization Technology (Intel VT) or AMD Virtualization (AMD-V) technology
  • Hardware-enforced Data Execution Prevention (DEP) must be available and enabled (Intel: XD bit (execute disable bit), AMD: NX bit (no execute bit))
  • Minimum of 4 GB memory
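A sketch of the install command; it installs the role plus the management tools and restarts the server:

    Install-WindowsFeature -Name Hyper-V -IncludeManagementTools -Restart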

If you are looking for installing Hyper-V on Windows 10, check the following blog post: Install Hyper-V on Windows 10 using PowerShell

 



Install Hyper-V on Windows 10 using PowerShell

Install Hyper-V on Windows 10 using PowerShell

Since Windows 8, you can run Hyper-V on your desktop, laptop or Windows tablet. To install or enable Hyper-V on your Windows 10 machine, you just need to meet the following requirements:

  • Windows 10 Enterprise, Professional, or Education (Home does not have the Hyper-V feature included)
  • 64-bit Processor with Second Level Address Translation (SLAT)
  • CPU support for VM Monitor Mode Extension (VT-c on Intel CPUs)
  • Minimum of 4 GB memory

The easiest way to enable Hyper-V on Windows 10 is to run the following PowerShell command as an administrator:
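A sketch of that command:

    # Enable the Hyper-V feature and all sub-features (requires an elevated PowerShell and a reboot)
    Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V -All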

or you can use the following CMD DISM command:
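The equivalent DISM command would be along these lines:

    DISM /Online /Enable-Feature /All /FeatureName:Microsoft-Hyper-V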

If you are looking for installing Hyper-V on Windows Server, check the following blog post: Install Hyper-V on Windows Server using PowerShell



Azure Nested Virtualization

How to setup Nested Virtualization in Microsoft Azure

At the Microsoft Build conference this year, Microsoft announced Nested Virtualization for Azure Virtual Machines, and last week Microsoft announced the availability of these Azure VMs, which support Nested Virtualization. Nested Virtualization basically allows you to run a hypervisor inside a Virtual Machine running on a hypervisor, which means you can run Hyper-V within a Hyper-V Virtual Machine or within an Azure Virtual Machine, kind of like Inception for Virtual Machines.

Azure Nested Virtualization

You can use Nested Virtualization since Windows Server 2016 and the corresponding Windows 10 release; for more details, check out my blog post: Nested Virtualization in Windows Server 2016 and Windows 10

With the release of the Azure Dv3 and Ev3 VM sizes:

  • D2-64 v3 instances are the latest generation of General Purpose Instances. D2-64 v3 instances are based on the 2.3 GHz Intel XEON ® E5-2673 v4 (Broadwell) processor and can achieve 3.5GHz with Intel Turbo Boost Technology 2.0. D2-64 v3 instances offer the combination of CPU, memory, and local disk for most production workloads.
  • E2-64 v3 instances are the latest generation of Memory Optimized Instances. E2-64 v3 instances are based on the 2.3 GHz Intel XEON ® E5-2673 v4 (Broadwell) processor and can achieve 3.5GHz with Intel Turbo Boost Technology 2.0. E2-64 v3 instances are ideal for memory-intensive enterprise applications.

With the upgrade to the new Intel Broadwell processors, Microsoft enabled Nested Virtualization, which allows a couple of different scenarios when you create a Virtual Machine running Windows Server 2016.

  • You can run Hyper-V Containers (Windows Containers with additional isolation) inside an Azure VM. With future releases we will also be able to run Linux Containers in Hyper-V Containers running on a Windows Server OS.
  • You can quickly spin up and shut down new demo and test environments, and you only pay when you use them (pay-per-use).

How to Setup Nested Virtualization in Azure

Deploy Azure VM

To set up Nested Virtualization inside an Azure Virtual Machine, you first need to create a new Virtual Machine using one of the new instance sizes like Ev3 or Dv3, running Windows Server 2016. I also recommend installing all the latest Windows Server patches on the system.

Optional: Optimize Azure VM Storage

This step is optional, but if you want better performance and more storage for your nested Virtual Machines to run on, it makes sense.

Azure VM Data Disks

In my case I attached two additional data disks to the Azure VM. Of course you can choose more or different sizes. Now you can see two new data disks inside your Azure Virtual Machine. Do not format them, because we are going to create a new Storage Spaces pool and a simple virtual disk, so we get the performance of both disks at the same time. In the past this was called disk striping.

Azure VM Storage Spaces

With that you can create a new Storage Spaces Storage Pool and a new Virtual Disk inside the VM using the storage layout “Simple” which basically configures it as striping.

Azure VM Storage Spaces PowerShell

I also formatted the disk and set the drive letter to V:, this will be the volume where I will place my nested virtual machines.
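A rough sketch of those steps, assuming the two data disks are the only disks available for pooling; pool, disk and volume names are examples:

    # Create a storage pool from all disks that can be pooled
    New-StoragePool -FriendlyName "VMPool" -StorageSubSystemFriendlyName "Windows Storage*" -PhysicalDisks (Get-PhysicalDisk -CanPool $true)

    # Create a simple (striped) virtual disk across the pooled disks
    New-VirtualDisk -StoragePoolFriendlyName "VMPool" -FriendlyName "VMDisk" -ResiliencySettingName Simple -UseMaximumSize

    # Initialize, partition and format the new disk as volume V:
    Get-Disk | Where-Object PartitionStyle -eq 'RAW' |
        Initialize-Disk -PartitionStyle GPT -PassThru |
        New-Partition -DriveLetter V -UseMaximumSize |
        Format-Volume -FileSystem NTFS -NewFileSystemLabel "VMs"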

Install Hyper-V inside the Azure VM

Install Hyper-V on Windows Server using PowerShell

The next step is to install the Hyper-V role in your Azure Virtual Machine. You can use PowerShell to do this, since this is a regular Windows Server 2016. This command will install Hyper-V and restart the virtual machine.
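For example, the same install command as on a physical server:

    Install-WindowsFeature -Name Hyper-V -IncludeManagementTools -Restart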

Azure VM Hyper-V

After the installation you have Hyper-V installed and enabled inside your Azure Virtual Machine. Now you need to configure the networking for the Hyper-V virtual machines; for this we will use NAT networking.

Configure Networking for the Nested Environment

Hyper-V NAT Network inside Azure VM

To allow the nested virtual machines to access the internet, we need to set up Hyper-V networking in the right way. For this we use the Hyper-V internal VM Switch and NAT networking. I described this here: Set up a Hyper-V Virtual Switch using a NAT Network

Create a new Hyper-V Virtual Switch

First create an internal Hyper-V VM Switch:
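For example (the switch name is an example and is reused below):

    New-VMSwitch -Name "NATSwitch" -SwitchType Internal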

Configure the NAT Gateway IP Address

The internal Hyper-V VM Switch creates a virtual network adapter on the host (the Azure Virtual Machine); this network adapter will be used for the NAT gateway. Configure the NAT gateway IP address using the New-NetIPAddress cmdlet.
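A sketch, matching the 172.21.21.0/24 range used below:

    New-NetIPAddress -IPAddress 172.21.21.1 -PrefixLength 24 -InterfaceAlias "vEthernet (NATSwitch)"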

Configure the NAT rule
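A sketch of the NAT network itself (the NAT name is an example):

    New-NetNat -Name "NATNetwork" -InternalIPInterfaceAddressPrefix 172.21.21.0/24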

After that you have finally created your NAT network, and you can now use it to connect your virtual machines with IP addresses from 172.21.21.2 to 172.21.21.254.

Now you can use these IP Addresses to assign this to the nested virtual machines. You can also setup a DHCP server in one of the nested VMs to assign IP addresses automatically to new VMs.

Optional: Create NAT forwards to Nested Virtual Machines

To forward specific ports from the Host to the guest VMs you can use the following commands.
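A sketch of the first mapping, assuming the NAT network created above:

    Add-NetNatStaticMapping -NatName "NATNetwork" -Protocol TCP -ExternalIPAddress 0.0.0.0 -ExternalPort 80 -InternalIPAddress 172.21.21.2 -InternalPort 80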

This example creates a mapping between port 80 of the host to port 80 of a Virtual Machine with an IP address of 172.21.21.2.
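And a sketch of the second mapping:

    Add-NetNatStaticMapping -NatName "NATNetwork" -Protocol TCP -ExternalIPAddress 0.0.0.0 -ExternalPort 82 -InternalIPAddress 172.21.21.3 -InternalPort 80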

This example creates a mapping between port 82 of the Virtual Machine host to port 80 of a Virtual Machine with an IP address of 172.21.21.3.

Optional: Configure default Virtual Machine path

Since I have created an extra volume for my nested virtual machines, I configure it as the default path for Virtual Machines and Virtual Hard Disks.
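For example (the folder names are examples on the V: volume created earlier):

    Set-VMHost -VirtualMachinePath "V:\VMs" -VirtualHardDiskPath "V:\VHDs"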

Create Nested Virtual Machines inside the Azure VM

Azure Nested Virtualization

Now you can basically start to create Virtual Machines inside the Azure VM. You can for example use an existing VHD/VHDX or create a new VM using an ISO file as you would do on a hardware Hyper-V host.
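A sketch of creating a nested VM attached to the NAT switch; name, memory, disk size and paths are examples:

    New-VM -Name "NestedVM01" -Generation 2 -MemoryStartupBytes 4GB -NewVHDPath "V:\VHDs\NestedVM01.vhdx" -NewVHDSizeBytes 60GB -SwitchName "NATSwitch"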

Some crazy stuff to do

There is a lot more you could do; not all of it makes sense for everyone, but it could help in some cases.

  • Running the Azure Stack Development Kit – Yes, Microsoft released the Azure Stack Development Kit; you could use a large enough Azure virtual machine and run it in there.
  • Configure Hyper-V Replica and replicate Hyper-V VMs to your Azure VM running Hyper-V.
  • Nest a Nested Virtual Machine in an Azure VM – You could enable nesting on a VM running inside the Azure VM, so you could run a VM inside a VM inside a VM. Just follow my blog post to create a nested Virtual Machine: Nested Virtualization in Windows Server 2016 and Windows 10

In my opinion Nested Virtualization is mostly helpful if you run Hyper-V Containers, but it also works great if you want to run some Virtual Machines inside an Azure VM, for example to run a lab or test something.



Hyper-V VM battery

Hyper-V gets Virtual Battery support

Last week Microsoft announced Windows 10 Insider Preview build 16215, which added a lot of new features to Windows 10. With Windows 8, Microsoft brought Hyper-V to the Windows client operating system, and with the Windows 10 Insider Program we can also see some Hyper-V preview features coming to life. Previously we could see features like Nested Virtualization in the Windows client builds before we saw them in the server releases. With Windows 10 Insider Preview build 16215, Hyper-V gets virtual battery support, which means you can now see your machine's battery state in your VMs. This is especially handy if you run Virtual Machines on your notebook. My guess would be that this could also be used on servers for battery support and automatic shutdown.

To enable the feature inside the Virtual Machine you have to create a Prerelease Virtual Machine using PowerShell.

Hyper-V Prerelease Virtual Machine

You can use the following PowerShell command to create a prerelease Virtual Machine. Keep in mind that prerelease virtual machines are not supported in production and may fail across updates.
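If I remember the Insider builds correctly, this was exposed through a -Prerelease switch on New-VM; treat the exact parameter as an assumption, since it only exists on Insider builds:

    # Create a VM with the prerelease configuration version (Insider builds only; the -Prerelease switch is assumed here)
    New-VM -Name "PrereleaseVM" -Prerelease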

You can see that the Virtual Machine now has version number 254.0, which adds some hidden new features like virtual battery support.

Prerelease Virtual Machine Hyper-V Manager

My guess is that this could be available by default in all virtual machines in the final version of the Windows 10 Fall Creators Update.



AzureStack Admin Portal

Microsoft Azure Stack – Azure Extension in your Datacenter

A couple of weeks ago, I had the chance to attend the Microsoft Azure Certified for Hybrid Cloud Airlift in Bellevue WA, which is close to the Microsoft campus in Redmond. I had the chance to spend the week there and talk with the Microsoft PG about different Azure Stack scenarios. Most of the discussions and presentations are under NDA, but there are a few things I can share, since they are publicly announced. I prepared this blog post already a couple of months ago, when I was talking to a lot of different customers about Azure Stack, and since then Microsoft also shared some new information about the release of Azure Stack Technical Preview 3.

The Azure Stack Announcement

Azure vs Azure Stack

Microsoft announced Azure Stack at Microsoft Ignite in May 2015. At that time Microsoft only talked about the vision of Azure Stack and that it would bring cloud consistency between the Microsoft Azure public cloud and your private cloud. But Microsoft did not really announce exactly what Azure Stack would be and how it would be implemented in your datacenter.

During the Microsoft World Wide Partner Conference (WPC 2016), Microsoft announced more information about the availability of Azure Stack. For more information, you can read the Microsoft blog posts, but I tried to summarize the most important parts.

Building a true Hybrid Cloud and Consistency with Microsoft Azure

Azure Stack

This is probably the most important part about Azure Stack today. Microsoft Azure Stack will bring Azure consistency between the Microsoft Azure public cloud and your private cloud or your hoster's service provider cloud using the Azure Resource Manager. So you will not only be able to operate an Azure-like environment, like you could with Windows Azure Pack and System Center, you now get real consistency between Azure and Azure Stack. You not only get the exact look and feel of the Microsoft Azure public cloud, you can also use the same Azure Resource Manager templates and deployment methods as you can in the public cloud. This allows customers to really operate in a hybrid cloud environment, between the Microsoft public cloud, their own private cloud and local service provider clouds.

Bring the agility and fast-paced innovation of cloud computing to your on-premises environment with Azure Stack. This extension of Azure allows you to modernize your applications across hybrid cloud environments, balancing flexibility and control. Plus, developers can build applications using a consistent set of Azure services and DevOps processes and tools, then collaborate with operations to deploy to the location that best meets your business, technical, and regulatory requirements. Pre-built solutions from the Azure Marketplace, including open source tools and technologies, allow developers to speed up new cloud application development.

The Integrated System Approach

Azure Stack Integrated System

(picture by Microsoft)

Microsoft announced that Azure Stack will be available as an appliance from different hardware vendors in mid-2017. The confirmed hardware providers delivering an Azure Stack appliance at this point in time are Dell EMC, HPE and Lenovo, and later in 2017 we will also see appliances from Cisco, Huawei and Avanade.

The big difference here is that Microsoft delivers the Azure Stack platform as an appliance first, which is really different from the way they delivered Windows Azure Pack. Windows Azure Pack was based on System Center and Windows Server, and every customer could design their own environment based on their needs.

This was great, but it also brought some huge challenges for customers. Clouds needed different designs, which ended up in very complex design workshops where we basically discussed the customer solutions. The installation and configuration of a Windows Azure Pack platform was also very complex and a lot of work, which needed a lot of resources, knowledge and of course a lot of project costs. Before customers could start saving money, they had to invest money to get things up and running. Of course, system integrators like itnetX and others built automation to spin up clouds based on Windows Azure Pack, but the investment still needed to be made.

The appliance approach not only helps to spin up clouds faster, but also builds environments on tested hardware, firmware and drivers. Another point which makes a great case for an appliance solution is management and operations. Managing and operating a cloud-like environment is not easy, no matter what software you are using. Keeping the platform stable, maintained and operational ends up in a lot of work, especially if every cloud looks different. The last thing I want to mention here is upgrading: if you want real Azure consistency, you need to keep up with the ultra-fast pace of the Azure public cloud, which is basically impossible or extremely expensive on your own. An integrated system can really help you keep things up to date, since updates and upgrades can be pre-tested before they are released for you to deploy. This saves a huge amount of testing, since every environment looks the same.

Operating Azure Stack

Azure Stack Administration and Operation

As already mentioned, Azure Stack will be delivered as an integrated system. OEMs will help you set up and install your Azure Stack appliance in your datacenter, but they will not fully manage the Azure Stack environment. You will need to have a cloud operator managing and operating your Azure Stack. All the hosts will be sealed, and administrators do not have access to the hosts, Hyper-V Manager or Failover Cluster Manager to manage the systems. Instead, administrators or cloud operators will manage the system through a management portal.

Azure Stack Platform

Since this is an integrated system, you don't even need to care what is running in the background. But for a lot of us it is still very interesting to see how Azure Stack is built. In the back, Azure Stack runs on "common" rack-mount servers from HPE, Dell, Lenovo and Cisco; for HPE this is the DL380 Gen9. On the software side it runs Windows Server 2016 and the Software-Defined Datacenter features such as Storage Spaces Direct, the new Windows Server 2016 Software-Defined Networking stack and Hyper-V. In the release version of Azure Stack we will see a hyper-converged Storage Spaces Direct architecture starting from 4 nodes. On top of this, Microsoft used code from Azure to bring the Azure Resource Manager, Azure Resource Providers and the Azure Portal to Azure Stack.

Azure Stack POC – Microsoft Azure Stack Development Kit

Azure Stack Development Kit

Very early in the development process of Azure Stack, Microsoft released Technical Previews to customers, so they could test Azure Stack in one-node deployments. This is called the Azure Stack POC; you can download it today and run it on a single physical server, and it is only designed for non-production, non-HA environments. Microsoft officially announced that they will rename the Azure Stack POC to Azure Stack Development Kit after the general availability of Azure Stack in mid-2017. This is really a great solution to quickly spin up a test environment of Azure Stack without having to invest in hardware.

Azure Marketplace Syndication

Azure Stack Marketplace Syndication

You will be able to create your own Marketplace items in Azure Stack, building your own templates and images and offering them to your customers. One of the greatest additions Microsoft made in Azure Stack Technical Preview 3 is Azure Marketplace syndication. This allows you to take Marketplace items from Azure and offer them in your Azure Stack to your customers. With that, you don't need to build all Marketplace items yourself.

Azure Stack Identity Management

Azure Stack has to be integrated into your datacenter. In terms of identity, Microsoft allows you to integrate in two ways. The first, and from my side the preferred option, is Azure AD (AAD), which allows you to integrate with an existing Azure Active Directory. Azure AD can be synced and connected with your on-premises Active Directory, and this allows you to log in to Azure as well as Azure Stack. The other option Microsoft is offering is using ADFS to bring identities to your Azure Stack.

The Azure Stack Business Cases

Since Azure Stack is consistent with Microsoft Azure, the question comes up: why not just use Azure? There are many good reasons to use Azure, but there are also some challenges with that. Azure Stack can make sense in a couple of scenarios.

  • Data Sovereignty – In some cases data cannot be stored outside of a specific country. With Azure Stack, customers have the option to deploy in their own datacenter or at a service provider within the same country.
  • Latency – Even though Microsoft offers a solution to reduce network latency to Azure with Azure ExpressRoute, in some scenarios latency is still a big issue. With Azure Stack, customers can place Azure very close to the location where resources are accessed from.
  • Disconnected Scenarios – In some scenarios you really want to benefit from the consistent deployment model, and for example use Azure Resource Manager (ARM), but you do not have access to Azure everywhere on earth, or sometimes you have a very bad connection. Think about cruise ships or other scenarios where you need to run IT infrastructure but are not able to connect to Azure.
  • Private Instance of Azure – For some companies shared infrastructure can be challenging; even though security standards in Azure are extremely high, it is not always an option. With Azure Stack, companies can basically spin up their completely own instance of Azure.
  • Differentiation – Service providers or even enterprise companies can not only use the Azure Marketplace, they can also build their own solutions for Azure Stack and make them available to their customers.

Pricing and Licensing

As mentioned, Microsoft will offer Azure Stack from five different OEMs. HPE, Dell and Lenovo will deliver a solution at Azure Stack GA in mid-CY17; Cisco and Huawei will be available later. The hardware needs to be bought directly from the OEM or a partner. Some of them also offer a flexible investment model like HPE Flexible Capacity. For the pricing model of the Azure Stack software, Microsoft decided to license Azure Stack on a pay-per-use basis. This of course matches the cloud economics, and there will be no upfront licensing costs for customers. Services will typically be metered on the same units as Azure, but prices will be lower, since customers operate their own hardware and facilities. For scenarios where customers are unable to have their metering information sent to Azure, Microsoft will also offer a fixed-price "capacity model" based on the number of cores in the system.


The Azure Stack pricing models

Azure Stack will be offered in two different models, Pay-as-you-use model and Capacity model. The pay-as-you-use model is licensed by Microsoft via the Enterprise Agreement (EA) or Cloud Service Provider (CSP) programs. The capacity model is available via EA only. It is purchased as an Azure Plan SKU via normal volume licensing channels. For typical use cases, Microsoft expects the pay-as-you-use model to be the “most economical” option.

Azure Stack Pay-as-you-use model

With the pay-as-you-use model you can take advantage of the cloud economics and only pay for resources which are actually consumed, plus additional costs for the Azure Stack hardware and the operations.

Service prices:

  • Base virtual machine $0.008/vCPU/hour ($6/vCPU/month)
  • Windows Server virtual machine $0.046/vCPU/hour ($34/vCPU/month)
  • Azure Blob Storage $0.006/GB/month (no transaction fee)
  • Azure Table and Queue Storage $0.018/GB/month (no transaction fee)
  • Azure App Service (Web Apps, Mobile Apps, API Apps, Functions) $0.056/vCPU/hour ($42/vCPU/month)

Azure Stack Capacity model

For the capacity model, two packages are available, which let you license the physical cores of your Azure Stack system via an annual subscription. The packages are only available via the Enterprise Agreement (EA).

  • App Service package ($400/core/year)
    Includes App Service, base virtual machines and Azure Storage
  • IaaS package ($144/core/year)
    Includes base virtual machines and Azure Storage

You will also need additional licenses if you deploy Windows Server and SQL Server virtual machines, like you would do if you are using your traditional Hyper-V servers.

What else will you need

  • Integrated System (hardware) – you will need to purchase the Azure Stack hardware from one of the OEM vendors
  • Support – you will need to purchase support from Microsoft for software support and a support package for the hardware from the hardware provider. If you already have Premier, Azure, or Partner support with Microsoft, your Azure Stack software support is included.
  • Service Providers – Service providers can also license Azure Stack to others using the CSP (Cloud Solution Provider) channel.

Azure Stack Roadmap

At the Azure Stack GA release this summer, Microsoft will deliver Azure Stack hardware from HPE, Dell and Lenovo. Later in 2017, Microsoft will also deliver Azure Stack on Cisco, Huawei and Avanade hardware. Azure Stack at GA will support 4-12 nodes, a single scale unit and a single region.

Microsoft will also deliver some of the services at General Availability on Azure Stack, and will add more and more services over time. At GA we will see:

  • Virtual Machines
  • Storage (Blob, Table and Queue)
  • Networking (Virtual Networks, S2S VPN, …)
  • App Service (in Preview)
  • SQL (in Preview)
  • MySQL (in Preview)

After GA, Microsoft will continuously deliver additional capabilities through frequent updates. The first round of updates after GA is focused on two areas: 1) enhanced application modernization scenarios and 2) enhanced system management and scale. These updates will continue to expand customer choice of IaaS and PaaS technologies when developing applications, as well as improve manageability and grow the footprint of Azure Stack to accommodate growing portfolios of applications. Keep in mind that this will not just be a product you purchase; think about it as a service which will add features and functionality over time.

The choice for your datacenter

Windows Azure Pack

Obviously, Microsoft is pushing Azure Stack since it brings consistency with the Azure public cloud, which means your company and people need to understand the advantages of using methods like DevOps and Infrastructure as Code. This will help you make the most out of Azure Stack and the Azure Resource Manager. If you already have Microsoft Azure know-how, this is great, because it will also apply to Azure Stack.

No worries if you are not there yet, or if for some reason this doesn't make sense to you: Microsoft still has a great solution to build traditional virtualization platforms together with automation using System Center, Windows Server and, if needed, Windows Azure Pack. Both solutions, System Center and Windows Azure Pack, will be supported in the future and will get updates.