
Hyper-V over SMB

What is Hyper-V over SMB?

With the release of Windows Server 2012, Microsoft offers a new way to store Hyper-V virtual machines on shared storage. In Windows Server 2008 and Windows Server 2008 R2 Hyper-V, Microsoft only offered block-based shared storage such as Fibre Channel or iSCSI. With Windows Server 2012 Hyper-V, Microsoft allows you to run Hyper-V virtual machines from file-based storage via the new SMB 3.0 protocol. This means Hyper-V over SMB allows you to store virtual machines on an SMB file share. In the past years I have done a lot of Hyper-V implementations with iSCSI or Fibre Channel storage, and I am really happy with the new possibilities SMB 3.0 offers.

The common problem with block storage is that the Hyper-V host has to handle the storage connection. That means if you use iSCSI or Fibre Channel, you have to configure the connection to the storage on the Hyper-V host, for example multipathing, the iSCSI initiator or DSM software. With Hyper-V over SMB you don't have to configure anything special, because SMB 3.0 is built into Windows and supporting features like SMB Multichannel are activated and used by default. Of course you still have to make some design considerations, but this is much less complex than an iSCSI or Fibre Channel implementation.
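To give you an idea of how simple the file-based approach is, here is a minimal PowerShell sketch of creating an SMB share for virtual machines and placing a new VM on it. The server, share and account names are just examples, and the NTFS permissions on the folder still have to grant the Hyper-V host computer accounts access.

    # On the file server: create a folder and share it for the Hyper-V hosts (example names)
    New-Item -Path "C:\Shares\VMs" -ItemType Directory
    New-SmbShare -Name "VMs" -Path "C:\Shares\VMs" -FullAccess 'CONTOSO\HV01$', 'CONTOSO\HV02$'

    # On the Hyper-V host: store a new virtual machine directly on the SMB share
    New-VM -Name "TestVM" -MemoryStartupBytes 2GB -Path "\\FS01\VMs"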

How did they make it work?

The first thing that was important was speed. SMB 3.0 offers a huge performance increase over the SMB 2.x protocol, and you have to think about it in a completely different way. There are also a lot of other features like SMB Direct (RDMA), SMB Multichannel or Transparent Failover and many more which help in terms of performance, security and availability, but more on these supporting features in the next post.
Hyper-V over SMB Multichannel
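If you want to verify that SMB Multichannel is actually in use between a Hyper-V host and the file server, a quick check could look like this (these cmdlets ship with Windows Server 2012):

    # On the Hyper-V host: show the SMB connections and the channels they use
    Get-SmbConnection
    Get-SmbMultichannelConnection

    # On the file server: show which network interfaces SMB can use (including RSS/RDMA capability)
    Get-SmbServerNetworkInterface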

Why Hyper-V over SMB?

Well, I already mentioned a lot of reasons why you should use Hyper-V over SMB, but if you think about it, there are three main reasons why you should use it.

Costs – Windows Server 2012 Hyper-V allows you to build clusters with up to 64 nodes, and if you build a cluster of this size with Fibre Channel storage, it will be quite an investment in terms of Fibre Channel hardware such as HBAs, switches and cables. By using Hyper-V over SMB you can reduce infrastructure costs dramatically. Sure, maybe you have already invested in a Fibre Channel storage and a Fibre Channel infrastructure, and you don't have to change that. For example, if you have 100 Hyper-V hosts, you may have about 200 HBAs and you also need Fibre Channel switches. What you could do with Hyper-V over SMB is create a Scale-Out File Server cluster with 8 nodes which are attached to the Fibre Channel storage and present the storage to the Hyper-V hosts via an SMB file share. This would save you a lot of money.

Flexibility – Another point which I already mentioned is flexibility. By using Hyper-V over SMB you remove the storage dependency from the Hyper-V host and move the storage configuration to the virtual machine level. In this case you don't have to configure zoning or iSCSI initiators, which makes life for virtualization administrators much easier. Here are two examples of how IT teams can reduce complexity by using Hyper-V over SMB. First, in small IT departments you may not have a dedicated storage team, and if you have to add a new Hyper-V host or reconfigure your storage, this can be a lot of difficult work for people who don't have much experience with the storage. In an enterprise scenario you may have a dedicated storage team and a dedicated virtualization team, and in most cases they have to work very closely together. For example, if the virtualization team adds another Hyper-V host, the storage team has to configure the storage for that host on the storage side. If the storage team makes changes to the storage, the virtualization team eventually has to make changes to the Hyper-V hosts. These dependencies can be reduced by adding a layer between the storage and the hypervisors, and in this case that layer could be a Scale-Out File Server.

Technology – The third point on my list is technology. Microsoft does not really mention this point, but since I have worked with different options like iSCSI, Fibre Channel and SMB, I am a huge fan of SMB 3.0. Fibre Channel is a great but expensive technology, and people who have worked with iSCSI know that there can be a lot of issues in terms of performance. SMB 3.0 has some great supporting features which help you increase performance: SMB Direct (RDMA), a technology which can increase networking performance by multiple times, and SMB Multichannel, which allows you to use multiple network adapters for failover and load balancing, work very well and let you make the most out of your hardware. Another aspect is security: if you think about encrypting iSCSI networks via IPsec, you know that this can be complex, while SMB Encryption offers a very easy solution for the SMB scenario.
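As a quick illustration, enabling SMB Encryption is a single property on the share, shown here for a hypothetical share named "VMs":

    # Enable SMB Encryption for an existing share (example share name)
    Set-SmbShare -Name "VMs" -EncryptData $true -Force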

I hope I could give you a quick introduction to Hyper-V over SMB and why it's a good idea to consider it in your deployment plans. In the next post I will quickly summarize the supporting features of SMB 3.0.



How to build an iSCSI Target Cluster on Windows Server 2012


In Windows Server 2012 Microsoft introduced the new iSCSI Target Server, which is now built into Windows Server 2012 and allows you to connect to storage presented by your Windows Server.

There are a lot of new ways to present storage to your servers, especially for Hyper-V. With Windows Server 2012 Hyper-V you can use block storage like iSCSI or Fibre Channel, or the newly introduced SMB 3.0 file storage, as the shared storage for your Hyper-V clusters. Now, I am a huge fan of the new SMB 3.0 solution, which allows you to place Hyper-V virtual machines on an SMB file share, but there may be other applications and scenarios where you need to present storage via iSCSI.

The new iSCSI Target which is built into Windows Server 2012 is pretty cool. If you are interested in using the Windows Server 2012 iSCSI Target on a stand-alone host in your lab, you should check out my blog post: Create a Windows Server 2012 iSCSI Target Server

However, if you run the iSCSI Target in a production environment, a stand-alone server is a single point of failure, and in this case you should cluster your iSCSI Target. Building an iSCSI Target Cluster is pretty simple: first install the required roles on both cluster nodes, then create a new Failover Cluster as you would for Hyper-V or other applications.
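As a rough sketch, the role installation and cluster creation could look like this in PowerShell (node names, cluster name and IP address are examples):

    # Run on both nodes: install the iSCSI Target Server role and Failover Clustering
    Invoke-Command -ComputerName "Node1", "Node2" -ScriptBlock {
        Install-WindowsFeature FS-iSCSITarget-Server, Failover-Clustering -IncludeManagementTools
    }

    # Validate the configuration and create the cluster
    Test-Cluster -Node "Node1", "Node2"
    New-Cluster -Name "ISCSICL01" -Node "Node1", "Node2" -StaticAddress 192.168.1.50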

Once your cluster is up and running, you can add the iSCSI Target Server role.

iscsi Target Cluster 01

Set up the iSCSI Target Server with an IP address and a name.

iscsi Target Cluster 02

Choose the cluster storage which should be used for your iSCSI Target. Later you will create VHDs on this shared cluster disk.

iscsi Target Cluster 03

After you have checked the summary, the iSCSI Target Server role will be created.

iscsi Target Cluster 05

After the iSCSI Target Server role has been created, the storage you added to the iSCSI Target will be assigned to it.

iscsi Target Cluster Storage

The iSCSI Target Server resource will now be online. It is also highly recommended that you use multiple NICs for your hosts and use MPIO on the machines which will connect to your iSCSI Target.

iscsi Target Cluster iSCSI Target Server Role
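On the machines which connect to the clustered target, MPIO could be enabled roughly like this (a sketch, assuming the built-in Microsoft DSM should claim the iSCSI devices):

    # Install the Multipath I/O feature and let the Microsoft DSM claim iSCSI devices
    Install-WindowsFeature Multipath-IO
    Enable-MSDSMAutomaticClaim -BusType iSCSI
    # A reboot may be required before MPIO claims the iSCSI paths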

The iSCSI Targets now have to be created back in Server Manager. Connect to the cluster node the iSCSI Target Server role is running on.

iSCSI Virtual Disk Server Manager

Select the location where your iSCSI Virtual Disk should be placed. The wizard will automatically detect the cluster role, in my case "ISCSI02", and the volume which is attached to this role, in my case volume E:.

iSCSI Virtual Disk 02

After this is done, you have to enter a name for the iSCSI Virtual Disk, and if you don't already have one, you have to create an iSCSI Target.

You can connect multiple disks to an iSCSI Target and you can create multiple iSCSI Targets on your iSCSI Target Server, and maybe you will even create multiple iSCSI Target Server roles on your cluster, so you can create a "static" load balancing where Target Server 1 is running on the first host and Target Server 2 on the second host.
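If you prefer PowerShell over Server Manager for this part, a minimal sketch could look like this; paths, names and the initiator IQN are examples, and the commands are run on the node which owns the iSCSI Target Server role:

    # Create an iSCSI virtual disk (VHD) on the clustered volume
    New-IscsiVirtualDisk -Path "E:\iSCSIVirtualDisks\LUN01.vhd" -SizeBytes 100GB

    # Create an iSCSI target and allow a specific initiator to connect
    New-IscsiServerTarget -TargetName "HyperVCluster" -InitiatorIds "IQN:iqn.1991-05.com.microsoft:hv01.contoso.com"

    # Assign the virtual disk to the target
    Add-IscsiVirtualDiskTargetMapping -TargetName "HyperVCluster" -Path "E:\iSCSIVirtualDisks\LUN01.vhd"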

 



Windows Server 2012 Hyper-V Converged Fabric


In Windows Server 2008 R2 we had some really simple configurations and best practices for Hyper-V network configurations. The problem was that these configurations were not very flexible. This had two main reasons: first, NIC teaming wasn't officially supported by Microsoft, and second, there was no way to create virtual network interfaces without a third-party solution.

Here is an example of a Hyper-V 2008 R2 host design which was used in a cluster setup.

Traditional Design

traditional Hyper-V Host

Each dedicated Hyper-V network, such as CSV/cluster communication or the Live Migration network, used its own physical network interface. The different network interfaces could also be teamed with third-party software from HP, Broadcom or Intel. This design is still a good design in Windows Server 2012, but there are other configurations which are a lot more flexible.

Microsoft MVPs Aidan Finn and Hans Vredevoort have already done some early work with Windows Server 2012 Converged Fabric, and you should definitely read their blog posts.

In Windows Server 2012 you can get much more out of your network configuration. First of all, NIC Teaming is now integrated and supported in Windows Server 2012, and another cool feature is the use of virtual network adapters in the Management OS (Host OS or Parent Partition). This allows you to create, for example, one of the following designs.

Virtual Switch and Dedicated Management Interfaces

Hyper-V Converged Fabric

This scenario has two teamed 10GbE adapters for cluster and VM traffic.

Virtual Switch and Dedicated Teamed Management Interfaces

Hyper-V Converged Fabric

The same scenario with a teamed management interface.

Dedicated Virtual Switch for Management and VM Traffic

Hyper-V Converged Fabric

One Virtual Switch for Management and Cluster traffic and a dedicated switch for VM traffic.

One Virtual Switch for everything

Hyper-V Converged Fabric

This is my favorite design at the moment: two 10GbE adapters as one team for virtual machine, cluster and management traffic. It is a very flexible design and allows the two 10GbE adapters to be used very dynamically.

These designs will also be very interesting if you use SMB 3.0 as storage for Hyper-V virtual machines.

FileServer and Hyper-V Cluster

 

At the moment there is not a lot of official information about which designs will be supported and which will not. You can find some information about supported designs in the TechEd North America session WSV329 Architecting Private Clouds Using Windows Server 2012 by Yigal Edery and Joshua Adams.

Configuration

Now, after you have seen these designs, you may want to create such a configuration and want to know how you can do this. Not everything can be done via the GUI, so you have to use your Windows PowerShell skills. In this scenario I use the design with four 10GbE network adapters: two for iSCSI and two for my network connections.

  • Install the Hyper-V Role
  • Create NIC Teams
  • Create a Hyper-V Virtual Switch
  • Add new Virtual Network Adapters to the Management OS
  • Set VLANs of the Virtual Network Adapters
  • Set QoS Policies of the Virtual Network Adapters
  • Configure IP Addresses of the Virtual Network Adapters

Install Hyper-V Role

Before you can use the features of the Virtual Switch and start creating virtual network adapters in the Management OS (Parent Partition), you have to install the Hyper-V role. You can do this via Server Manager or via Windows PowerShell.
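Via Windows PowerShell the role installation is a one-liner:

    # Install the Hyper-V role including the management tools and reboot
    Install-WindowsFeature Hyper-V -IncludeManagementTools -Restart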

Create NIC Teams

Most of the time you will create a NIC team for fault tolerance and load balancing. A team can be created via Server Manager or PowerShell; of course I prefer Windows PowerShell. For a team which will not only be used for Hyper-V virtual machines but also for Management OS traffic, I use TransportPorts as the load balancing algorithm. If you use the team only for virtual machine traffic, there is an algorithm called Hyper-V Port. The teaming mode of course depends on your configuration.

NIC Teaming
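A sketch of the team creation, assuming two physical adapters named "NIC1" and "NIC2":

    # Create a switch-independent team for VM and Management OS traffic
    New-NetLbfoTeam -Name "Team01" -TeamMembers "NIC1", "NIC2" -TeamingMode SwitchIndependent -LoadBalancingAlgorithm TransportPorts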

 

Create the Virtual Switch

After the team is created, you have to create a new Virtual Switch on top of it. We also set the DefaultFlowMinimumBandwidthWeight to 20.

VM Switch
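A sketch of the switch creation on top of the team; the minimum bandwidth mode is set to Weight so the QoS weights used later can take effect:

    # Create the virtual switch on the team and reserve a 20 percent weight for the default flow
    New-VMSwitch -Name "VMSwitch01" -NetAdapterName "Team01" -MinimumBandwidthMode Weight -AllowManagementOS $false
    Set-VMSwitch -Name "VMSwitch01" -DefaultFlowMinimumBandwidthWeight 20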

 

After you have created the Hyper-V Virtual Switch (VM Switch), you will also find this switch in Hyper-V Manager.

Hyper-V Virtual Switch

 Create Virtual Network Adapters for the Management OS

After you have created your Hyper-V Virtual Switch, you can start adding VM network adapters to it. We also configure the VLAN IDs and the QoS policy settings.

VMNetworkAdapter ManagementOS
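A sketch of adding the Management OS adapters; the adapter names, VLAN IDs and bandwidth weights are example values:

    # Add virtual network adapters to the Management OS
    Add-VMNetworkAdapter -ManagementOS -Name "Management" -SwitchName "VMSwitch01"
    Add-VMNetworkAdapter -ManagementOS -Name "LiveMigration" -SwitchName "VMSwitch01"
    Add-VMNetworkAdapter -ManagementOS -Name "Cluster" -SwitchName "VMSwitch01"

    # Set the VLAN IDs
    Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "Management" -Access -VlanId 10
    Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "LiveMigration" -Access -VlanId 20
    Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "Cluster" -Access -VlanId 30

    # Set the QoS minimum bandwidth weights
    Set-VMNetworkAdapter -ManagementOS -Name "Management" -MinimumBandwidthWeight 10
    Set-VMNetworkAdapter -ManagementOS -Name "LiveMigration" -MinimumBandwidthWeight 30
    Set-VMNetworkAdapter -ManagementOS -Name "Cluster" -MinimumBandwidthWeight 10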

 

Your new configuration will now look like this:

Network Connections

As you can see, the name of the new Hyper-V Virtual Ethernet Adapter is vEthernet (NetworkAdapterName). This will be important for automation tasks or for configuring IP addresses via Windows PowerShell.

Set IP Addresses

Some months ago I wrote two blog posts: the first was how to configure your Hyper-V host network adapters like a boss, and the second one was how to replace the netsh command with Windows PowerShell. Using Windows PowerShell to configure IP addresses will save you a lot of time.
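A sketch for the vEthernet adapters created above; the IP addresses and DNS server are examples:

    # Configure IP settings on the Management OS virtual adapters
    New-NetIPAddress -InterfaceAlias "vEthernet (Management)" -IPAddress 192.168.10.11 -PrefixLength 24 -DefaultGateway 192.168.10.1
    Set-DnsClientServerAddress -InterfaceAlias "vEthernet (Management)" -ServerAddresses 192.168.10.2

    New-NetIPAddress -InterfaceAlias "vEthernet (LiveMigration)" -IPAddress 192.168.20.11 -PrefixLength 24
    New-NetIPAddress -InterfaceAlias "vEthernet (Cluster)" -IPAddress 192.168.30.11 -PrefixLength 24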

 

There is still a lot more to come about Windows Server 2012 Hyper-V Converged Fabric in the future, but I hope this post gives you a quick insight into some of the new features of Windows Server 2012 and Hyper-V.



Create a Windows Server 2012 iSCSI Target Server


In my lab I don't have good storage which I can use for my Hyper-V clusters. But with Windows Server 2012, Microsoft added a lot of new storage features and included an iSCSI Target Server. Together with the new Storage Pools / Storage Spaces feature, this allows me to use a Windows Server as a great storage replacement.

This offers features like:

  • Thin provisioning
  • Data Deduplication
  • Disk aggregation
  • Storage Spaces
  • and a lot more

Overview

  • We will aggregate physical disks into a Storage Pool
  • On this Storage Pool we will create a Virtual Disk. Here we have the option to use Data Deduplication, Thin Provisioning, reliability options (Simple, Mirror, Parity), etc.
  • On the Virtual Disk we will create an NTFS volume
  • On this volume we will create iSCSI Virtual Disks (LUNs) – a PowerShell sketch of these steps follows below

Storage Overview
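A rough PowerShell sketch of these steps; the pool, disk and volume names and sizes are examples, and the iSCSI Target Server role is assumed to be installed already:

    # Aggregate all poolable physical disks into a storage pool
    $disks = Get-PhysicalDisk -CanPool $true
    New-StoragePool -FriendlyName "Pool01" -StorageSubSystemFriendlyName "Storage Spaces*" -PhysicalDisks $disks

    # Create a thin-provisioned, mirrored virtual disk on the pool
    New-VirtualDisk -StoragePoolFriendlyName "Pool01" -FriendlyName "VDisk01" -ResiliencySettingName Mirror -ProvisioningType Thin -Size 500GB

    # Initialize the disk and create an NTFS volume on it
    Get-VirtualDisk -FriendlyName "VDisk01" | Get-Disk | Initialize-Disk -PartitionStyle GPT -PassThru |
        New-Partition -DriveLetter E -UseMaximumSize | Format-Volume -FileSystem NTFS -Confirm:$false

    # Create an iSCSI virtual disk (LUN) on the new volume
    New-IscsiVirtualDisk -Path "E:\iSCSIVirtualDisks\LUN01.vhd" -SizeBytes 100GB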



Hyper-V Server: Enable Jumbo Frames on Intel NICs


If you are using iSCSI as your storage connection, you can gain a lot of performance by enabling jumbo frames. It is important that your storage, switches and network cards all support jumbo frames.

Now, if all components support jumbo frames, you have to enable them on your network adapters.

First you have to enable jumbo frames for the operating system. This is very simply done with the netsh command-line tool.

jumboframes2
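The netsh commands for this look roughly like the following; the interface name and MTU value are examples and have to match your environment:

    # Show the current MTU of all interfaces
    netsh interface ipv4 show subinterfaces

    # Set the MTU for the iSCSI interface persistently (example interface name)
    netsh interface ipv4 set subinterface "iSCSI1" mtu=9000 store=persistent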

Now, if you are using Intel network cards, you have to enable jumbo frames in the registry.

There you can see all of your network interfaces, and you can simply change the "*JumboPacket" value to 9014.

If you don't know which network interfaces are the iSCSI interfaces, you can check the interface GUID here:

jumboframes
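As a sketch, the adapters live under the network class key in the registry; the subkey index (0007 here) is only an example and has to match your Intel iSCSI NIC, which you can verify via the DriverDesc and NetCfgInstanceId (interface GUID) values:

    # List the adapter subkeys under the network class GUID with their descriptions
    reg query "HKLM\SYSTEM\CurrentControlSet\Control\Class\{4D36E972-E325-11CE-BFC1-08002BE10318}" /s /v DriverDesc

    # Set the jumbo packet size for the matching adapter subkey (0007 is just an example index)
    reg add "HKLM\SYSTEM\CurrentControlSet\Control\Class\{4D36E972-E325-11CE-BFC1-08002BE10318}\0007" /v "*JumboPacket" /d 9014 /f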

If you need more information on iSCSI and Hyper-V, check out this blog post.