
Windows Azure Pack – Virtual Machine Cloud

One of the big features of Windows Azure Pack right now is the integration of an Infrastructure-as-a-Service offering, or in other words, a Virtual Machine Cloud. VM Cloud allows you to integrate your existing System Center Virtual Machine Manager 2012 R2 and Hyper-V environment via the Service Provider Foundation (SPF) API, so you can create an offering similar to the Windows Azure IaaS experience.

I have had the chance to work on several Windows Azure Pack projects where we integrated the Virtual Machine Cloud and created offerings for service providers as well as for enterprise companies for internal use. Two parts of the solution I really like are the integration of Hyper-V Network Virtualization and the integration of VM Roles, which are basically a way to deploy services instead of just Virtual Machines. Microsoft also finally fixed the issue we had in App Controller and other products with connecting to a Virtual Machine via the Hyper-V console from outside your organization, by using a Remote Desktop Gateway.

Architecture

To deploy the VM Cloud or IaaS offering in Windows Azure Pack you need several roles, services and components. If you want to know more about the Windows Azure Pack Architecture, check out the following blog post.

Windows Azure Pack VM Cloud Architecture

Picture Source: TechNet

  • Hyper-V – You need a Hyper-V environment for hosting virtual machines.
  • System Center Virtual Machine Manager – In a VM Cloud environment your Hyper-V resources need to be connected to Virtual Machine Manager. You can connect multiple Virtual Machine Manager servers, so-called VMM stamps. If you are using Hyper-V Network Virtualization (NVGRE), make sure you build a highly available VMM cluster for each stamp.
  • Service Provider Foundation – To bring those VMM stamps into Windows Azure Pack you need an API solution called Service Provider Foundation. Every VMM stamp has to be registered in Windows Azure Pack through a Service Provider Foundation endpoint.
  • Windows Azure Pack Tenant Portal – The portal for tenants/customers to manage Virtual Machines.
  • Windows Azure Pack Admin Portal – The portal for administrators to register new VMM stamps and create offerings for customers.
  • Service Management API – You always need this if you deploy Windows Azure Pack.
  • SQL Server – SQL Server for Windows Azure Pack, SPF and Virtual Machine Manager.
  • RD Gateway – Remote Desktop Gateway for console connections to Virtual Machines.
  • System Center Operations Manager – If you want to monitor your VM environment or do chargeback, you need Operations Manager and Service Reporting.

How to setup VM Cloud in Windows Azure Pack

After you have set up your environment, you have to register your Service Provider Foundation and VMM in Windows Azure Pack. Enter the address of the SPF endpoint and the address of the VMM server.

WAP Register SPF

You can then add VMM servers or VMM stamps to Windows Azure Pack.

VMMStamp in WAP

You can now select the cloud you want to use for your offering. When you create a new plan, you can select which VMM stamp and cloud should be used for the offering. You can limit resources such as Virtual Machine count, CPU cores, RAM, storage, VM networks, templates and more inside plans and add-ons. You can then offer these plans and add-ons to your customers.

WAP VM Cloud Plan

You can also extend the solution by adding an SMA Web Service endpoint to Windows Azure Pack and configuring it for Virtual Machine Clouds. With this, you can link SMA runbooks to actions in Windows Azure Pack VM Cloud, SPF and Virtual Machine Manager.

WAP Link SMA Runbook to VMM Action

If you need to enable console access to Virtual Machines for tenant users, you also have to register a Remote Desktop Gateway. This allows users to access a Virtual Machine without having an IP address set inside the VM.

Tenant VM Console Access WAP

Remember, there are many more steps to take, for example configuring the fabric in System Center Virtual Machine Manager or configuring the Remote Desktop Gateway to have access to the Hyper-V hosts. And if you are doing NVGRE (Hyper-V Network Virtualization) you may also want to have NVGRE gateways in place so customers can leave the virtual network and connect to the physical network or the internet. So setting all of this up is one part; designing and configuring it the right way is another.



Building Clouds

Windows Azure for your Datacenter

Some years back, when Microsoft launched Windows Azure, I was working for a hosting company, and I remember that we were talking about this and hoping that Microsoft would make Windows Azure available for hosters. At the beginning of last year Microsoft made this step by releasing Windows Azure Services for Windows Server; together with Windows Server, Hyper-V and System Center, you could build your own Windows Azure. With the R2 wave of System Center and Windows Server, Microsoft also renamed Windows Azure Services for Windows Server to Windows Azure Pack (wow, what a great idea ;-)) and added some great new functionality to the product itself.

Windows Azure Pack Architecture Overview

Windows Azure Pack is a collection of Windows Azure technologies, available to Microsoft customers at no additional cost for installation into your data center. It runs on top of Windows Server 2012 R2 and System Center 2012 R2 and, through the use of the Windows Azure technologies, enables you to offer a rich, self-service, multi-tenant cloud, consistent with the public Windows Azure experience.

Windows Azure Pack is basically a framework that lets you build several offerings for customers.

  • VM Cloud – This is an Infrastructure-as-a-Service (IaaS) offering that allows customers to deploy and manage Windows and Linux Virtual Machines, including VM templates, scaling and virtual networking options.
  • Web Sites – a service that helps provide a high-density, scalable shared web hosting platform for ASP.NET, PHP, and Node.js web applications. The Web Sites service includes a customizable web application gallery of open source web applications and integration with source control systems for custom-developed web sites and applications.
  • Service Bus – a service that provides reliable messaging services between distributed applications. The Service Bus service includes queued and topic-based publish/subscribe capabilities.
  • SQL and MySQL – services that provide database instances. These databases can be used in conjunction with the Web Sites service.
  • Automation and Extensibility – the capability to automate and integrate additional custom services into the services framework, including a runbook editor and execution environment.

Source: TechNet

On top of this, Windows Azure Pack offers two management portals, one for tenants and one for administrators, which are built on top of the Service Management API. The Service Management API is a RESTful API which allows you to build custom scenarios such as custom portals or billing integrations on top of the Azure Pack framework.

Windows Azure Pack IaaS

In the last months I have had time to work on several different projects involving Windows Azure Pack, mainly the VM Cloud and automation integration, as well as some work with the Service Management API and some customization together with Stefan Johner and Fulvio Ferrarini from itnetx. I will write some blog posts about Windows Azure Pack and the things we have done and are doing right now.

If you are looking for some good blogs around Windows Azure Pack you should definitely check out the blogs from Marc van Eijk, Hans Vredevoort and Kristian Nese, or the Windows Azure Pack Wiki on TechNet. And by the way, Windows Azure Pack is not just made for hosters and service providers, it is also a great solution for enterprises; check out why by reading Michael Rueefli's blog.

 



Capacity Planner for Hyper-V Replica

Capacity Planner for Hyper-V Replica updated

Back in 2013 Microsoft released a tool called Capacity Planner for Hyper-V Replica, which allowed IT administrators to measure and plan their Replica integration based on workload, storage, network, and server characteristics. Today Aashish Ramdas announced on the TechNet Virtualization blog that Microsoft has updated the Hyper-V Replica Capacity Planner. The new version now supports Windows Server 2012 R2 Hyper-V, Windows Azure Hyper-V Recovery Manager and some other improvements based on customer feedback.

  • Support for Windows Server 2012 and Windows Server 2012 R2 in a single tool
  • Support for Extended Replication
  • Support for virtual disks placed on NTFS, CSVFS, and SMB shares
  • Monitoring of multiple standalone hosts simultaneously
  • Improved performance and scale – up to 100 VMs in parallel
  • Replica site input is optional – for those still in the planning stage of a DR strategy
  • Report improvements – e.g. also reporting the peak utilization of resources
  • Improved guidance in documentation
  • Improved workflow and user experience

It’s great to see Microsoft improving free tools which help implement their solutions.



System Center Logo

Known Issues for Virtual Machine Manager in System Center 2012 R2

On TechNet, Microsoft provides a list of known issues with Virtual Machine Manager in the Release Notes for Virtual Machine Manager in System Center 2012 R2. If you run into trouble or bugs in System Center 2012 R2 Virtual Machine Manager you should definitely check out that page. Here is a short list as of January 20, 2014:

  • File servers under management will go into an unknown state after upgrade
  • VMs with shared VHDXs are displayed as “Incomplete VM Configuration”
  • File Server VM migration fails from an MSU host, resulting in Incomplete state
  • A VM on a VMware ESX host cannot be assigned to a cloud while it is running
  • Windows Server Gateway: all gateway virtual machines on a given host cluster must use same back-end network
  • Cannot manage Spaces storage for Scale-out file servers on Windows Server 2012
  • Operations Manager Health Service Might Restart Creating Inaccurate Chargeback Metrics
  • Cannot enter or change application and SQL Server settings directly in VM templates
  • Disk classifications not displayed correctly after a new LUN has been registered
  • VMM no longer supports VDS Hardware Providers
  • VMM cannot manage General Use file servers on Windows Server 2012 R2
  • VMM does not manage Storage Tiering in Windows Server 2012 R2
  • VMM does not manage Write-back cache in Windows Server 2012 R2
  • File Server Tasks Not Supported on Untrusted Nodes
  • Incorrect Server Error Code Returned for Invalid Library Paths
  • Windows Azure Hyper-V Recovery Manager Does Not Accept Replication Frequency Changes
  • VMM does Not Provide Centralized Management of WWN Pools
  • Windows Server Operating System MP Disabled by Default
  • Service deployment fails and the guest agent on virtual machines does not work as expected
  • Management of VMs deployed directly on NPIV-exposed LUNs is not supported
  • Windows PowerShell help might not open as expected on computers running
  • Deploying virtual machines to hosts on perimeter networks might fail
  • Registering a storage file share on a library server might cause an error
  • Failing over and migrating a replicated virtual machine on a cluster node might result in an unstable configuration
  • Modifying Tenant Administrator permissions affects permissions for self-service user roles
  • Member-level permissions for network quotas are not applied for the Tenant Administrator role
  • Canceling and restarting when creating a new virtual machine might fail
  • BlogEngine service deployment fails

Check out the TechNet article for workarounds and answers.



Hyper-V 2012 R2 Poster

TechNet Switzerland Event: From VMware to Hyper-V

On Tuesday, December 03, I will present together with Markus Erlacher, former Microsoft Switzerland TSP and now Managing Director at itnetx gmbh, at a free Microsoft Switzerland TechNet event. The topic this time is why and how to migrate from VMware to a Microsoft Hyper-V and System Center environment. The event will cover an overview of Windows Server 2012 R2 Hyper-V and System Center 2012 R2 and all the virtualization features you need in your environment. In the afternoon session we will also cover how you can migrate from VMware to Hyper-V so you can quickly enjoy the new private cloud solutions from Microsoft.

The event is free and will be held at the Microsoft Conference Center in Wallisellen, Zürich. To join, register on the Microsoft Event Website. The event will be in German and will not be streamed to the web.

Agenda

Tuesday, December 03

08:30 – Coffee
09:00 – Session 1 – Hyper-V Overview (Virtual Machines, Hyper-V Manager, Virtual Switch, VHDX format)
10:30 – Coffee Break
10:45 – Session 2 – Hyper-V Advanced Features (Hyper-V Networking and Storage, Hyper-V over SMB, Network Virtualization)
12:00 – Lunch
13:00 – Session 3 – Management (VM and Fabric Management with System Center Virtual Machine Manager, PowerShell and more…)
14:30 – Coffee Break
14:45 – Session 4 – VMware Migration (Migration from VMware to Hyper-V, Tools, Best practices, automation, real world example)
16:15 – End

More Information and registration

More information and registration on the Microsoft Event Website.



ConnectX-3 Pro NVGRE Offloading RDMA

Hyper-V Network Virtualization: NVGRE Offloading

At the moment I am spending a lot of time working with Hyper-V Network Virtualization in Hyper-V, System Center Virtual Machine Manager and the new Network Virtualization Gateway. I am also creating architecture design references for hosting providers which are going to use Hyper-V Network Virtualization and SMB as storage. If you are going for any kind of network virtualization (Hyper-V Network Virtualization or VXLAN), you want to make sure you can offload the NVGRE traffic to the network adapter.

The great news here is that the Mellanox ConnectX-3 Pro not only offers RDMA (RoCE), which is used for SMB Direct, but also hardware offloads for NVGRE- and VXLAN-encapsulated traffic. This should improve the performance of network virtualization dramatically.
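As a quick check, you can verify on a Windows Server 2012 R2 host whether an adapter exposes these capabilities with PowerShell. This is just a sketch; the adapter name "Ethernet 1" is a placeholder for your environment:

```powershell
# Check whether the NIC supports NVGRE encapsulated task offload.
# "Ethernet 1" is a placeholder adapter name - adjust for your system.
Get-NetAdapterEncapsulatedPacketTaskOffload -Name "Ethernet 1"

# Check RDMA capability (used for SMB Direct)
Get-NetAdapterRdma -Name "Ethernet 1"

# Enable NVGRE task offload on the adapter if it is supported
Enable-NetAdapterEncapsulatedPacketTaskOffload -Name "Ethernet 1"
```

Running the first cmdlet without `-Name` lists the offload state for all adapters, which is handy when comparing NICs in a new host design.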

NVGRE Offloading

More information on the Mellanox ConnectX-3 Pro:

ConnectX-3 Pro 10/40/56GbE adapter cards with hardware offload engines to Overlay Networks (“Tunneling”), provide the highest performing and most flexible interconnect solution for PCI Express Gen3 servers used in public and private clouds, enterprise data centers, and high performance computing.

Public and private cloud clustered databases, parallel processing, transactional services, and high-performance embedded I/O applications will achieve significant performance improvements resulting in reduced completion time and lower cost per operation.

Mellanox ConnectX-3 Pro

Benefits:

  • 10/40/56Gb/s connectivity for servers and storage
  • World-class cluster, network, and storage performance
  • Cutting edge performance in virtualized overlay networks (VXLAN and NVGRE)
  • Guaranteed bandwidth and low-latency services
  • I/O consolidation
  • Virtualization acceleration
  • Power efficient
  • Scales to tens-of-thousands of nodes

Key features:

  • 1us MPI ping latency
  • Up to 40/56GbE per port
  • Single- and Dual-Port options available
  • PCI Express 3.0 (up to 8GT/s)
  • CPU offload of transport operations
  • Application offload
  • Precision Clock Synchronization
  • HW Offloads for NVGRE and VXLAN encapsulated traffic
  • End-to-end QoS and congestion control
  • Hardware-based I/O virtualization
  • RoHS-R6

Virtualized Overlay Networks — Infrastructure as a Service (IaaS) cloud demands that data centers host and serve multiple tenants, each with their own isolated network domain over a shared network infrastructure. To achieve maximum efficiency, data center operators are creating overlay networks that carry traffic from individual Virtual Machines (VMs) in encapsulated formats such as NVGRE and VXLAN over a logical “tunnel,” thereby decoupling the workload’s location from its network address. Overlay Network architecture introduces an additional layer of packet processing at the hypervisor level, adding and removing protocol headers for the encapsulated traffic. The new encapsulation prevents many of the traditional “offloading” capabilities (e.g. checksum, TSO) from being performed at the NIC. ConnectX-3 Pro effectively addresses the increasing demand for an overlay network, enabling superior performance by introducing advanced NVGRE and VXLAN hardware offload engines that enable the traditional offloads to be performed on the encapsulated traffic. With ConnectX-3 Pro, data center operators can decouple the overlay network layer from the physical NIC performance, thus achieving native performance in the new network architecture.

I/O Virtualization — ConnectX-3 Pro SR-IOV technology provides dedicated adapter resources and guaranteed isolation and protection for virtual machines (VMs) within the server. I/O virtualization with ConnectX-3 Pro gives data center managers better server utilization while reducing cost, power, and cable complexity.

RDMA over Converged Ethernet — ConnectX-3 Pro utilizing IBTA RoCE technology delivers similar low-latency and high- performance over Ethernet networks. Leveraging Data Center Bridging capabilities, RoCE provides efficient low latency RDMA services over Layer 2 Ethernet. With link-level interoperability in existing Ethernet infrastructure, Network Administrators can leverage existing data center fabric management solutions.

Sockets Acceleration — Applications utilizing TCP/UDP/IP transport can achieve industry- leading throughput over 10/40/56GbE. The hardware-based stateless offload engines in ConnectX-3 Pro reduce the CPU overhead of IP packet transport. Socket acceleration software further increases performance for latency sensitive applications.

Storage Acceleration — A consolidated compute and storage network achieves significant cost-performance advantages over multi-fabric networks. Standard block and file access protocols can leverage Ethernet or RDMA for high-performance storage access.

Software Support – All Mellanox adapter cards are supported by Windows, Linux distributions, VMware, FreeBSD, Ubuntu, and Citrix XenServer. ConnectX-3 Pro adapters support OpenFabrics-based RDMA protocols and software and are compatible with configuration and management tools from OEMs and operating system vendors.

 



SMB Scale-Out File Server

Hyper-V over SMB: Scale-Out File Server and Storage Spaces

On some community pages my blog post started some discussions about why you should use SMB 3.0 and why you should use Windows Server as a storage solution. Let me be clear here: you don't need Windows Server as storage to make use of the Hyper-V over SMB 3.0 scenario, you can use storage from vendors like NetApp or EMC as well. But in my opinion you can get a huge benefit from using Windows Server in different scenarios.

  • First, you can use Windows Server together with Storage Spaces, which offers you a really great, enterprise-grade and scalable storage solution at low cost.
  • Second, you can use Windows Server to mask your existing storage by building a layer between the Hyper-V hosts and your storage. This lets you easily extend your storage, even with other vendors.

At the moment there are not a lot of vendors out there which offer SMB 3.0 in their storage solutions. EMC was one of the first to support SMB 3.0, and with ONTAP 8.2 NetApp now supports SMB 3.0 as well. But if you want to build an SMB layer for a storage system which does not support SMB 3.0, to mask your storage so you can mix different vendors or use it with Windows Server 2012 Storage Spaces, the solution is the Scale-Out File Server cluster. Microsoft has offered file server clusters for a while now, but since those were active/passive clusters, they were not really a great solution for a Hyper-V storage environment (even if a lot of small iSCSI storage boxes are active/passive as well).

Basically, the Scale-Out File Server lets you cluster up to 8 file servers which all share CSVs (Cluster Shared Volumes), as you know them from Hyper-V hosts, and present SMB shares created on the CSV volumes. And the great thing about it: every node can offer the same share, so this is an active/active solution with up to 8 nodes. Together with SMB Transparent Failover, the Hyper-V host does not really get any storage downtime if one of the SOFS nodes fails.
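As a rough sketch, adding the Scale-Out File Server role to an existing failover cluster and publishing a continuously available share for Hyper-V could look like this. All names, the domain, and the CSV path are placeholders for your environment:

```powershell
# Add the Scale-Out File Server role to an existing failover cluster.
# "SOFS01" becomes the distributed name the Hyper-V hosts connect to.
Add-ClusterScaleOutFileServerRole -Name "SOFS01" -Cluster "FSCluster01"

# Create a continuously available SMB share on a CSV volume and grant
# the Hyper-V host and cluster computer accounts full access.
New-SmbShare -Name "VMStore01" `
    -Path "C:\ClusterStorage\Volume1\Shares\VMStore01" `
    -FullAccess "DOMAIN\HyperVHost01$", "DOMAIN\HyperVCluster01$" `
    -ContinuouslyAvailable $true
```

The computer accounts (note the trailing `$`) need access on both the share and the NTFS folder, which matches the permission requirement mentioned above.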

SMB Scale-Out File Server

For the storage guys out there: think of the cluster nodes as your storage controllers. Most of the time you have 2 controllers for failover and a little bit of manual load balancing, where one LUN is served by controller 1 and another LUN by controller 2. With the Scale-Out File Server you don't really have that problem, since the SMB share is offered on all hosts at the same time, with up to 8 "controllers". With Windows Server 2012, one Hyper-V host connected to one of the SOFS nodes and used multiple paths to this node via SMB Multichannel; another Hyper-V host connected automatically to the second SOFS node, so both nodes were active at the same time. If one of the SOFS nodes dies, the Hyper-V host fails over to the other SOFS node without any downtime for the Hyper-V Virtual Machines.

In Windows Server 2012 R2, Microsoft worked really hard to make this scenario even better. In Windows Server 2012 R2 a Hyper-V host can be connected to multiple SOFS nodes at the same time, which means that VM1 and VM2 running on the same Hyper-V host can be served by two different SOFS nodes.

Advantages of the Scale-Out File Server

  • Mask your storage and use different vendors
  • Scale up to 8 nodes (controllers)
  • Active/Active configuration
  • Transparent Failover
  • Supporting features like SMB Multichannel and SMB Direct
  • Easy entry point with SMB shares
  • Easy configuration, Hyper-V host and Cluster objects need access on the shares
  • Same Windows Server Failover Cluster Technology with the same management tools

Storage Spaces

As already mentioned, you can use your existing storage appliance as storage for your Scale-Out File Server CSVs, or you can use Windows Server Storage Spaces, which allows you to build a great storage solution for a lot less money. Again, the Scale-Out File Server cluster and Windows Server Storage Spaces are two separate things: you don't need a SOFS cluster for Storage Spaces and you don't need Storage Spaces for a SOFS cluster, but of course both solutions work absolutely great together.

Windows Server Storage Spaces vs Traditional Storage

Microsoft first released its Software-Defined Storage solution called Storage Spaces in Windows Server 2012. It basically allows you to build your own storage solution based on simple JBOD hardware. Storage Spaces is a really cost-effective storage solution which allows companies to save up to 75% of storage costs compared to traditional SAN storage. It lets you pool disks connected via SAS (in Windows 8 and Windows 8.1, USB works as well for home users) and create different virtual disks (not VHDs) on these storage pools. The virtual disks, also called Storage Spaces, can have different resiliency levels like Simple, Mirror or Parity, and you can also create multiple disks on one storage pool and even use thin provisioning. This sounds a lot like a traditional storage appliance, right? True, this is not something totally different; it is something storage vendors have done for a long time. But of course you pay a lot of money for the black box the storage vendors offer you. With Storage Spaces Microsoft allows you to build your "own storage" on commodity hardware, which will save you a lot of money.
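The pool and virtual disk creation described above can be sketched in PowerShell like this. The pool and disk names are placeholders, and the subsystem wildcard assumes a standalone Storage Spaces server:

```powershell
# Gather all physical disks that are eligible for pooling
$disks = Get-PhysicalDisk -CanPool $true

# Create a storage pool from those disks ("Pool01" is a placeholder name)
New-StoragePool -FriendlyName "Pool01" `
    -StorageSubSystemFriendlyName "Storage Spaces*" `
    -PhysicalDisks $disks

# Create a mirrored, thinly provisioned virtual disk (a "Storage Space")
New-VirtualDisk -StoragePoolFriendlyName "Pool01" `
    -FriendlyName "VDisk01" `
    -ResiliencySettingName Mirror `
    -Size 1TB -ProvisioningType Thin
```

Thin provisioning lets the 1 TB space consume physical capacity only as data is written, which is exactly the behavior traditional arrays charge a premium for.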

Storage Space

This is not just a "usable solution"; it comes with some high-end storage features which make Storage Spaces and the Windows file server a perfect storage solution at low cost.

  • Windows Server Storage Spaces let you use cheap hardware
  • Offers you different types of resiliency, like Simple (Stripe), Mirror or Parity (also 3-way Mirror and Parity)
  • Offers you thin-provisioning
  • Windows Server File Server allows you to share the Storage via SMB, iSCSI or NFS.
  • Read-Cache – Windows Server CSV Cache offers you Memory based Read-Cache (up to 80% in Windows Server 2012 R2)
  • Continuous availability – Storage Pools and Disks can be clustered with the Microsoft Failover Cluster so if one server goes down the virtual disks and file shares are still available.
  • SMB copy offload – Offloading copy actions to the storage.
  • Snapshots – Create Snapshots and  clone virtual disks on a storage pool.
  • Flexible resiliency options – In Windows Server 2012 you could create Mirror Spaces with a two-way or three-way mirror, Parity Spaces with single parity, and Simple Spaces with no data resiliency. New in R2, parity spaces can be used in clustered pools and there is also a new dual-parity option. (enhanced in 2012 R2)
  • Enhanced rebuilding – The rebuild speed for failed disks is improved. (enhanced in 2012 R2)
  • Storage Tiering – Windows Server 2012 R2 allows you to use different kinds of disks and automatically moves "hot" data from slower SAS disks to fast SSD storage. (new in 2012 R2)
  • Write-Back Cache – This feature allows data to be written to SSD first and moved to the slower SAS tier later. (new in 2012 R2)
  • Data Deduplication – Data Deduplication was already included in Windows Server 2012, but it is enhanced in Windows Server 2012 R2: it can now be used together with Cluster Shared Volumes (CSV) and supports VDI virtual machines. (enhanced in 2012 R2)

You can get more information about Storage Spaces in Windows Server 2012 R2 in my blog post: What’s new in Windows Server 2012 R2 Storage Spaces

Combine Windows Server Storage Spaces and the Scale-Out File Server Cluster

As mentioned, these two technologies do not require each other, but if you combine them you get a really great solution. You can build your own storage based on Windows Server, which not only allows you to share storage via SMB 3.0 but also via NFS or iSCSI.

Windows Server 2012 Storage Spaces and File Server

Many of the concerns I have heard were about the scalability of Storage Spaces. But as far as I can see, scale is absolutely no problem for Windows Server Storage Spaces. First of all, you can build up to 8 nodes in a single cluster, which basically means you create an 8-node active/active solution. With SMB Multichannel you can use multiple NICs, for example 10GbE, InfiniBand, or even faster network adapters. You can also make use of RDMA, which brings latency down to a minimum.
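To see whether SMB Multichannel and RDMA are actually being used between a Hyper-V host and the file servers, a few built-in cmdlets help (a sketch; run them while SMB traffic is flowing):

```powershell
# On the Hyper-V host (SMB client): show active multichannel connections
# and whether they run over RSS- or RDMA-capable interfaces
Get-SmbMultichannelConnection

# List the client-side network interfaces SMB considers usable
Get-SmbClientNetworkInterface

# On the SOFS node (SMB server): list the server-side interfaces,
# including their RDMA and RSS capabilities
Get-SmbServerNetworkInterface
```

If an RDMA-capable NIC does not show up here, checking `Get-NetAdapterRdma` on both sides is usually the next step.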

Scale Windows Server Storage Spaces

To scale this even bigger you can go two ways: you could set up a new Scale-Out File Server cluster and create new file shares where virtual machines can be placed, or you could extend the existing cluster with more servers and more shared SAS disk chassis, which don't have to be connected to all of the existing servers. This is possible because of features like CSV redirected mode: hosts can access disks from other hosts even if they are not connected directly via SAS; instead, the node uses the Ethernet connection between the hosts.

Scale Windows Server Storage Spaces 2

New features and enhancements in Windows Server 2012 R2 and System Center 2012 R2

With the 2012 R2 releases of Windows Server and System Center Microsoft made some great enhancements to Storage Spaces, Scale-Out File Server, SMB, Hyper-V and System Center. So if you have the chance to work with R2 make sure you check the following:

  • Flexible resiliency options – In Windows Server 2012 you could create Mirror Spaces with a two-way or three-way mirror, Parity Spaces with single parity, and Simple Spaces with no data resiliency. New in R2, parity spaces can be used in clustered pools and there is also a new dual-parity option. (enhanced in 2012 R2)
  • Enhanced rebuilding – The rebuild speed for failed disks is improved. (enhanced in 2012 R2)
  • Storage Tiering – Windows Server 2012 R2 allows you to use different kinds of disks and automatically moves "hot" data from slower SAS disks to fast SSD storage. (new in 2012 R2)
  • Write-Back Cache – This feature allows data to be written to SSD first and moved to the slower SAS tier later. (new in 2012 R2)
  • Data Deduplication – Data Deduplication was already included in Windows Server 2012, but it is enhanced in Windows Server 2012 R2: it can now be used together with Cluster Shared Volumes (CSV) and supports VDI virtual machines. (enhanced in 2012 R2)
  • Read-Cache – Windows Server CSV Cache offers you Memory based Read-Cache (up to 80% in Windows Server 2012 R2)
  • Management – Management of Hyper-V and Scale-Out File Servers as well as Storage Spaces right in System Center 2012 R2 Virtual Machine Manager.
  • Deployment – Deploy new Scale-Out File Server Clusters with and without Storage Spaces directly from System Center 2012 R2 Virtual Machine Manager via Bare-Metal Deployment.
  • Rebalancing of Scale-Out File Server clients – SMB client connections are tracked per file share (instead of per server), and clients are then redirected to the cluster node with the best access to the volume used by the file share. This improves efficiency by reducing redirection traffic between file server nodes.
  • Improved performance of SMB Direct (SMB over RDMA) – Improves performance for small I/O workloads by increasing efficiency when hosting workloads with small I/Os.
  • SMB event messages – SMB events now contain more detailed and helpful information. This makes troubleshooting easier and reduces the need to capture network traces or enable more detailed diagnostic event logging.
  • Shared VHDX files – Simplifies the creation of guest clusters by using shared VHDX files for shared storage inside the virtual machines. This also masks the storage from customers if you are a service provider.
  • Hyper-V Live Migration over SMB – Enables you to perform a live migration of virtual machines by using SMB 3.0 as a transport. This allows you to take advantage of key SMB features, such as SMB Direct and SMB Multichannel, by providing high speed migration with low CPU utilization.
  • SMB bandwidth management – Enables you to configure SMB bandwidth limits to control different SMB traffic types. There are three SMB traffic types: default, live migration, and virtual machine.
  • Multiple SMB instances on a Scale-Out File Server – Provides an additional instance on each cluster node in Scale-Out File Servers specifically for CSV traffic. A default instance can handle incoming traffic from SMB clients that are accessing regular file shares, while another instance only handles inter-node CSV traffic.

(Source: TechNet: What’s New for SMB in Windows Server 2012 R2)

I hope this blog post helps you understand a little bit more about the Scale-Out File Server and Storage Spaces, and how you can create a great storage solution for your cloud environment.

By the way, the pictures and information are taken from people like Bryan Matthew (Microsoft), Jose Barreto (Microsoft) and Jeff Woolsey (Microsoft).