Tag: SMB


Basic Networking PowerShell cmdlets cheatsheet to replace netsh, ipconfig, nslookup and more

Around four years ago I wrote a blog post about how to replace netsh with Windows PowerShell, which covered basic PowerShell networking cmdlets. After working with Microsoft Azure, Nano Server and Containers, PowerShell together with networking has become more and more important. I created this little cheat sheet to make it easy for people to get started.

Basic Networking PowerShell cmdlets

Get-NetIPConfiguration

Get the IP Configuration (ipconfig with PowerShell)

Get-NetIPConfiguration

List all Network Adapters

Get-NetAdapter

Get a specific network adapter by name

Get-NetAdapter -Name *Ethernet*

Get more information, such as VLAN ID, link speed and connection status

Get-NetAdapter | ft Name, Status, Linkspeed, VlanID

Get driver information

Get-NetAdapter | ft Name, DriverName, DriverVersion, DriverInformation, DriverFileName

Get adapter hardware information. This can be really useful when you need to know the PCI slot of the NIC.

Get-NetAdapterHardwareInfo

Disable and Enable a Network Adapter

Disable-NetAdapter -Name "Wireless Network Connection"
Enable-NetAdapter -Name "Wireless Network Connection"

Rename a Network Adapter

Rename-NetAdapter -Name "Wireless Network Connection" -NewName "Wireless"

IP Configuration using PowerShell

PowerShell Networking Get-NetIPAddress

Get IP and DNS address information

Get-NetAdapter -Name "Local Area Connection" | Get-NetIPAddress

Get IP address only

(Get-NetAdapter -Name "Local Area Connection" | Get-NetIPAddress).IPv4Address

Get DNS Server Address information

Get-NetAdapter -Name "Local Area Connection" | Get-DnsClientServerAddress

Set IP Address

New-NetIPAddress -InterfaceAlias "Wireless" -IPv4Address 10.0.1.95 -PrefixLength "24" -DefaultGateway 10.0.1.1

or if you want to change an existing IP address

Set-NetIPAddress -InterfaceAlias "Wireless" -IPv4Address 192.168.12.25 -PrefixLength "24"

Remove IP Address

Get-NetAdapter -Name "Wireless" | Remove-NetIPAddress
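
Note that Remove-NetIPAddress does not remove a configured default gateway. If you want to clear that as well, you can remove the route separately; a small sketch, assuming the same interface alias:

# Remove the default route (default gateway) for the interface
Remove-NetRoute -InterfaceAlias "Wireless" -DestinationPrefix 0.0.0.0/0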

Set DNS Server

Set-DnsClientServerAddress -InterfaceAlias "Wireless" -ServerAddresses "10.10.20.1","10.10.20.2"

Set interface to DHCP

Set-NetIPInterface -InterfaceAlias "Wireless" -Dhcp Enabled
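
If the interface previously had static DNS servers configured, you may want to reset those to DHCP as well; a small follow-up, assuming the same interface alias as above:

# Reset the DNS servers so they are obtained via DHCP again
Set-DnsClientServerAddress -InterfaceAlias "Wireless" -ResetServerAddresses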

Clear DNS Cache with PowerShell

You can also manage your DNS cache with PowerShell.

List DNS Cache

Get-DnsClientCache

Clear DNS Cache

Clear-DnsClientCache

Ping with PowerShell

PowerShell Networking Test-NetConnection Ping

For a simple ping with PowerShell, you can use the Test-Connection cmdlet:

Test-Connection thomasmaurer.ch

Test-NetConnection offers a more advanced way to test a connection:

Test-NetConnection -ComputerName www.thomasmaurer.ch

Get some more details from Test-NetConnection

Test-NetConnection -ComputerName www.thomasmaurer.ch -InformationLevel Detailed

Ping multiple IP using PowerShell

# Ping a range of addresses; replace x.x.x with your subnet prefix
1..99 | % { Test-NetConnection -ComputerName x.x.x.$_ } | FT -AutoSize

Tracert

PowerShell Tracert

Tracert with PowerShell

Test-NetConnection www.thomasmaurer.ch -TraceRoute

Portscan with PowerShell

PowerShell Portscan

Use PowerShell to check for an open port

Test-NetConnection -ComputerName www.thomasmaurer.ch -Port 80
Test-NetConnection -ComputerName www.thomasmaurer.ch -CommonTCPPort HTTP

NSlookup in PowerShell

PowerShell Networking NSlookup

NSlookup using PowerShell:

Resolve-DnsName www.thomasmaurer.ch
Resolve-DnsName www.thomasmaurer.ch -Type MX -Server 8.8.8.8

Route in PowerShell

PowerShell Networking Route

How to replace the route command with PowerShell

Get-NetRoute -Protocol Local -DestinationPrefix 192.168*
Get-NetRoute -InterfaceAlias Wi-Fi
New-NetRoute -DestinationPrefix "10.0.0.0/24" -InterfaceAlias "Ethernet" -NextHop 192.168.192.1

NETSTAT in PowerShell

PowerShell Networking Netstat

How to replace NETSTAT with PowerShell

Get-NetTCPConnection
Get-NetTCPConnection -State Established

NIC Teaming PowerShell commands

Create a new NIC team (network adapter team)

New-NetLbfoTeam -Name NICTEAM01 -TeamMembers Ethernet, Ethernet2 -TeamingMode SwitchIndependent -TeamNicName NICTEAM01 -LoadBalancingAlgorithm Dynamic
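
To verify the team afterwards, or to add a further member later, something like the following should work (the team name matches the example above; the extra adapter name is just an assumption):

Get-NetLbfoTeam -Name NICTEAM01
Get-NetLbfoTeamMember -Team NICTEAM01
# Add another (hypothetical) adapter to the team
Add-NetLbfoTeamMember -Name Ethernet3 -Team NICTEAM01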

SMB Related PowerShell commands

SMB PowerShell SMB Client Configuration

Get SMB Client Configuration

Get-SmbClientConfiguration

Get SMB Connections

Get-SmbConnection

Get SMB Multichannel connections

Get-SmbMultichannelConnection

Get SMB open files

Get-SmbOpenFile

Get SMB Direct (RDMA) adapters

Get-NetAdapterRdma
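
To cross-check which interfaces SMB itself sees as RSS- or RDMA-capable, you can also query the SMB client and server view (available since Windows Server 2012):

Get-SmbClientNetworkInterface
Get-SmbServerNetworkInterface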

Hyper-V Networking cmdlets

Hyper-V PowerShell Get-VMNetworkAdapter

Get and set Network Adapter VMQ settings

Get-NetAdapterVmq
# Disable VMQ (the wildcard targets all adapters; scope it to an adapter name if needed)
Set-NetAdapterVmq -Name "*" -Enabled $false
# Enable VMQ
Set-NetAdapterVmq -Name "*" -Enabled $true

Get VM Network Adapter

Get-VMNetworkAdapter -VMName Server01

Get VM Network Adapter IP Addresses

(Get-VMNetworkAdapter -VMName NanoConHost01).IPAddresses

Get VM Network Adapter Mac Addresses

(Get-VMNetworkAdapter -VMName NanoConHost01).MacAddress

I hope you enjoyed this post and found it helpful. If you think something important is missing, please add it in the comments.



Cisco UCS Hardware

Cisco UCS supports RoCE for Microsoft SMB Direct

As you may know, we use SMB as the storage protocol in several Hyper-V deployments using Scale-Out File Server and Storage Spaces, which adds a lot of value to your Hyper-V deployments. To boost performance, Microsoft uses RDMA or SMB Direct to accelerate storage network performance.

RDMA over Converged Ethernet (RoCE) allows direct memory access over an Ethernet network. RoCE is a link layer protocol, and hence it allows communication between any two hosts in the same Ethernet broadcast domain. RoCE delivers superior performance compared to traditional network socket implementations because of lower latency, lower CPU utilization and higher utilization of network bandwidth. Windows Server 2012 and later versions use RDMA for accelerating and improving the performance of SMB file sharing traffic and Live Migration. If you need to know more about RDMA or SMB Direct, check out my blog post: Hyper-V over SMB: SMB Direct

With Cisco UCS Manager Release 2.2(4), Cisco finally supports RoCE for SMB Direct. UCS Manager sends additional configuration information to the adapter while creating or modifying an Ethernet adapter policy.

Guidelines and Limitations for SMB Direct with RoCE

  • SMB Direct with RoCE is supported only on Windows Server 2012 R2.
  • SMB Direct with RoCE is supported only with Cisco UCS VIC 1340 and 1380 adapters.
  • Cisco UCS Manager does not support more than 4 RoCE-enabled vNICs per adapter.
  • Cisco UCS Manager does not support RoCE with NVGRE, VXLAN, NetFlow, VMQ, or usNIC.
  • You cannot use Windows Server NIC Teaming together with RDMA-enabled adapters in Windows Server 2012 and Windows Server 2012 R2, or you will lose the RDMA capability on these adapters.
  • Maximum number of queue pairs per adapter is 8192.
  • Maximum number of memory regions per adapter is 524288.
  • If you do not disable RoCE before downgrading Cisco UCS Manager from Release 2.2(4), the downgrade will fail.

Check out my post about Hyper-V over SMB:



Hyper-V General Access denied error

Hyper-V over SMB: Set SMB Constrained Delegation via PowerShell

When you have configured Hyper-V over SMB, which means the virtual machines are running on a Hyper-V host and are stored on an SMB file share, and you try to manage a virtual machine remotely from Hyper-V Manager or Failover Cluster Manager, you will run into access denied errors. The same error can also happen if you try to live migrate the virtual machine. This error is caused because you are using the credentials from the machine on which Hyper-V Manager or Failover Cluster Manager is running to access the file share via the Hyper-V host. This “double-hop” scenario is not allowed by default for security reasons. You can find more about Kerberos authentication on TechNet.

To avoid this error you have to configure SMB Constrained Delegation in Active Directory to allow this scenario for specific “double-hops”. In Windows Server 2012 Microsoft made setting up Kerberos constrained delegation much easier by introducing resource-based Kerberos Constrained Delegation, but it still wasn't that easy to deploy and required several steps. In Windows Server 2012 R2 Microsoft introduced new Windows PowerShell cmdlets to configure SMB Constrained Delegation directly from PowerShell. These cmdlets are offered by the Active Directory PowerShell module.

On your management box, or wherever you want to configure SMB Constrained Delegation, you have to install the Active Directory PowerShell module. (You don't need the module on the Hyper-V hosts or SMB file servers.)

Install-WindowsFeature RSAT-AD-PowerShell

Now you can use the following cmdlets.

  • Get-SmbDelegation -SmbServer FileServer
  • Enable-SmbDelegation -SmbServer FileServer -SmbClient HyperVHost
  • Disable-SmbDelegation -SmbServer FileServer [-SmbClient HyperVHost] [-Force]

For example, if you are running a two-node Hyper-V cluster and you use a Scale-Out File Server cluster (SOFS01) as virtual machine storage, the configuration could look like this:

Enable-SmbDelegation -SmbServer SOFS01 -SmbClient HyperV01
Enable-SmbDelegation -SmbServer SOFS01 -SmbClient HyperV02

Because these cmdlets only work with the new resource-based delegation, the Active Directory forest must be at the “Windows Server 2012” functional level. A functional level of Windows Server 2012 R2 is not required.
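
A quick way to check the forest functional level is the same Active Directory module; a minimal sketch:

Import-Module ActiveDirectory
# Returns e.g. Windows2012Forest or higher
(Get-ADForest).ForestMode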

And as I mentioned before you can also use System Center Virtual Machine Manager (VMM) to manage your storage, which uses a different approach and does not need the configuration of Kerberos Constrained Delegation.

 



Windows Server 2012 R2 Private Cloud Storage and Virtualization

Windows Server 2012 R2 Private Cloud Virtualization and Storage Poster and Mini-Posters

Yesterday Microsoft released the Windows Server 2012 R2 Private Cloud Virtualization and Storage Poster and Mini-Posters. This includes overviews of Hyper-V, Failover Clustering, Scale-Out File Server, Storage Spaces and much more. These posters provide a visual reference for understanding key private cloud storage and virtualization technologies in Windows Server 2012 R2. They focus on storage architecture, virtual hard disks, cluster shared volumes, scale-out file servers, storage spaces, data deduplication, Hyper-V, failover clustering and virtual hard disk sharing.

Besides the overview poster, Microsoft includes the following mini-posters:

  • Virtual Hard Disk and Cluster Shared Volumes Mini Poster
  • Virtual Hard Disk Sharing Mini Poster
  • Understanding Storage Architecture Mini Poster
  • Storage Spaces and Deduplication Mini Poster
  • Scale-Out and SMB Mini Poster
  • Hyper-V and Failover Clustering Mini Poster

You can get the posters from the Microsoft download page.




Violin Memory Scale-out Memory Platform with SMB 3.0 Integration

If you are evaluating storage vendors for Hyper-V, you really need to look at storage solutions with SMB 3.0 integration, because the Hyper-V over SMB scenario is the future. Until some weeks ago you had three options: EMC VNX, NetApp, or a Windows Server Scale-Out File Server with or without Storage Spaces. I haven't had the chance to test the EMC solution, but on paper it looks nice. The NetApp solution lacks a lot of integration, such as active-active configurations, and also lacks support for SMB Multichannel and SMB Direct (RDMA). A lot of customers are also looking at Storage Spaces with a Scale-Out File Server, which supports basically all the features you need but does not offer the benefits an appliance solution brings, such as vendor support.

Some weeks ago Violin Memory announced a solution called the Scale-out Memory Platform, which is built on their 6000 series. Until today, Violin Memory Flash Memory Arrays provided power for performance, high availability and scalability in enterprise block storage environments. Now these powerful arrays provide a new class of file-based solutions with Windows Server 2012 R2 installed directly on the array. Microsoft and Violin Memory worked closely to develop this class of solution by bringing the power of memory to Microsoft applications such as SQL Server and Microsoft Hyper-V.

This offers an appliance solution for the Hyper-V over SMB 3.0 scenario. At the moment there is not a lot of information out there, but I expect more shortly; if you want to know more, check out the Violin Memory page.



Add Windows-based File Server

Manage SOFS Cluster and File Shares from Virtual Machine Manager

In the past months I did several blog posts about Hyper-V over SMB and Storage Spaces. In a small environment, management of such a Scale-Out File Server cluster can be simple because you don't have a lot of changes: you set the thing up once and it will work for some time. In larger enterprises, fabric and storage management is a huge topic. With Hyper-V over SMB you don't have to do any zoning or configure iSCSI initiators, but you still have to set the right permissions on the file share. This is where System Center Virtual Machine Manager comes into play.

Virtual Machine Manager allows you not only to manage your iSCSI or Fibre Channel storage appliances via SMI-S, but also to manage your Scale-Out File Server.

First you have to add the Scale-Out File Server to the SCVMM fabric management. You can simply add a resource and choose Add a Storage Device. This will open a wizard where you can not only select SAN or NAS storage, but also a Windows-based file server.

Add Windows-based File Server

Enter the FQDN of your Fileserver Cluster

Enter Fileserver FQDN

This will scan your file server cluster and show you the already existing file shares. You can now match storage classifications with the existing file shares.

File Server Fileshares and Classification

After you have connected your Scale-Out File Server you can now create new File Shares and Storage Spaces directly from the Virtual Machine Manager Console.

Create File Shares

After you have created the file share, you have to add the permissions for the Hyper-V host to the file share. Virtual Machine Manager takes care of that automatically if you add the file share to the Hyper-V host or, if you have a Hyper-V cluster, to the cluster object.

Add File Share to Hyper-V host
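
If you prefer scripting this step, the VMM PowerShell module provides cmdlets for it. A rough sketch, assuming VMM 2012 R2 and example names (share VMShare01, host HyperV01) that you would replace with your own:

Import-Module virtualmachinemanager
# Register an existing, VMM-managed file share to a Hyper-V host
$share  = Get-SCStorageFileShare -Name "VMShare01"
$vmHost = Get-SCVMHost -ComputerName "HyperV01"
Register-SCStorageFileShare -StorageFileShare $share -VMHost $vmHost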

Now you can start using the file shares for placing virtual machines on them. The file share classifications will also be available in the VM clouds.

Cloud Storage Resources

As you can see, System Center Virtual Machine Manager can make your life a lot easier and helps you manage your whole datacenter fabric, from compute and network up to storage. In 2013 I did several presentations on fabric management with System Center Virtual Machine Manager, and two of them are online. You should check out the following posts:

Fabric Management with System Center Virtual Machine Manager (German)

Fabric Management with System Center Virtual Machine Manager at the TechDays Basel (German)



EMC – SMB 3.0 is the Future of Storage

At the moment I am working on a lot of customer cloud deployment projects, and the huge topics right now are networking and storage. On the networking side there is a lot of talk going on, from “small” things like NIC Teaming up to bigger topics like Network Virtualization. On the storage side I think a lot of customers are ready to take a new approach to save money and get a better solution. The main things I talk a lot about are Storage Spaces and Hyper-V over SMB. I have already written a lot about Hyper-V over SMB, which, not only in my opinion, is the future of storage. EMC released a solution overview for their EMC VNX and VNXe solutions which offer SMB 3.0. EMC calls SMB 3.0 “The Future of Storage”.

SMB 3.0 is the Future of Storage

SMB 3.0 in Windows 8 clients and Windows Server 2012 servers is the future of storage protocols. It gives excellent performance with low CPU overhead, plus fault tolerance. Its load balancing/scaling will adjust throughput to available NICs, and it also supports simultaneous access by multiple cluster hosts, with built-in arbitration for data consistency. There's also file-share VSS (RVSS) backup support that facilitates the capture of application-consistent backups on SMB shares. This resiliency, combined with increasing Ethernet speeds, opens up the potential for demanding, mission-critical workloads such as Hyper-V and Microsoft SQL Server to be placed on NAS.

You can read more here: EMC VNX and VNXe with Microsoft SMB 3.0

As I already mentioned, I have deployed SMB 3.0 and Hyper-V over SMB a couple of times, and for me this is absolutely the way to go: no Fibre Channel, no more iSCSI. And it's funny that EMC, the owner of VMware, is calling SMB 3.0 the future of storage. I have to admit the EMC VNX and VNXe solutions look pretty great on paper, and it looks like EMC did a great job implementing SMB 3.0. Unfortunately I could not test a VNX or VNXe yet.

EMC SMB 3 the future of Storage

Btw, make sure you read my other blog posts on SMB 3.0:



SMB Bandwidth Limits

Hyper-V over SMB: SMB Bandwidth Limits

SMB has been a huge topic since Windows Server 2012, and together with the concept of Converged Networking there was one very important feature missing: QoS (Quality of Service) for SMB traffic. With Windows Server 2012 R2 Microsoft addressed that and added a new feature called SMB Bandwidth Limits. SMB Bandwidth Limits allow you to separate different types of SMB traffic and limit them.

There are three default SMB traffic categories:

  • Default – Access to file servers, for example for library storage, such as when System Center Virtual Machine Manager copies files to a Hyper-V server.
  • VirtualMachine – The traffic between the Hyper-V hosts and the storage of the virtual machines.
  • LiveMigration – In Windows Server 2012 R2, Live Migration can use SMB as a transport, so you can also set a limit on Live Migration traffic.

SMB Bandwidth Limits

For example you could limit the Default SMB traffic and the Live Migration traffic and leave the Virtual Machine Storage traffic unlimited.

Enable SMB Bandwidth Limit

To set this up, you first have to enable the feature via Server Manager or Windows PowerShell.

Server Manager:

Enable SMB Bandwidth Limit

PowerShell:

Install-WindowsFeature FS-SMBBW

How to configure SMB Bandwidth Limits

You can configure SMB Bandwidth Limits via Windows PowerShell:

Configure SMB Bandwidth Limit via PowerShell

# Limit Live Migration traffic to 120 MB/s (125829120 bytes per second)
Set-SmbBandwidthLimit -Category LiveMigration -BytesPerSecond 125829120
# List the configured limits
Get-SmbBandwidthLimit
# Remove the limit again
Remove-SmbBandwidthLimit -Category LiveMigration

Source: Graphic from Jose Barreto (Microsoft Corp.)

 



ConnectX-3 Pro NVGRE Offloading RDMA

Hyper-V Network Virtualization: NVGRE Offloading

At the moment I spend a lot of time working with Hyper-V Network Virtualization in Hyper-V, System Center Virtual Machine Manager, and the new Network Virtualization Gateway. I am also creating some architecture design references for hosting providers which are going to use Hyper-V Network Virtualization and SMB as storage. If you are going for any kind of network virtualization (Hyper-V Network Virtualization with NVGRE, or VXLAN), you want to make sure you can offload the encapsulated traffic to the network adapter.

Well, the great news here is that the Mellanox ConnectX-3 Pro not only offers RDMA (RoCE), which is used for SMB Direct; the adapter also offers hardware offloads for NVGRE and VXLAN encapsulated traffic. This is great and should improve the performance of network virtualization dramatically.
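
To check whether a NIC exposes and has enabled these offloads, Windows Server 2012 R2 includes dedicated cmdlets; a quick sketch (the adapter name is an example):

# Show NVGRE/VXLAN encapsulation task offload settings per adapter
Get-NetAdapterEncapsulatedPacketTaskOffload
# Enable the offload on a specific adapter
Enable-NetAdapterEncapsulatedPacketTaskOffload -Name "Ethernet"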

NVGRE Offloading

More information on the Mellanox ConnectX-3 Pro:

ConnectX-3 Pro 10/40/56GbE adapter cards with hardware offload engines to Overlay Networks (“Tunneling”), provide the highest performing and most flexible interconnect solution for PCI Express Gen3 servers used in public and private clouds, enterprise data centers, and high performance computing.

Public and private cloud clustered databases, parallel processing, transactional services, and high-performance embedded I/O applications will achieve significant performance improvements resulting in reduced completion time and lower cost per operation.

Mellanox ConnectX-3 Pro

Benefits:

  • 10/40/56Gb/s connectivity for servers and storage
  • World-class cluster, network, and storage performance
  • Cutting edge performance in virtualized overlay networks (VXLAN and NVGRE)
  • Guaranteed bandwidth and low-latency services
  • I/O consolidation
  • Virtualization acceleration
  • Power efficient
  • Scales to tens-of-thousands of nodes

Key features:

  • 1us MPI ping latency
  • Up to 40/56GbE per port
  • Single- and Dual-Port options available
  • PCI Express 3.0 (up to 8GT/s)
  • CPU offload of transport operations
  • Application offload
  • Precision Clock Synchronization
  • HW Offloads for NVGRE and VXLAN encapsulated traffic
  • End-to-end QoS and congestion control
  • Hardware-based I/O virtualization
  • RoHS-R6

Virtualized Overlay Networks — Infrastructure as a Service (IaaS) cloud demands that data centers host and serve multiple tenants, each with their own isolated network domain over a shared network infrastructure. To achieve maximum efficiency, data center operators are creating overlay networks that carry traffic from individual Virtual Machines (VMs) in encapsulated formats such as NVGRE and VXLAN over a logical “tunnel,” thereby decoupling the workload’s location from its network address. Overlay Network architecture introduces an additional layer of packet processing at the hypervisor level, adding and removing protocol headers for the encapsulated traffic. The new encapsulation prevents many of the traditional “offloading” capabilities (e.g. checksum, TSO) from being performed at the NIC. ConnectX-3 Pro effectively addresses the increasing demand for an overlay network, enabling superior performance by introducing advanced NVGRE and VXLAN hardware offload engines that enable the traditional offloads to be performed on the encapsulated traffic. With ConnectX-3 Pro, data center operators can decouple the overlay network layer from the physical NIC performance, thus achieving native performance in the new network architecture.

I/O Virtualization — ConnectX-3 Pro SR-IOV technology provides dedicated adapter resources and guaranteed isolation and protection for virtual machines (VMs) within the server. I/O virtualization with ConnectX-3 Pro gives data center managers better server utilization while reducing cost, power, and cable complexity.

RDMA over Converged Ethernet — ConnectX-3 Pro utilizing IBTA RoCE technology delivers similar low-latency and high-performance over Ethernet networks. Leveraging Data Center Bridging capabilities, RoCE provides efficient low latency RDMA services over Layer 2 Ethernet. With link-level interoperability in existing Ethernet infrastructure, Network Administrators can leverage existing data center fabric management solutions.

Sockets Acceleration — Applications utilizing TCP/UDP/IP transport can achieve industry-leading throughput over 10/40/56GbE. The hardware-based stateless offload engines in ConnectX-3 Pro reduce the CPU overhead of IP packet transport. Socket acceleration software further increases performance for latency sensitive applications.

Storage Acceleration — A consolidated compute and storage network achieves significant cost-performance advantages over multi-fabric networks. Standard block and file access protocols can leverage Ethernet or RDMA for high-performance storage access.

Software Support – All Mellanox adapter cards are supported by Windows, Linux distributions, VMware, FreeBSD, Ubuntu, and Citrix XenServer. ConnectX-3 Pro adapters support OpenFabrics-based RDMA protocols and software and are compatible with configuration and management tools from OEMs and operating system vendors.

 



SMB Scale-Out File Server

Hyper-V over SMB: Scale-Out File Server and Storage Spaces

On some community pages my blog post started some discussions about why you should use SMB 3.0 and why you should use Windows Server as a storage solution. Let me be clear here: you don't need Windows Server as storage to make use of the Hyper-V over SMB 3.0 scenario; you can use storage from vendors like NetApp or EMC as well. But in my opinion you can get a huge benefit by using Windows Server in different scenarios.

  • First, you can use Windows Server together with Storage Spaces, which offers you a really great, scalable enterprise storage solution at low cost.
  • Second, you can use Windows Server to mask your existing storage by building a layer between the Hyper-V hosts and your storage. This way you can easily extend your storage, even with other vendors.

At the moment there are not a lot of vendors out there which offer SMB 3.0 in their storage solutions. EMC was one of the first supporting SMB 3.0, and with ONTAP 8.2 NetApp is now supporting SMB 3.0 as well. But if you want to build an SMB layer for a storage system which does not support SMB 3.0, to mask your storage so you can mix it with different vendors, or to use it with Windows Server 2012 Storage Spaces, the solution is the Scale-Out File Server cluster. Microsoft has offered file server clusters for a while now, but since those were active/passive clusters, they were not really a great solution for a Hyper-V storage environment (even if a lot of small iSCSI storage boxes are active/passive as well).

Basically, the Scale-Out File Server lets you cluster up to 8 file servers which all share CSVs (Cluster Shared Volumes), as you know them from Hyper-V hosts, and present SMB shares which are created on the CSV volumes. And the great thing about it: every node can offer the same share, which makes this an active/active solution with up to 8 nodes. Together with SMB Transparent Failover, the Hyper-V host does not really get any storage downtime if one of the SOFS nodes fails.
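
To give an idea of how simple the entry point is, here is a minimal sketch, assuming an existing failover cluster with a CSV and example names (SOFS01, domain CONTOSO, hosts HyperV01/HyperV02) that you would replace:

# Create the Scale-Out File Server role on an existing failover cluster
Add-ClusterScaleOutFileServerRole -Name SOFS01
# Create a share on a CSV and grant the Hyper-V host computer accounts full access
New-Item -Path C:\ClusterStorage\Volume1\Shares\VMs -ItemType Directory
New-SmbShare -Name VMs -Path C:\ClusterStorage\Volume1\Shares\VMs -FullAccess 'CONTOSO\HyperV01$','CONTOSO\HyperV02$'
# Mirror the share permissions to the NTFS ACL
Set-SmbPathAcl -ShareName VMs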

SMB Scale-Out File Server

For the storage guys out there: think of the cluster nodes as your storage controllers. Most of the time you will have two controllers for failover and a little bit of manual load balancing, where one LUN is offered by controller 1 and the other LUN by controller 2. With the Scale-Out File Server you don't really have that problem, since the SMB share is offered on all hosts at the same time, with up to 8 “controllers”. With Windows Server 2012, one Hyper-V host connected to one of the SOFS nodes and used multiple paths to this node via SMB Multichannel; the other Hyper-V host connected automatically to the second SOFS node, so both nodes were active at the same time. In case one of the SOFS nodes dies, the Hyper-V host fails over to the other SOFS node without any downtime for the Hyper-V virtual machines.

In Windows Server 2012 R2, Microsoft worked really hard to make this scenario even better: a Hyper-V host can now be connected to multiple SOFS nodes at the same time, which means that VM1 and VM2 running on the same Hyper-V host can be served by two different SOFS nodes.

Advantages of the Scale-Out File Server

  • Mask your storage and use different vendors
  • Scale up to 8 nodes (controllers)
  • Active/Active configuration
  • Transparent Failover
  • Supporting features like SMB Multichannel and SMB Direct
  • Easy entry point with SMB shares
  • Easy configuration: Hyper-V host and cluster objects just need access to the shares
  • Same Windows Server Failover Cluster Technology with the same management tools

Storage Spaces

As already mentioned, you can use your existing storage appliance as storage for your Scale-Out File Server CSVs, or you can use Windows Server Storage Spaces, which allows you to build a great storage solution for a lot less money. Again, the Scale-Out File Server cluster and Windows Server Storage Spaces are two separate things: you don't need a SOFS cluster for Storage Spaces, and you don't need Storage Spaces for a SOFS cluster, but of course both solutions work absolutely great together.

Windows Server Storage Spaces vs Traditional Storage

Microsoft first released its Software Defined Storage solution, called Storage Spaces, in Windows Server 2012. It basically allows you to build your own storage solution based on simple JBOD hardware. Storage Spaces is a really cost-effective storage solution which allows companies to save up to 75% of storage costs compared to traditional SAN storage. It allows you to pool disks connected via SAS (in Windows 8 and Windows 8.1, USB works as well for home users) and create different virtual disks (not VHDs) on these storage pools. The virtual disks, also called storage spaces, can have different resiliency levels like Simple, Mirror, or Parity, and you can also create multiple disks on one storage pool and even use thin provisioning. This sounds a lot like a traditional storage appliance, right? True, this is not something totally different; it is something storage vendors have done for a long time. But of course you pay a lot of money for the black box the storage vendors offer you. With Storage Spaces, Microsoft allows you to build your own storage on commodity hardware, which will save you a lot of money.
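
To illustrate the basic workflow, here is a minimal sketch (the pool and disk names are examples, and it assumes a machine with unused, poolable disks):

# Find disks that are available for pooling
$disks = Get-PhysicalDisk -CanPool $true
# Create a storage pool from them
New-StoragePool -FriendlyName Pool01 -StorageSubSystemFriendlyName "Storage Spaces*" -PhysicalDisks $disks
# Create a thin-provisioned, mirrored virtual disk (a "storage space") on the pool
New-VirtualDisk -StoragePoolFriendlyName Pool01 -FriendlyName VDisk01 -ResiliencySettingName Mirror -Size 500GB -ProvisioningType Thin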

Storage Space

This is not just a “usable solution”; it comes with some high-end storage features, which make Storage Spaces and the Windows file server a perfect storage solution at low cost.

  • Windows Server Storage Spaces let you use cheap hardware
  • Offers you different types of resiliency, like Simple (Stripe), Mirror or Parity (also 3-way Mirror and Parity)
  • Offers you thin-provisioning
  • Windows Server File Server allows you to share the Storage via SMB, iSCSI or NFS.
  • Read-Cache – Windows Server CSV Cache offers you Memory based Read-Cache (up to 80% in Windows Server 2012 R2)
  • Continuous availability – Storage Pools and Disks can be clustered with the Microsoft Failover Cluster so if one server goes down the virtual disks and file shares are still available.
  • SMB copy offload – Offloading copy actions to the storage.
  • Snapshots – Create Snapshots and  clone virtual disks on a storage pool.
  • Flexible resiliency options – In Windows Server 2012 you could create a Mirror Spaces with a two-way or three-way mirror, a Parity Space with a single parity and a Simple Space with no data resiliency. New in R2 parity spaces can now be used in clustered pools and there is also a new dual parity option. (enhanced in 2012 R2)
  • Enhanced Rebuilding – Speed of rebuilding of failed disks is enhanced. (enhanced in 2012 R2)
  • Storage Tiering – Windows Server 2012 R2 allows you to use different kind of disks and automatically moves “hot-data” from SAS disks to fast SSD storage. (new in 2012 R2)
  • Write-Back Cache – This feature allows data to be written to SSD first and moves later to the slower SAS tier. (new in 2012 R2)
  • Data Deduplication – Data Deduplication was already included in Windows Server 2012 but it is enhanced in Windows Server 2012 R2, and allows you to use it together with Cluster Shared Volumes (CSV) and supports VDI virtual machines. (enhanced in 2012 R2)

You can get more information about Storage Spaces in Windows Server 2012 R2 in my blog post: What’s new in Windows Server 2012 R2 Storage Spaces

Combine Windows Server Storage Spaces and the Scale-Out File Server Cluster

As mentioned, these two technologies do not require each other, but if you combine them you get a really great solution. You can build your own storage based on Windows Server, which not only allows you to share storage via SMB 3.0, but also via NFS or iSCSI.

Windows Server 2012 Storage Spaces and File Server

A lot of the concerns I have heard were about the scale of Storage Spaces. But as far as I can see, scale is absolutely no problem for Windows Server Storage Spaces. First of all, you can build up to 8 nodes in a single cluster, which basically means you create an 8-node active/active solution. With SMB Multichannel you can use multiple NICs, for example 10GbE, InfiniBand, or even faster network adapters. You can also make use of RDMA, which brings latency down to a minimum.

Scale Windows Server Storage Spaces

To scale this even bigger you can go two ways: you could set up a new Scale-Out File Server cluster and create new file shares where virtual machines can be placed, or you could extend the existing cluster with more servers and more shared SAS disk chassis, which don't have to be connected to the existing servers. This is possible because of features like CSV redirected mode: hosts can access disks from other hosts even if they are not connected directly via SAS; instead, the node uses the Ethernet connection between the hosts.

Scale Windows Server Storage Spaces 2

New features and enhancements in Windows Server 2012 R2 and System Center 2012 R2

With the 2012 R2 releases of Windows Server and System Center Microsoft made some great enhancements to Storage Spaces, Scale-Out File Server, SMB, Hyper-V and System Center. So if you have the chance to work with R2 make sure you check the following:

  • Flexible resiliency options – In Windows Server 2012 you could create a Mirror Spaces with a two-way or three-way mirror, a Parity Space with a single parity and a Simple Space with no data resiliency. New in R2 parity spaces can now be used in clustered pools and there is also a new dual parity option. (enhanced in 2012 R2)
  • Enhanced Rebuilding – Speed of rebuilding of failed disks is enhanced. (enhanced in 2012 R2)
  • Storage Tiering – Windows Server 2012 R2 allows you to use different kind of disks and automatically moves “hot-data” from SAS disks to fast SSD storage. (new in 2012 R2)
  • Write-Back Cache – This feature allows data to be written to SSD first and moves later to the slower SAS tier. (new in 2012 R2)
  • Data Deduplication – Data Deduplication was already included in Windows Server 2012 but it is enhanced in Windows Server 2012 R2, and allows you to use it together with Cluster Shared Volumes (CSV) and supports VDI virtual machines. (enhanced in 2012 R2)
  • Read-Cache – Windows Server CSV Cache offers you Memory based Read-Cache (up to 80% in Windows Server 2012 R2)
  • Management – Management of Hyper-V and Scale-Out File Servers as well as Storage Spaces right in System Center 2012 R2 Virtual Machine Manager.
  • Deployment – Deploy new Scale-Out File Server Clusters with and without Storage Spaces directly from System Center 2012 R2 Virtual Machine Manager via Bare-Metal Deployment.
  • Rebalancing of Scale-Out File Server clients – SMB client connections are tracked per file share (instead of per server), and clients are then redirected to the cluster node with the best access to the volume used by the file share. This improves efficiency by reducing redirection traffic between file server nodes.
  • Improved performance of SMB Direct (SMB over RDMA) – Improves performance for small I/O workloads by increasing efficiency when hosting workloads with small I/Os.
  • SMB event messages -SMB events now contain more detailed and helpful information. This makes troubleshooting easier and reduces the need to capture network traces or enable more detailed diagnostic event logging.
  • Shared VHDX files – Simplifies the creation of guest clusters by using shared VHDX files for shared storage inside the virtual machines. This also masks the storage for customers if you are a service provider.
  • Hyper-V Live Migration over SMB – Enables you to perform a live migration of virtual machines by using SMB 3.0 as a transport. This allows you to take advantage of key SMB features, such as SMB Direct and SMB Multichannel, by providing high speed migration with low CPU utilization.
  • SMB bandwidth management – Enables you to configure SMB bandwidth limits to control different SMB traffic types. There are three SMB traffic types: default, live migration, and virtual machine.
  • Multiple SMB instances on a Scale-Out File Server – Provides an additional instance on each cluster node in Scale-Out File Servers specifically for CSV traffic. A default instance can handle incoming traffic from SMB clients that are accessing regular file shares, while another instance only handles inter-node CSV traffic.

(Source: TechNet: What’s New for SMB in Windows Server 2012 R2)

I hope this blog post helps you understand a little bit more about the Scale-Out File Server and Storage Spaces, and how you can create a great storage solution for your cloud environment.

Btw the pictures and information are taken from people like Bryan Matthew (Microsoft), Jose Barreto (Microsoft) and Jeff Woolsey (Microsoft).