Tag: Virtualization


E2EVC Copenhagen

Speaking at E2EVC 2015 Berlin

Last year I spoke at the Experts 2 Experts Virtualization Conference (E2EVC) in Barcelona and Brussels, and I am proud to announce that I will speak at E2EVC 2015 in Berlin next week, from 12-14 June. Together with Michael Ruefli (Microsoft MVP for Cloud and Datacenter Management) I will speak about the latest announcements in the Microsoft cloud and datacenter evolution, covering topics like Azure Stack, Nano Server, Windows Server 2016, System Center 2016, Hyper-V and Microsoft Azure.

The E2EVC Virtualization Conference is a non-commercial virtualization community event. The main goal of E2EVC is to bring the best virtualization experts together to exchange knowledge and to establish new connections. E2EVC is a weekend crammed with presentations, master classes and discussions delivered by both virtualization vendors' product teams and independent experts. I am happy to be part of the community and to listen to other industry-leading experts. Hopefully I will see you in Berlin!




Cisco UCS Hardware

Cisco UCS supports RoCE for Microsoft SMB Direct

As you may know, we use SMB as the storage protocol for several Hyper-V deployments with Scale-Out File Server and Storage Spaces, which adds a lot of value to a Hyper-V deployment. To boost performance, Microsoft uses RDMA, also known as SMB Direct, to accelerate storage network performance.

RDMA over Converged Ethernet (RoCE) allows direct memory access over an Ethernet network. RoCE is a link layer protocol, and hence allows communication between any two hosts in the same Ethernet broadcast domain. RoCE delivers superior performance compared to traditional network socket implementations because of lower latency, lower CPU utilization and higher utilization of network bandwidth. Windows Server 2012 and later versions use RDMA to accelerate and improve the performance of SMB file sharing traffic and Live Migration. If you want to know more about RDMA or SMB Direct, check out my blog post: Hyper-V over SMB: SMB Direct
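
If you want to verify on the Windows side that your adapters are RDMA capable and that SMB Direct is actually being used, a quick PowerShell check helps. This is a minimal sketch, run on the Hyper-V or file server host; the exact output properties can vary slightly between Windows versions:

```powershell
# List network adapters and whether RDMA is enabled on them
Get-NetAdapterRdma

# Show the interfaces the SMB client can use, including RDMA capability
Get-SmbClientNetworkInterface

# After accessing an SMB share, verify which connections were established
# and whether they are RDMA capable
Get-SmbMultichannelConnection
```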

With Cisco UCS Manager Release 2.2(4), Cisco finally supports RoCE for SMB Direct. UCS Manager sends the additional configuration information to the adapter while creating or modifying an Ethernet adapter policy.

Guidelines and Limitations for SMB Direct with RoCE

  • SMB Direct with RoCE is supported only on Windows Server 2012 R2.
  • SMB Direct with RoCE is supported only with Cisco UCS VIC 1340 and 1380 adapters.
  • Cisco UCS Manager does not support more than 4 RoCE-enabled vNICs per adapter.
  • Cisco UCS Manager does not support RoCE with NVGRE, VXLAN, NetFlow, VMQ, or usNIC.
  • You cannot use Windows Server NIC Teaming together with RDMA-enabled adapters in Windows Server 2012 and Windows Server 2012 R2, or you will lose the RDMA capability on these adapters.
  • Maximum number of queue pairs per adapter is 8192.
  • Maximum number of memory regions per adapter is 524288.
  • If you do not disable RoCE before downgrading Cisco UCS Manager from Release 2.2(4), the downgrade will fail.

Check out my blog posts about Hyper-V over SMB.



Cisco UCS C200 M2 with Windows Server 2008 R2 and Windows Server 8 #HyperV

Cisco UCS and Hyper-V: Enable Stateless Offloads with NVGRE

As I already mentioned, I have done several Hyper-V and Microsoft Windows Server projects with Cisco UCS. With Cisco UCS you can now configure stateless offloads for NVGRE traffic, which is needed for Hyper-V Network Virtualization.

Cisco UCS Manager supports stateless offloads with NVGRE only with Cisco UCS VIC 1340 and/or Cisco UCS VIC 1380 adapters that are installed in servers running the Windows Server 2012 R2 operating system.

To use this, you have to create an Ethernet adapter policy that enables stateless offloads with NVGRE, and set the following values in the Resources area:

  • Transmit Queues = 1
  • Receive Queues = n (up to 8)
  • Completion Queues = # of Transmit Queues + # of Receive Queues
  • Interrupts = # Completion Queues + 2
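
For example, with 1 transmit queue and 8 receive queues, you end up with 9 completion queues (1 + 8) and 11 interrupts (9 + 2).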

In the Options area, set the following:

  • Network Virtualization using Generic Routing Encapsulation = Enabled
  • Interrupt Mode = Msi-X

Also make sure you have installed eNIC driver version 3.0.0.8 or later.
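
To check on the Windows Server 2012 R2 host whether the NVGRE task offload is available and enabled, you can use the built-in NetAdapter cmdlets. A minimal sketch; the adapter name is just an example:

```powershell
# Show which adapters support encapsulated (NVGRE) packet task offload
Get-NetAdapterEncapsulatedPacketTaskOffload

# Enable the offload on a specific adapter (adapter name is an example)
Enable-NetAdapterEncapsulatedPacketTaskOffload -Name "Ethernet 2"
```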

For more information, see http://www.cisco.com/c/en/us/td/docs/unified_computing/ucs/sw/vic_drivers/install/Windows/b_Cisco_VIC_Drivers_for_Windows_Installation_Guide.html.



Hyper-V

Hyper-V vNext is going to support nested virtualization

I already wrote a blog post about some of the new features which are coming with the next version of Hyper-V. This week was the Microsoft Build conference, where Microsoft talked a lot about new stuff for developers and the future of Windows, Azure, Office and so on. Today I found a very interesting email in my inbox from Ronald Beekelaar (Microsoft MVP for Hyper-V), who had the chance to attend a Build session where Taylor Brown and Mathew John were talking about Windows Containers: What, Why and How. In this session there was a quick side note that Windows Server vNext Hyper-V will support nested virtualization.

Until today, a Hyper-V server could only run virtual machines when it was running on physical hardware. This was no problem in production, but when you wanted to do some demos or training, you needed a lot of hardware to show what is possible with Hyper-V. With nested virtualization you can run Hyper-V inside a virtual machine and, for example, build a demo and lab environment on your notebook, including Hyper-V clusters.
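
Microsoft has not published the configuration details yet, but based on the session it will presumably be a per-VM setting. As a purely hypothetical sketch of what enabling it could look like (the VM name and the exact cmdlet surface are assumptions on my side):

```powershell
# Hypothetical: expose the hardware virtualization extensions to a VM
# so the Hyper-V role can be installed inside it (VM name is an example)
Set-VMProcessor -VMName "HV-Lab01" -ExposeVirtualizationExtensions $true

# Hypothetical: allow MAC address spoofing so nested guests get network access
Set-VMNetworkAdapter -VMName "HV-Lab01" -MacAddressSpoofing On
```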

While this might not be a big deal for some of you, it is a huge deal for everyone who does demos or training on Hyper-V.


You can watch the session here on Microsoft Channel9.



NIC Teaming

Overview of Windows Server and Hyper-V 2012 R2 NIC Teaming and SMB Multichannel

I know this is nothing new, but since I had to mention the whitepaper on NIC Teaming, the use of SMB Multichannel, and the configuration with System Center Virtual Machine Manager in a couple of meetings, I want to make sure you have an overview on my blog.

NIC Teaming

Windows Server NIC Teaming was introduced in Windows Server 2012 (codename Windows Server 8). NIC Teaming, also known as Load Balancing/Failover (LBFO), allows multiple network adapters to be placed into a team for the purposes of bandwidth aggregation and/or traffic failover, to maintain connectivity in the event of a network component failure.

NIC Teaming Recommendation

For the design, the default and recommended configuration is NIC Teaming with Switch Independent mode and the Dynamic load balancing algorithm; in some scenarios, where you have the right switches, you can use LACP and Dynamic.
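
Creating such a team takes a single PowerShell cmdlet. A minimal sketch, assuming two physical adapters named NIC1 and NIC2 (team and adapter names are examples):

```powershell
# Recommended default: Switch Independent teaming with the Dynamic algorithm
New-NetLbfoTeam -Name "Team1" -TeamMembers "NIC1","NIC2" `
    -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic

# Alternative if your switches support LACP:
# New-NetLbfoTeam -Name "Team1" -TeamMembers "NIC1","NIC2" `
#     -TeamingMode Lacp -LoadBalancingAlgorithm Dynamic
```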

Download Windows Server 2012 R2 NIC Teaming (LBFO) Deployment and Management Whitepaper

This guide describes how to deploy and manage NIC Teaming with Windows Server 2012 R2.

You can find the Whitepaper on Windows Server 2012 R2 NIC Teaming (LBFO) Deployment and Management in the Microsoft Download Center.

SMB Multichannel

Hyper-V over SMB Multichannel

If you use Hyper-V over SMB, you can use SMB Multichannel as an even better way to distribute SMB 3.0 traffic across different network adapters, or you can use a mix of both NIC Teaming and SMB Multichannel. Check out my blog posts about Hyper-V over SMB: SMB Multichannel, SMB Direct (RDMA) and Scale-Out File Server and Storage Spaces.
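
SMB Multichannel is enabled by default; if you want to verify it or pin SMB traffic to dedicated interfaces, here is a short PowerShell sketch (server and interface names are examples):

```powershell
# Verify SMB Multichannel is enabled on the client
Get-SmbClientConfiguration | Select-Object EnableMultiChannel

# Show which connections SMB Multichannel has established
Get-SmbMultichannelConnection

# Optionally restrict SMB traffic to specific interfaces for a given file server
New-SmbMultichannelConstraint -ServerName "SOFS01" -InterfaceAlias "SMB1","SMB2"
```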

Configuration with System Center Virtual Machine Manager

Logical Switch

Some months back I also wrote some blog posts about the configuration of Hyper-V converged networking with System Center Virtual Machine Manager. This guide helps you understand how to deploy NIC Teaming with System Center Virtual Machine Manager using the Logical Switch on Hyper-V hosts.



Green Cloud Azure Pack

Green Cloud based on Windows Server Hyper-V and Windows Azure Pack

If you want to host some IaaS workloads or build a hybrid cloud environment connected to a service provider in Switzerland, you probably want to check out the Green Hyper-V ServerCloud.

Based on Hyper-V technology from Windows Server 2012 R2, Green virtual servers provide you with a powerful, highly available server platform for your applications. The virtual servers can be seamlessly integrated into your existing IT environment using a site-to-site VPN.

Green also offers its own image container function in Windows Azure Pack, which allows you to quickly and smoothly migrate your servers to the Hyper-V ServerCloud, including configuration and software. Upload your VHDX and ISO images and save valuable time on reinstallation and setup.

Options and the ability to gradually expand the system pave the way for future expansion. From individual applications to virtualization of entire IT areas, Server Cloud offers enough scope for your business.

Green Server Cloud

Some of the cool stuff Green offers in their cloud solution:

  • Cloud based on Windows Server 2012 R2 Hyper-V and Windows Azure Pack
  • Powerful virtual server packages with up to 16 CPU cores and 128 GB RAM
  • Windows Server 2008 R2 and Windows Server 2012 R2 Images
  • Linux Images (CentOS and more…)
  • Bring your own Server and ISO Images
  • Create VM checkpoints (snapshots) right from the Tenant Portal
  • Seamless expansion of local infrastructure through network virtualization and free-of-charge site-to-site VPN
  • Local service and support in three local languages
  • High Security standards implemented in the Green Datacenter
  • Server Location in Switzerland
  • Hyper-V Replica support – replicate your Hyper-V virtual machines to the Green Cloud for DR scenarios
  • 30-day free trial

Green Business Connectivity and Security

Green Cloud Datacenters

Green is using its own datacenters to host the Green Cloud. The Green Cloud is hosted in their Tier 4 and Tier 3 datacenters for maximum security. The newest green.ch data center offers all the benefits of a state-of-the-art data center. It is situated in an excellent location, is the only Swiss data center that was awarded a Tier 4 design certification, and was designed for energy-efficient operation.

The Lupfig site, located west of Zurich, is easy to access. It is far away from hazardous zones, yet centrally located within the Zurich-Basel-Bern business triangle.

From the very beginning, greenDatacenter Zurich West was designed for the highest availability. All systems required for operation are duplicated. Multiple feeds are used for the power and emergency power supply and for the connection to the data network, and these feeds are even separately routed within the data center. Four security perimeters protect the data center against unauthorized access. Security measures include biometric access systems.

The Swiss Federal Office of Energy awarded greenDatacenter Zurich West the Watt d’Or 2013 for exemplary energy efficiency in the buildings and space category.

Green Cloud Technology

Green Cloud Image Container

As already mentioned, Green is using the Microsoft Cloud Platform stack with Windows Azure Pack and Windows Server 2012 R2 Hyper-V for their cloud offering. By using Hyper-V Network Virtualization and site-to-site VPN, customers can easily connect their local networks to the Green Cloud and build a hybrid cloud scenario. Green has also extended their offering beyond the standard WAP features by adding Hyper-V Replica support, the option to create checkpoints (snapshots) of virtual machines, and the possibility to bring your own server images and ISO images to the Green Cloud.

Green Cloud Checkpoints

So if you are interested in what Green offers, check out the 30-day free trial offering.




VMware ESXi 6.0 Enable SSH Service

Enable SSH on VMware ESXi 6.0 via vSphere Client

In another blog post I described how you can enable SSH on a VMware ESXi 6.0 host directly on the host itself. In this blog post I show you how to enable SSH on your VMware ESXi 6.0 host via the VMware vSphere Client.

Open the VMware vSphere Client, connect to your ESXi server, and open the Configuration tab.

VMware ESXi 6.0 Configuration

On the Configuration tab, choose Security Profile.

VMware ESXi 6.0 Security Profile

Open the Properties dialog so you can see the Security Profile properties and the remote access services. Here you can enable the SSH server on the VMware ESXi host.

VMware ESXi 6.0 Enable SSH Service

If you have issues, check the firewall settings on your VMware ESXi host.

VMware ESXi 6.0 Firewall SSH Port

If you want to enable SSH directly on your VMware ESXi host, check out the following post: Enable SSH on VMware ESXi 6.0
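
If you prefer scripting over clicking through the vSphere Client, you can also enable the SSH service with VMware PowerCLI. A minimal sketch, assuming PowerCLI is installed; the host name is an example:

```powershell
# Connect to the ESXi host (host name is an example; you will be
# prompted for credentials)
Connect-VIServer -Server "esxi01.lab.local"

# Find the SSH service (key "TSM-SSH") and start it
Get-VMHostService -VMHost "esxi01.lab.local" |
    Where-Object { $_.Key -eq "TSM-SSH" } |
    Start-VMHostService

# Optionally start the SSH service automatically with the host
Get-VMHostService -VMHost "esxi01.lab.local" |
    Where-Object { $_.Key -eq "TSM-SSH" } |
    Set-VMHostService -Policy "On"
```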