

Tag: System Center 2012

SCVMM Bare-Metal Fails

Add drivers to SCVMM Bare-Metal WinPE Image

A long time ago I wrote a blog post on how you can use System Center Virtual Machine Manager Bare-Metal Deployment to deploy new Hyper-V hosts. Normally this works fine, but if you have newer hardware, your Windows Server image may not include the network adapter drivers. This isn't a huge problem, since you can mount the VHD or VHDX file for the Windows Server Hyper-V image and insert the drivers. But if you forget to also update the WinPE file used by Virtual Machine Manager, your deployment will fail: since the WinPE image has no network drivers included, it won't be able to connect to the VMM Library or any other server.

You will end up with the following error, and your deployment will time out on the following screen:

“Synchronizing Time with Server”

SCVMM Bare-Metal Fails

If you check the IP configuration with ipconfig, you will see that no network adapters are available. This means you have to update your SCVMM WinPE image.

First of all you have to copy the SCVMM WinPE image. You can find this WIM file on your Windows Deployment Services (WDS) PXE server in the following location: E:\RemoteInstall\DCMgr\Boot\Windows\Images (your setup probably uses another drive letter).

WDS SCVMM Boot WIM

I copied this file to the C:\temp folder on my System Center Virtual Machine Manager server. I also copied the extracted drivers to the C:\Drivers folder.

After you have done this, you can use Greg Casanza's (Microsoft) SCVMM Windows PE driver injection script, which adds the drivers to the WinPE image (Boot.wim) and publishes the new Boot.wim to all your WDS servers. I also rewrote the script so that it uses drivers from a folder instead of drivers in the VMM Library.

Update SCVMM WinPE

This will add the drivers to the Boot.wim file and publish it to the WDS servers.
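If you prefer to do these steps manually, a rough sketch with the DISM PowerShell cmdlets and the VMM cmdlet Publish-SCWindowsPE could look like this (the mount directory is an assumption, the other paths are the example folders from above):

    # Mount the copied SCVMM WinPE image (Boot.wim) to a local mount directory
    New-Item -ItemType Directory -Path 'C:\temp\mount' -Force | Out-Null
    Mount-WindowsImage -ImagePath 'C:\temp\Boot.wim' -Index 1 -Path 'C:\temp\mount'

    # Inject all network adapter drivers from the extracted driver folder
    Add-WindowsDriver -Path 'C:\temp\mount' -Driver 'C:\Drivers' -Recurse

    # Commit the changes and unmount the WinPE image
    Dismount-WindowsImage -Path 'C:\temp\mount' -Save

    # Hand the updated WinPE image back to VMM, which publishes it to the WDS servers
    Import-Module virtualmachinemanager
    Publish-SCWindowsPE -Path 'C:\temp\Boot.wim'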

Update WDS Server

After this is done the Boot.wim will work with your new drivers.
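And don't forget the drivers in the Windows Server VHD or VHDX itself, as mentioned at the beginning. A minimal sketch of that offline injection, assuming the Hyper-V and DISM PowerShell modules are available (paths and the drive letter are examples):

    # Mount the VHDX that holds the Windows Server Hyper-V image (example path)
    Mount-VHD -Path 'C:\temp\WS2012R2-HyperV.vhdx'

    # Assuming the Windows volume of the mounted VHDX shows up as V:,
    # inject all drivers from the extracted driver folder into the offline image
    Add-WindowsDriver -Path 'V:\' -Driver 'C:\Drivers' -Recurse

    # Unmount the VHDX again so VMM can use it for Bare-Metal Deployment
    Dismount-VHD -Path 'C:\temp\WS2012R2-HyperV.vhdx'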




ConnectX-3 Pro NVGRE Offloading RDMA

Hyper-V Network Virtualization: NVGRE Offloading

At the moment I am spending a lot of time working with Hyper-V Network Virtualization in Hyper-V, in System Center Virtual Machine Manager, and with the new Network Virtualization Gateway. I am also creating some architecture design references for hosting providers which are going to use Hyper-V Network Virtualization and SMB storage. If you are going for any kind of network virtualization (Hyper-V Network Virtualization with NVGRE, or VXLAN), you want to make sure you can offload the NVGRE traffic to the network adapter.

Well, the great news here is that the Mellanox ConnectX-3 Pro not only offers RDMA (RoCE), which is used for SMB Direct, but also offers hardware offloads for NVGRE and VXLAN encapsulated traffic. This is great and should improve the performance of Network Virtualization dramatically.
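If you want to check whether NVGRE task offload is actually available and enabled on a Hyper-V host, the NetAdapter cmdlets in Windows Server 2012 R2 can help. A quick sketch (the adapter name is an example):

    # Show the NVGRE (encapsulated packet) task offload state of all adapters
    Get-NetAdapterEncapsulatedPacketTaskOffload

    # Enable the offload on a specific adapter (adapter name is an example)
    Enable-NetAdapterEncapsulatedPacketTaskOffload -Name 'Ethernet 1'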

NVGRE Offloading

More information on the Mellanox ConnectX-3 Pro:

ConnectX-3 Pro 10/40/56GbE adapter cards with hardware offload engines for Overlay Networks (“Tunneling”) provide the highest performing and most flexible interconnect solution for PCI Express Gen3 servers used in public and private clouds, enterprise data centers, and high performance computing.

Public and private cloud clustered databases, parallel processing, transactional services, and high-performance embedded I/O applications will achieve significant performance improvements resulting in reduced completion time and lower cost per operation.

Mellanox ConnectX-3 Pro

Benefits:

  • 10/40/56Gb/s connectivity for servers and storage
  • World-class cluster, network, and storage performance
  • Cutting edge performance in virtualized overlay networks (VXLAN and NVGRE)
  • Guaranteed bandwidth and low-latency services
  • I/O consolidation
  • Virtualization acceleration
  • Power efficient
  • Scales to tens-of-thousands of nodes

Key features:

  • 1us MPI ping latency
  • Up to 40/56GbE per port
  • Single- and Dual-Port options available
  • PCI Express 3.0 (up to 8GT/s)
  • CPU offload of transport operations
  • Application offload
  • Precision Clock Synchronization
  • HW Offloads for NVGRE and VXLAN encapsulated traffic
  • End-to-end QoS and congestion control
  • Hardware-based I/O virtualization
  • RoHS-R6

Virtualized Overlay Networks — Infrastructure as a Service (IaaS) cloud demands that data centers host and serve multiple tenants, each with their own isolated network domain over a shared network infrastructure. To achieve maximum efficiency, data center operators are creating overlay networks that carry traffic from individual Virtual Machines (VMs) in encapsulated formats such as NVGRE and VXLAN over a logical “tunnel,” thereby decoupling the workload’s location from its network address. Overlay Network architecture introduces an additional layer of packet processing at the hypervisor level, adding and removing protocol headers for the encapsulated traffic. The new encapsulation prevents many of the traditional “offloading” capabilities (e.g. checksum, TSO) from being performed at the NIC. ConnectX-3 Pro effectively addresses the increasing demand for an overlay network, enabling superior performance by introducing advanced NVGRE and VXLAN hardware offload engines that enable the traditional offloads to be performed on the encapsulated traffic. With ConnectX-3 Pro, data center operators can decouple the overlay network layer from the physical NIC performance, thus achieving native performance in the new network architecture.

I/O Virtualization — ConnectX-3 Pro SR-IOV technology provides dedicated adapter resources and guaranteed isolation and protection for virtual machines (VMs) within the server. I/O virtualization with ConnectX-3 Pro gives data center managers better server utilization while reducing cost, power, and cable complexity.

RDMA over Converged Ethernet — ConnectX-3 Pro utilizing IBTA RoCE technology delivers similar low-latency and high-performance over Ethernet networks. Leveraging Data Center Bridging capabilities, RoCE provides efficient low latency RDMA services over Layer 2 Ethernet. With link-level interoperability in existing Ethernet infrastructure, Network Administrators can leverage existing data center fabric management solutions.

Sockets Acceleration — Applications utilizing TCP/UDP/IP transport can achieve industry-leading throughput over 10/40/56GbE. The hardware-based stateless offload engines in ConnectX-3 Pro reduce the CPU overhead of IP packet transport. Socket acceleration software further increases performance for latency sensitive applications.

Storage Acceleration — A consolidated compute and storage network achieves significant cost-performance advantages over multi-fabric networks. Standard block and file access protocols can leverage Ethernet or RDMA for high-performance storage access.

Software Support – All Mellanox adapter cards are supported by Windows, Linux distributions, VMware, FreeBSD, Ubuntu, and Citrix XenServer. ConnectX-3 Pro adapters support OpenFabrics-based RDMA protocols and software and are compatible with configuration and management tools from OEMs and operating system vendors.




Export Templates from Virtual Machine Manager Settings

Export and Import Virtual Machine Manager Templates

If you are working with System Center Virtual Machine Manager, you may want to export and import your existing VM or Service Templates. I have a customer scenario with two VMM installations. They are using System Center Virtual Machine Manager, Orchestrator, and Service Manager to deploy new customer environments for their premium SaaS (Software as a Service) hosting solution, where they deploy Lync, Exchange and SharePoint fully automated. Here we have a development environment where they test new System Center Orchestrator Runbooks and new Templates in Virtual Machine Manager. After they have a working Runbook with working Templates, they export the templates from the dev VMM and import them into the production environment.
Because I was surprised how well this works, and I think not a lot of people know about this feature, I created this short step-by-step guide.

Export Templates from Virtual Machine Manager

First select the templates you want to export and click the Export button on the ribbon. You can also select multiple templates to export them at once.

Export Templates from Virtual Machine Manager

You can then configure the export with a location and an optional password.

Export Templates from Virtual Machine Manager Settings


You can also select which physical resources should be exported with the template. For example, if you are using the same VHD or VHDX for multiple templates, you may want to export this resource only once to save some space.

Export Templates from Virtual Machine Manager physical resources

The export will look something like this. The XML files are the templates with their configurations, and the folders contain the physical resources such as VHDs, XML files, and other resources.

Exported Templates from Virtual Machine Manager

Import Templates in Virtual Machine Manager

To import a template just select the exported XML file.

Import Templates in Virtual Machine Manager

You can change or set up the resources of the template; for example, you can select an already existing VHD from your library or an already existing Run As account.

Import Templates in Virtual Machine Manager resources

And you can set the location for the newly imported resources (VHDs, …).

Import Templates in Virtual Machine Manager resource location

I hope this shows you how easy it is to export and import a Service or VM Template in System Center Virtual Machine Manager. I especially like how SCVMM handles the additional resources, so you don't have to import the same VHD every time, and you can change Run As accounts very easily.
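By the way, the export and import can also be scripted with the VMM PowerShell module, which is handy if you move templates from the dev VMM to production regularly. A rough sketch, assuming the Export-SCTemplate, Get-SCTemplatePackage and Import-SCTemplate cmdlets (template name, path and password handling are examples):

    # On the dev VMM server: export a VM template including its physical resources
    $password = Read-Host -Prompt 'Package password' -AsSecureString
    $template = Get-SCVMTemplate -Name 'SP2013-WebFrontend'   # example template name
    Export-SCTemplate -Template $template -Path 'C:\Export' -Password $password

    # On the production VMM server: import the exported package again
    $package = Get-SCTemplatePackage -Path 'C:\Export'
    Import-SCTemplate -TemplatePackage $package -Password $password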




Cisco UCS Microsoft Solutions

Manage your Cisco UCS with Windows PowerShell

Cisco does a really great job supporting their blade center with different management software. For example, Cisco offers a System Center Virtual Machine Manager add-in to manage your Cisco fabric directly from the SCVMM console, a System Center Orchestrator Integration Pack for automation, and a System Center Operations Manager Management Pack for monitoring. Another great thing they offer is the PowerShell module for the Cisco UCS, called Cisco UCS PowerTools, which allows you to manage and automate your Cisco blade center via Windows PowerShell. The Cisco PowerShell module offers around 1,400 PowerShell cmdlets, which basically allow you to do every task from the console.

To connect to your Cisco UCS system you can use the Connect-Ucs cmdlet,

and you can use other cmdlets to manage your blades, VLANs or service profiles.
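A minimal sketch of a PowerTool session could look like this (the management IP address is an example, and the cmdlet names assume the standard PowerTool module):

    # Import the Cisco UCS PowerTool module
    Import-Module CiscoUcsPS

    # Connect to UCS Manager (example IP address, prompts for credentials)
    Connect-Ucs -Name '192.168.1.10' -Credential (Get-Credential)

    # A few examples of what you can query afterwards
    Get-UcsBlade            # list all blades
    Get-UcsVlan             # list configured VLANs
    Get-UcsServiceProfile   # list service profiles

    # Close the session again
    Disconnect-Ucs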


You can get the Cisco UCS PowerTools from the Cisco Website.



Install SCVMM HA Cluster

How to Install a Highly Available SCVMM Management Server

System Center Virtual Machine Manager is getting more and more important and is becoming a critical application for your environment, especially if you are using Hyper-V Network Virtualization and SCVMM is your centralized policy store. Therefore you should install Virtual Machine Manager highly available. To do this, Virtual Machine Manager uses the Failover Clustering feature integrated in Windows Server.

Before you begin, check these important notes:

  • Not only the SCVMM management server should be highly available; the SQL Server hosting the SCVMM database and the file server hosting the library share should be highly available as well.
  • You can have two or more SCVMM management servers in a cluster, but only one node will be active.
  • You will need to configure Distributed Key Management. You use Distributed Key Management to store encryption keys in Active Directory Domain Services (AD DS) instead of storing them on the computer on which the VMM management server is installed.

Here are the quick steps:

  • Install two servers for the SCVMM management servers (cluster nodes) with Windows Server 2012 or Windows Server 2012 R2.
  • Install all the SCVMM prerequisites (ADK and SQL Server Native Client).
  • Create an SCVMM service account which has local admin rights on the SCVMM nodes.
  • Create a container in Active Directory Domain Services for Distributed Key Management.
  • Set all IP addresses; you may also configure a dedicated heartbeat network.
  • Install the Failover Clustering feature on both servers.

After you have done these steps, you can create a Failover Cluster with both nodes.
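Most of these preparation steps can also be scripted. A rough sketch (server names, the IP address, the domain DN and the DKM container name are examples, and the Active Directory part needs sufficient permissions):

    # On both SCVMM nodes: install the Failover Clustering feature
    Install-WindowsFeature -Name Failover-Clustering -IncludeManagementTools

    # Create the container for Distributed Key Management in AD DS
    # (container name and domain DN are examples)
    New-ADObject -Name 'VMMDKM' -Type 'container' -Path 'DC=contoso,DC=com'

    # Create the failover cluster with both SCVMM nodes (names and IP are examples)
    New-Cluster -Name 'VMMCluster' -Node 'VMM01','VMM02' -StaticAddress '192.168.1.50'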

Create SCVMM Cluster

Now that you have built the SCVMM cluster, you have to install the SCVMM service on the first node. When you start the SCVMM installer, it will automatically detect the SCVMM cluster and ask you whether you want to install the SCVMM server as a highly available installation.

Install SCVMM HA Cluster

The installation is now more or less the same as for a standalone Virtual Machine Manager server, except that you have to use Distributed Key Management and there is one screen where you configure the SCVMM cluster role with a name and an IP address.

Install SCVMM HA Cluster Configuration

After you have installed the first node, you can run the setup on the second node. The setup also detects the cluster and the SCVMM cluster role, and will ask you about the configuration. Many of the settings cannot be changed because they are the same on all nodes.

Add Node to SCVMM Cluster

After you have installed both nodes you can see the SCVMM Cluster Role in the Failover Cluster Manager.

VMM Failover Cluster Role

And of course you can also see all your Virtual Machine Manager management servers in the Virtual Machine Manager console.

VMM Console

I hope this helps you install System Center 2012 SP1 or System Center 2012 R2 Virtual Machine Manager as a highly available installation. If you need more information, check out TechNet: How to Install a Highly Available VMM Management Server.



Cisco UCS Hardware

Automate your Cisco UCS with System Center Orchestrator

Some days ago I posted an article about how you can manage your Cisco UCS blade center directly from System Center Virtual Machine Manager. Cisco also offers an Integration Pack for System Center Orchestrator, which allows you to automate your Cisco UCS via Orchestrator Runbooks. This is great if you are building your own private cloud based on Cisco hardware.

As a first step, you have to download the Cisco UCS PowerTool (PowerShell module) and the Cisco UCS Microsoft System Center Orchestrator Integration Pack.

After you have installed the Cisco UCS PowerTool on your System Center Orchestrator Runbook servers, you can import the Integration Pack via the System Center Orchestrator Deployment Manager. With a right-click on Integration Packs you can register the Cisco UCS IP.

Cisco UCS Integration Pack Orchestrator Deployment Manager

After that you also have to deploy the IP to the Orchestrator Runbook servers.

Cisco UCS Integration Pack Orchestrator Deployment Manager Deploy

You can now start to create new Orchestrator Runbooks with the Runbook Designer. First open the SCO Runbook Designer, and in the Options menu select Cisco UCS to add the path to the Cisco UCS PowerTool module (PowerShell module). The default path where the Cisco UCS PowerTools are installed is: "C:\Program Files (x86)\Cisco\Cisco UCS PowerTool\Modules\CiscoUcsPS\CiscoUcsPS.psd1"

Cisco UCS Integration Pack Orchestrator PowerTool Path
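To verify that a Runbook server can actually load the module from this path, a quick check could look like this (using the default installation path from above):

    # Load the Cisco UCS PowerTool module from the default installation path
    Import-Module 'C:\Program Files (x86)\Cisco\Cisco UCS PowerTool\Modules\CiscoUcsPS\CiscoUcsPS.psd1'

    # List the Cisco UCS cmdlets to confirm the module loaded correctly
    Get-Command -Module CiscoUcsPS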

You can now start to automate your Cisco UCS with System Center Orchestrator.

If you are interested in how to monitor your Cisco UCS system with System Center Operations Manager, Stefan Roth blogged about that.



System Center Virtual Machine Manager Bare-Metal Deployment

System Center Virtual Machine Manager Mania at E2EVC

As you may know, back in May this year I was in Copenhagen speaking at the Experts 2 Experts Virtualization Conference, or E2EVC for short. Together with Michael Rüefli I talked in the SCVMM Mania session about the latest and greatest features in System Center Virtual Machine Manager, such as the new networking features like the Logical Switch, Bare-Metal Deployment of Hyper-V hosts, Services and Server App-V, and also the VMware integration inside SCVMM.

Since this week you can watch the session right on YouTube.

By the way, my next speaking event will be System Center Universe Europe in Bern, Switzerland. Together with other Microsoft MVPs and consultants I will do an advanced session on System Center 2012 R2 Virtual Machine Manager Networking and an overview session on Windows Server 2012 R2 Hyper-V. So if you want to see my sessions or the other great sessions about the Microsoft cloud offering, System Center, Windows Server and Windows Azure, make sure you register for the event on systemcenteruniverse.ch.