Tag: SCVMM


Scale Windows Server Storage Spaces

System Center Operations Manager Management Pack for Windows Server Storage Spaces

Microsoft just released the System Center Operations Manager Management Pack for Windows Server 2012 R2 Storage Spaces to the public. It allows you to monitor your Storage Spaces deployments with Operations Manager.

You can download the Management Pack for Storage Spaces from the Microsoft Download Site.

Monitoring Scenarios

This Management Pack contains rules to monitor physical disk and enclosure state in Storage Spaces.
Health is calculated by the storage service and is passed to Virtual Machine Manager (VMM) using the Storage Management API (SM-API), and is in turn passed to Operations Manager (OM) through the OM connector for VMM.
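As a quick sanity check outside of Operations Manager, you can read the same physical disk and enclosure health states directly on a file server node with the built-in Storage module cmdlets. This is just a sketch of a local spot check, not part of the management pack itself:

```powershell
# List non-primordial storage pools with their health state
Get-StoragePool -IsPrimordial $false |
    Select-Object FriendlyName, HealthStatus, OperationalStatus

# Physical disk state, as monitored by the management pack
Get-PhysicalDisk |
    Select-Object FriendlyName, HealthStatus, OperationalStatus, Usage

# Enclosure state; full enclosure awareness needs the KB2913766 hotfix
Get-StorageEnclosure |
    Select-Object FriendlyName, HealthStatus
```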

Supported Configurations

This management pack requires System Center Operations Manager 2012 SP1 or later. A dedicated Operations Manager management group is not required.

The following table details the supported configurations for the Management Pack for Storage Spaces:

  • Virtual Machine Manager: 2012 R2 with Update Rollup 4 or later installed
  • Windows Server File Servers: 2012 R2 with KB 3000850 (November 2014 update rollup) or later
  • Clustered servers: Yes

Management Pack Scope

This management pack supports up to:

  • 16 Storage Nodes
  • 12 Storage Pools
  • 120 File Shares

Prerequisites

The following requirements must be met to run this management pack:

  • Operations Manager Connector for Virtual Machine Manager installed and configured.
    https://technet.microsoft.com/en-us/library/hh427287.aspx
  • Configuring this connection will install the required VMM Management Packs.
  • Storage Spaces managed by Virtual Machine Manager
  • KB2913766 “Hotfix improves storage enclosure management for Storage Spaces” must be installed on the VMM server and file server nodes


VMM 2012 R2 Update Rollup 6 Azure IaaS Management

Generation 2 Virtual Machine in Service Templates and Managing Azure IaaS VMs in VMM with UR6

Microsoft just announced System Center 2012 R2 Virtual Machine Manager Update Rollup 6 with some highly requested features. Two of them are support for VMM Service Templates with Generation 2 Virtual Machines and managing Microsoft Azure IaaS Virtual Machines directly from the Virtual Machine Manager Console.

If you want to know more, check out this video:




Update Rollup 4 for System Center 2012 R2 and Azure Pack now available

Microsoft today released Update Rollup 4 for System Center 2012 R2. Update Rollup 4 (UR4) fixes several issues and adds new features in the System Center components as well as in Windows Azure Pack.

Components that are fixed in this update rollup

Virtual Machine Manager in particular includes a lot of fixes, and DPM brings, besides fixes, some interesting new features such as support for SQL Server 2014. In Azure Pack, Microsoft added the possibility to use Azure Site Recovery in plans for datacenter failovers via Hyper-V Replica.

 




SCVMM 2012 R2 Error 23317 When You Try to Apply Changes on VM That is Using Shared VHDX Disk

A customer of mine had an issue when he tried to change properties of virtual machines in System Center Virtual Machine Manager 2012 R2 that use a shared VHDX which was not created with VMM. The properties he wanted to change had nothing to do with the shared VHDX itself: he tried to set the availability set for these virtual machines.

The Error in SCVMM is the following:

Error (23317)
The operation Change properties of virtual machine is not permitted on a virtual machine that has shared virtual hard disks.

Recommended Action
The operation Change properties of virtual machine is not permitted on a virtual machine that has shared virtual hard disks.

Stanislav Zhelyazkov (Microsoft MVP) blogged about this in October 2013. The solution is pretty simple: PowerShell. Make the modification in the console but do not apply it; instead, use the script view in Virtual Machine Manager to get the PowerShell code that would run behind the scenes.

For example:

Remove everything you don’t need and run the script:
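A trimmed-down version of the script-view output looks roughly like this. The VM name and availability set name are placeholders, and the exact parameter set the console generates depends on your VMM build, so treat this as a sketch:

```powershell
# Get the VM whose availability set should be changed (name is a placeholder)
$vm = Get-SCVirtualMachine -Name "VM01"

# Keep only the parameter you actually want to change -- the availability
# set -- and strip everything else the script view generated
Set-SCVirtualMachine -VM $vm -AvailabilitySetNames @("MyAvailabilitySet")
```

Because this runs the change directly through PowerShell, it bypasses the console-side check that blocks property changes on VMs with shared virtual hard disks.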




Add drivers to SCVMM Bare-Metal WinPE Image

A long time ago I wrote a blog post on how you can use System Center Virtual Machine Manager Bare-Metal Deployment to deploy new Hyper-V hosts. Normally this works fine, but if you have newer hardware, your Windows Server image may not include the network adapter drivers. This isn’t a huge problem, since you can mount the VHD or VHDX file for the Windows Server Hyper-V image and insert the drivers. But if you forget to update the WinPE image from Virtual Machine Manager, your deployment will fail: since the WinPE image has no network drivers included, it won’t be able to connect to the VMM Library or any other server.

You will end up with the following error, and your deployment will time out on the following screen:

“Synchronizing Time with Server”


If you check the IP configuration with ipconfig, you will see that no network adapters are available. This means you have to update your SCVMM WinPE image.

First of all, you have to copy the SCVMM WinPE image. You can find this WIM file on your WDS (Windows Deployment Services) PXE server in the following location: E:\RemoteInstall\DCMgr\Boot\Windows\Images (your setup probably uses another drive letter).


I copied this file to the C:\temp folder on my System Center Virtual Machine Manager server. I also copied the extracted drivers to the C:\Drivers folder.

After you have done this, you can use Greg Casanza’s (Microsoft) SCVMM Windows PE driver injection script, which will add the drivers to the WinPE image (Boot.wim) and publish the new Boot.wim to all your WDS servers. I also rewrote the script I got from him, changing it from using drivers in the VMM Library to using drivers from a folder.
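The core of the driver injection can be sketched with the DISM PowerShell cmdlets plus the VMM cmdlet that republishes the WinPE image. The paths below are the ones from my lab (C:\temp and C:\Drivers); this is a sketch of the idea, not Greg’s original script:

```powershell
# Create a mount directory for the WinPE image
$mount = "C:\Mount"
New-Item -Path $mount -ItemType Directory -Force | Out-Null

# Mount the copied Boot.wim, inject all drivers from the folder, then save
Mount-WindowsImage -ImagePath "C:\temp\Boot.wim" -Index 1 -Path $mount
Add-WindowsDriver -Path $mount -Driver "C:\Drivers" -Recurse
Dismount-WindowsImage -Path $mount -Save

# Publish the updated WinPE image to the WDS servers managed by VMM
Publish-SCWindowsPE -Path "C:\temp\Boot.wim"
```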


This will add the drivers to the Boot.wim file and publish it to the WDS servers.


After this is done the Boot.wim will work with your new drivers.



Savision Whitepaper

Whitepaper and Webinar “VMM (fabric) Management and Resource Pooling”

One of the most challenging things during the shift to cloud computing is managing fabric resources efficiently. Together with Savision, I have worked on a whitepaper that outlines how fabric resources like compute, storage, and network can be managed efficiently, and how System Center Virtual Machine Manager provides a solution to build a datacenter abstraction layer.

The whitepaper is focused on building a datacenter abstraction layer of your fabric resources, self-service and service deployment. If you would like to know more about it, join our webinar.

Register for EU webinar

Register here for the EU webinar “VMM (fabric) Management and Resource Pooling” by MVP Thomas Maurer – 7th May, 3.00 pm CEST (English)

Register for US webinar

Register here for the US webinar “VMM (fabric) Management and Resource Pooling” by MVP Thomas Maurer – 8th May, 2.00 pm EDT (English)

We’re looking forward to seeing you there!

By the way, you can download the whitepaper right here:

Download the Whitepaper

Download the Free Whitepaper on “VMM (fabric) Management and Resource Pooling” by MVP Thomas Maurer

If you have questions join the webinar or feel free to comment.




Hyper-V Network Virtualization NVGRE: No connection between VMs on different Hyper-V Hosts

I have worked on some projects with Hyper-V Network Virtualization and NVGRE, and today I saw an issue with Encapsulated Task Offloading on some HP Broadcom network adapters.

 

Issue

I have Hyper-V hosts running with 10GbE Broadcom network adapters (HP Ethernet 10Gb 2-port 530FLR-SFP+ Adapter) with driver version 7.8.52.0 (released in 2014). I have created a new VM network based on Hyper-V Network Virtualization using NVGRE. VM1 is running on Host1 and VM2 is running on Host2. You can ping VM2 from VM1, but no other connection is possible, such as SMB, RDP, HTTP, or DNS. If you are using an NVGRE gateway, you cannot even resolve DNS inside those VMs. If VM1 and VM2 are running on the same Hyper-V host, everything between those VMs works fine.

Advanced Driver Settings

If you are using Server Core, which you should be, by the way, you can use the following command to check those settings:

PowerShell NetAdapter Advanced Property
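For example, to list the advanced driver settings and spot the offload feature, something along these lines works (the display name of the setting varies by driver, so the filter is deliberately loose):

```powershell
# List advanced driver settings on all adapters and filter for the
# Encapsulated Task Offloading entry exposed by the Broadcom driver
Get-NetAdapterAdvancedProperty -Name * |
    Where-Object DisplayName -like "*Encapsulated*" |
    Format-Table Name, DisplayName, DisplayValue
```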

 

Resolution

The Broadcom network adapters have a feature called Encapsulated Task Offloading, which is enabled by default. If you disable Encapsulated Task Offloading, everything works fine. You can disable it by using the following PowerShell cmdlet.
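The NetAdapter module has a dedicated cmdlet for this; the adapter names below are placeholders for the NVGRE-facing ports in your hosts:

```powershell
# Disable Encapsulated Task Offloading on the affected adapters
Disable-NetAdapterEncapsulatedPacketTaskOffload -Name "Ethernet 1","Ethernet 2"

# Verify that the feature is now disabled
Get-NetAdapterEncapsulatedPacketTaskOffload -Name "Ethernet 1","Ethernet 2"
```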

After that, connections inside the VMs started to work immediately; no reboot was needed.