

Tag: Windows Server


Add drivers to SCVMM Bare-Metal WinPE Image

A long time ago I wrote a blog post on how you can use System Center Virtual Machine Manager Bare-Metal Deployment to deploy new Hyper-V hosts. Normally this works fine, but if you have newer hardware, your Windows Server image may not include the network adapter drivers. This isn’t a huge problem, since you can mount the VHD or VHDX file of the Windows Server Hyper-V image and insert the drivers. But if you forget to update the WinPE file from Virtual Machine Manager, your deployment will fail: since the WinPE image has no network drivers included, it won’t be able to connect to the VMM Library or any other server.

You will end up with the following error, and your deployment will time out on the following screen:

“Synchronizing Time with Server”

SCVMM Bare-Metal Fails

If you check the IP configuration with ipconfig you will see that there are no network adapters available. This means you have to update your SCVMM WinPE image.

First of all you have to copy the SCVMM WinPE image. You can find this WIM file on your WDS (Windows Deployment Services) PXE server in the following location: E:\RemoteInstall\DCMgr\Boot\Windows\Images (your setup probably uses another drive letter).

WDS SCVMM Boot WIM

I copied this file to the C:\temp folder on my System Center Virtual Machine Manager server. I also copied the extracted drivers to the C:\Drivers folder.

After you have done this, you can use Greg Casanza’s (Microsoft) SCVMM Windows PE driver injection script, which will add the drivers to the WinPE image (Boot.wim) and publish this new boot.wim to all your WDS servers. I also changed the script from using drivers in the VMM Library to using drivers from a folder.

Update SCVMM WinPE

This will add the drivers to the Boot.wim file and publish it to the WDS servers.
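If you want to see what the script does under the hood, the same steps can be sketched with DISM and the VMM PowerShell module. This is a simplified sketch, not the actual script: the folder paths match the example above, the image index is assumed to be 1, and the last command must run in a VMM PowerShell session.

```powershell
# Mount the copied WinPE image (assuming image index 1)
dism /Mount-Wim /WimFile:C:\temp\boot.wim /Index:1 /MountDir:C:\mount

# Inject all drivers found in C:\Drivers, including subfolders
dism /Image:C:\mount /Add-Driver /Driver:C:\Drivers /Recurse

# Commit the changes and unmount the image
dism /Unmount-Wim /MountDir:C:\mount /Commit

# Publish the updated boot.wim to the WDS PXE servers managed by VMM
# (run from the VMM PowerShell console)
Publish-SCWindowsPE -Path C:\temp\boot.wim
```

Note that the mount directory (C:\mount) must exist and be empty before you mount the image.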

Update WDS Server

After this is done the Boot.wim will work with your new drivers.




RemoteFX

GPU Requirements for RemoteFX on Windows Server 2012 R2

If you are planning a VDI (Virtual Desktop Infrastructure) deployment with Windows Server 2012 R2 Hyper-V and you want to use physical graphics power with RemoteFX for your VDI machines, for example for CAD applications, you might wonder which cards are recommended and supported. Back in November 2013 Derrick Isoka (Microsoft Program Manager) wrote a blog post about recommendations, and here is a quick summary.

RemoteFX GPU Requirements

To make use of RemoteFX with GPU acceleration on Windows Server 2012 R2, you need a compatible graphics card.

Most likely, the servers hosting the RemoteFX workloads will be located in a datacenter and as such, we recommend using passively cooled, server class graphics cards. However, it’s also acceptable to use a workstation card for testing on small deployments depending on your needs.

The minimum requirements for graphics cards used with Hyper-V RemoteFX are:

  • DirectX 11.0 or later
  • WDDM 1.2 driver or later

DirectX and WDDM

There is another point to consider: Windows Server 2012 R2 provides support for DirectX 11.0, DirectCompute and C++ AMP. Most graphics cards also support OpenGL 4.0 and OpenCL 1.1 or later; however, these APIs are currently not supported by RemoteFX in Windows Server 2012 R2.

Hardware and Driver Support

When looking for a graphics card, also make sure you check the Windows Server Catalog.
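Once a card is installed in the host, you can check whether Hyper-V recognizes it as RemoteFX-capable and attach a RemoteFX 3D adapter to a virtual machine with the Hyper-V PowerShell module. A minimal sketch; the adapter name filter and the VM name "VDI-01" are placeholders for your environment:

```powershell
# List the physical GPUs Hyper-V can use for RemoteFX on this host
Get-VMRemoteFXPhysicalVideoAdapter

# Enable a specific GPU for RemoteFX (the name filter is an example)
Get-VMRemoteFXPhysicalVideoAdapter -Name "*NVIDIA*" |
    Enable-VMRemoteFXPhysicalVideoAdapter

# Add a RemoteFX 3D video adapter to a VDI virtual machine
Add-VMRemoteFx3dVideoAdapter -VMName "VDI-01"
```

The VM must be powered off before you add the RemoteFX 3D video adapter.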

RemoteFX Compatible GPUs

Microsoft did some tests and showed some of the results on the Remote Desktop Services blog.

RemoteFX Cards

  1. Best: These are server class cards, designed and certified for VDI workloads by hardware vendors like NVIDIA and AMD. They target the best application performance, experience, and virtual machine densities. Some of the cards are particularly recommended for designer and engineering workloads (such as Autodesk Inventor or AutoCad).
  2. Better: These are workstation class cards that provide acceptable performance and densities. They are especially capable cards for knowledge worker workloads (such as Microsoft Office or Internet Explorer).
  3. Good: These are lower-end cards that provide acceptable densities for knowledge worker workloads.

Source: Microsoft

Performance and Scale

This is important: Microsoft also points out that besides GPU speed and memory, the performance and scale of your VDI deployment depend on additional factors such as CPU, storage and network performance.

 



Microsoft System Center Logo

Technical Documentation for Getting Started with System Center 2012 R2

Marcel van den Berg just posted a blog post about the availability of the Technical Documentation for Getting Started with System Center 2012 R2, which was just released by Microsoft. The documentation covers the Support Matrix and Upgrade Sequence for System Center 2012 R2.

System Center 2012 R2 Requirements

You can download the documents on the Microsoft Download website.



Cisco Microsoft

Cisco and Microsoft Announce Sales and Go-to-Market Agreement

At the Worldwide Partner Conference 2014, Cisco and Microsoft announced a multi-year sales and go-to-market agreement designed to modernize data centers through the delivery and acceleration of integrated solutions. This will focus on bringing a deeper integration between the datacenter technologies of both companies. This includes Cisco UCS and Nexus products as well as Microsoft’s CloudOS solutions based on Windows Server, Hyper-V, System Center, SQL Server and Microsoft Azure.

Highlights:

Go-to-Market:

  • Cisco and Microsoft agree to a three-year go-to-market plan focused on transforming data centers through the delivery of integrated solutions for enterprise customers and service providers.
  • In year one, the companies will focus on six countries — the United States, Canada, UK, Germany, France, and Australia — with expansion to additional countries in the following years.
  • Cisco and Microsoft will align partner incentive programs to accelerate solutions selling via mutual channel partners.
  • Cisco and Microsoft sales teams will work together on cloud and data center opportunities, including an initial program focused on the migration of Windows Server 2003 customers to Windows Server 2012 R2 on the Cisco UCS platform.

Integrated Solutions:

  • Integrated solutions will focus on private cloud, server migration, service provider, and SQL Server 2014.
  • Cisco technologies to include Cisco UCS, Cisco Nexus switching, Cisco UCS Manager with System Center integration modules, and Cisco PowerTool.
  • Cisco-based integrated infrastructure solutions will include FlexPod with NetApp and Cisco Solutions for EMC VSPEX.
  • Microsoft technology includes Windows Server 2012 R2, System Center 2012 R2, PowerShell, Microsoft Azure and SQL Server 2014.
  • Cisco Application Centric Infrastructure and Cisco InterCloud Fabric are to be integrated in the solutions in future releases.

Source and More information: www.streetinsider.com

As you may know, I am a Microsoft MVP and a Cisco Champion, and I really like doing projects with Cisco hardware, since they do a lot of integration with the Microsoft stack, especially System Center and PowerShell. In my opinion this could be a strong partnership and will make life a lot easier for a lot of people.



StorSimple

Microsoft Announces Azure StorSimple Hybrid Storage Solutions For The Enterprise

Today Microsoft announced that, starting August 1, they will deliver the new StorSimple 8000 series hybrid storage arrays. The StorSimple 8000 series are the most powerful StorSimple systems ever and have even tighter integration with Azure, including two new Azure-based capabilities to enable new use cases and centralize data management. These new solutions demonstrate how Microsoft is bringing the best of on-premises storage together with the cloud in order to deliver bottom-line savings to customers, cutting storage costs by 40 to 60% and helping IT teams focus more on business strategies than on infrastructure management.

The new StorSimple 8000 series arrays come in two flavors to meet a variety of capacity and performance needs:  the StorSimple 8100 and the StorSimple 8600, which you can read about here.  These are enterprise hybrid storage arrays with a twist – instead of being limited to only SSDs and HDDs, these arrays use Azure Storage as a hybrid cloud tier for automatic capacity expansion and off-site data protection. That means IT teams don’t have to spend so much time and effort working on the next inevitable storage capacity upgrade or managing the complex details of data protection. Data stored on StorSimple 8000 series arrays is automatically protected off-site by cloud snapshots, which fill the enormous gap between problematic tape solutions and costly remote replication solutions.

To go with the new arrays, there is the Microsoft Azure StorSimple Virtual Appliance, which is an implementation of StorSimple technology running as an Azure virtual machine in the cloud. With a matching Azure StorSimple virtual machine, StorSimple 8000 series customers can run applications in Azure that access snapshot virtual volumes in the cloud. Customers will be able to run new applications that search and analyze historical datasets without disrupting production work in their datacenter. This new StorSimple Virtual Appliance not only works for data from Windows Server and Hyper-V, but on-premises Linux and VMware servers, as well, providing hybrid cloud capabilities for the most common server platforms today.

The Virtual Appliance also enables disaster recovery (DR) in the cloud. Virtualized applications that store their data on an Azure StorSimple array in a customer’s datacenter can be restarted in VMs in Azure with access to previously uploaded data. Updates to data made during recovery operations can be downloaded later to StorSimple arrays on-premises when normal operations resume.

DR is an area of concern for many customers, and they seldom get a chance to test their recovery capabilities. Microsoft Azure StorSimple 8000 Series arrays and Virtual Appliances have a feature called Instant Recovery, which presents synthetic, full images of virtual volumes in Azure to applications and end users so they can start accessing data as soon as possible after a disaster. Instant Recovery accelerates restores and DR testing by only downloading data that is needed and bypassing data that isn’t needed.

Another groundbreaking capability in this release is the Microsoft Azure StorSimple Manager, which consolidates management for all of a customer’s Azure StorSimple 8000 series arrays and Virtual Appliances. Administrators use the Manager to centrally control all aspects of StorSimple storage and data management from the cloud, so they can ensure consistent operations and data protection/retention policies across the enterprise. The new StorSimple Manager also gives administrators a dashboard with up-to-the-minute status and reports so they can quickly spot storage troubles and trends and allows the IT team to spend less time on storage infrastructure management and shift resources to business applications.

StorSimple customers have been seeing the financial and IT efficiency benefits of hybrid cloud storage for years.  Now, the Microsoft Azure StorSimple solution brings new innovations to enable even greater operational efficiency, and is a great example of technology developed with a hybrid cloud design point and critical customer needs in mind.

 

If you want to know more about StorSimple, check out my blog post about StorSimple Cloud as a Tier and the Microsoft blog post from Takeshi Numoto about the new StorSimple 8000 series.

 



Windows Azure Pack Feedback

Feedback for Windows Azure Pack

Since the release of Windows Server 2012 R2 and System Center 2012 R2, I have worked on several different Windows Azure Pack deployments for Service Providers. Windows Azure Pack delivers Microsoft Azure technologies for you to run inside your datacenter. It offers rich, self-service, multi-tenant services and experiences that are consistent with Microsoft’s public cloud offering. Together with technologies like Hyper-V Network Virtualization and Microsoft Storage Spaces, Windows Azure Pack becomes a powerful framework for Service Providers.

You can help shape the future of Windows Azure Pack. The Windows Azure Pack team has created a user voice site where you can post feature suggestions and vote on the suggestions of others.

You can find the Azure Pack user voice site here: http://feedback.azure.com/forums/255259-azure-pack



InovatiX

Back from inovatiX Amsterdam 2014

A little over 24 hours ago, the itnetx team arrived back at Zurich airport. At the end of last week some of you may have seen a lot of tweets around Microsoft System Center with the hashtag #inovatiX. The name inovatiX comes from the company names of inovativ and itnetx. Both companies focus on Microsoft cloud solutions based on System Center, Windows Server, Hyper-V and Microsoft Azure. So what is behind the inovatiX event? InovatiX was the first run of the know-how sharing event between inovativ.nl, inovativ.be and itnetx.ch. In different focus groups around topics like Windows Azure Pack, Hyper-V, Configuration Manager, Windows Intune, Operations Manager, VMM or Microsoft Azure, the cloud experts of those companies shared knowledge and experience from real-world deployments.

InovatiX

For me personally I had some great talks about Windows Azure Pack, Hyper-V, VMM, Storage Spaces, Scale-Out File Servers, Network Virtualization and a lot more. And it was fun to finally meet the guys from inovativ in person.

InovatiX

This event was a perfect example of how different companies can collaborate with each other to evolve and make the quality even better. Thanks to the management of inovativ and itnetx for organizing this.