Tag: Network Adapter


Basic Networking PowerShell cmdlets cheatsheet to replace netsh, ipconfig, nslookup and more

Around four years ago I wrote a blog post about how to Replace netsh with Windows PowerShell, which covers basic PowerShell networking cmdlets. After working with Microsoft Azure, Nano Server and Containers, PowerShell together with networking has become more and more important. I created this little cheat sheet to make it easy for people to get started.

Basic Networking PowerShell cmdlets

Get-NetIPConfiguration

Get the IP Configuration (ipconfig with PowerShell)
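For example:

  # PowerShell equivalent of ipconfig; add -Detailed for something closer to ipconfig /all
  Get-NetIPConfiguration
  Get-NetIPConfiguration -Detailed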

List all Network Adapters
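For example:

  # All network adapters; add -Physical to list only physical NICs
  Get-NetAdapter
  Get-NetAdapter -Physical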

Get a specific network adapter by name
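For example (the adapter name "Ethernet 1" is just a placeholder):

  Get-NetAdapter -Name "Ethernet 1"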

Get more information, such as VLAN ID, speed and connection status
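For example:

  # Pick the columns you are interested in
  Get-NetAdapter | Format-Table Name, Status, LinkSpeed, VlanID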

Get driver information
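One way to do this:

  # Format-List with a wildcard shows all driver-related properties
  Get-NetAdapter | Format-List Name, Driver*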

Get adapter hardware information. This can be really useful when you need to know the PCI slot of the NIC.
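For example:

  # Shows PCI segment, bus, device, function and slot per adapter
  Get-NetAdapterHardwareInfo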

Disable and Enable a Network Adapter
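For example (the adapter name is a placeholder):

  Disable-NetAdapter -Name "Ethernet 1" -Confirm:$false
  Enable-NetAdapter -Name "Ethernet 1"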

Rename a Network Adapter
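For example (both names are placeholders):

  Rename-NetAdapter -Name "Ethernet 1" -NewName "Management"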

IP Configuration using PowerShell


Get IP and DNS address information
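One way to get both at once:

  # IP addresses on all interfaces plus the configured DNS client servers
  Get-NetIPAddress
  Get-DnsClientServerAddress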

Get IP address only
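For example:

  # Just the addresses themselves
  (Get-NetIPAddress).IPAddress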

Get DNS Server Address information
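For example, limited to IPv4:

  Get-DnsClientServerAddress -AddressFamily IPv4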

Set IP Address
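For example (interface alias, IP address, prefix length and gateway are placeholders):

  New-NetIPAddress -InterfaceAlias "Ethernet 1" -IPAddress 192.168.1.10 -PrefixLength 24 -DefaultGateway 192.168.1.1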

Or, if you want to change an existing IP address:
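For example (values are placeholders):

  # Changes the prefix length of an already configured address
  Set-NetIPAddress -InterfaceAlias "Ethernet 1" -IPAddress 192.168.1.10 -PrefixLength 24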

Remove IP Address
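For example (values are placeholders):

  Remove-NetIPAddress -InterfaceAlias "Ethernet 1" -IPAddress 192.168.1.10 -Confirm:$false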

Set DNS Server
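For example (the DNS server addresses are placeholders):

  Set-DnsClientServerAddress -InterfaceAlias "Ethernet 1" -ServerAddresses ("8.8.8.8","8.8.4.4")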

Set interface to DHCP
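A sketch, assuming you also want to clear statically configured DNS servers:

  Set-NetIPInterface -InterfaceAlias "Ethernet 1" -Dhcp Enabled
  Set-DnsClientServerAddress -InterfaceAlias "Ethernet 1" -ResetServerAddresses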

Clear DNS Cache with PowerShell

You can also manage your DNS cache with PowerShell.

List DNS Cache:
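  # Equivalent to ipconfig /displaydns
  Get-DnsClientCache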

Clear DNS Cache
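For example:

  # Equivalent to ipconfig /flushdns
  Clear-DnsClientCache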

Ping with PowerShell


How to Ping with PowerShell. For a simple ping command with PowerShell, you can use the Test-Connection cmdlet:
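  # "server01" is a placeholder host name
  Test-Connection -ComputerName server01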

There is also a more advanced way to test a connection using PowerShell, with the Test-NetConnection cmdlet:
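  # "server01" is a placeholder host name
  Test-NetConnection -ComputerName server01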

Get some more details from Test-NetConnection:
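  Test-NetConnection -ComputerName server01 -InformationLevel Detailed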

Ping multiple IP addresses using PowerShell:
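A small sketch that pings the first ten addresses of a placeholder subnet and reports which hosts answer:

  1..10 | ForEach-Object {
      [PSCustomObject]@{
          IPAddress = "192.168.1.$_"
          Online    = Test-Connection -ComputerName "192.168.1.$_" -Count 1 -Quiet
      }
  }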

Tracert


Tracert with PowerShell
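For example (the target host is a placeholder):

  Test-NetConnection -ComputerName www.microsoft.com -TraceRoute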

Portscan with PowerShell


Use PowerShell to check for an open port:
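For example, checking a placeholder host for TCP port 3389 (RDP):

  Test-NetConnection -ComputerName server01 -Port 3389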

NSlookup in PowerShell


NSlookup using PowerShell:
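Resolve-DnsName is the PowerShell counterpart to nslookup (host name and DNS server here are placeholders):

  Resolve-DnsName www.microsoft.com
  Resolve-DnsName www.microsoft.com -Server 8.8.8.8 -Type A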

Route in PowerShell


How to replace Route command with PowerShell
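A sketch with placeholder prefix, interface and gateway:

  # Show the routing table (like route print)
  Get-NetRoute
  # Add and remove a static route
  New-NetRoute -DestinationPrefix "10.10.0.0/16" -InterfaceAlias "Ethernet 1" -NextHop 192.168.1.1
  Remove-NetRoute -DestinationPrefix "10.10.0.0/16" -Confirm:$false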

NETSTAT in PowerShell


How to replace NETSTAT with PowerShell
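For example:

  # Active TCP connections (like netstat -an)
  Get-NetTCPConnection
  # Only established connections
  Get-NetTCPConnection -State Established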

NIC Teaming PowerShell commands

Create a new NIC Team (Network Adapter Team)
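A sketch; the team name, member NIC names, teaming mode and load balancing algorithm are placeholders you will want to adjust:

  New-NetLbfoTeam -Name "Team1" -TeamMembers "NIC1","NIC2" -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic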

SMB Related PowerShell commands


Get SMB Client Configuration
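For example:

  Get-SmbClientConfiguration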

Get SMB Connections
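For example:

  Get-SmbConnection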

Get SMB Multichannel Connections
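For example:

  Get-SmbMultichannelConnection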

Get SMB open files
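For example (run on the file server):

  Get-SmbOpenFile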

Get SMB Direct (RDMA) adapters
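One way to check this per network adapter:

  Get-NetAdapterRdma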

Hyper-V Networking cmdlets


Get and set Network Adapter VMQ settings
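For example (the adapter name is a placeholder):

  # View VMQ settings for all adapters
  Get-NetAdapterVmq
  # Enable VMQ on a specific adapter
  Set-NetAdapterVmq -Name "Ethernet 1" -Enabled $true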

Get VM Network Adapter
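For example ("VM01" is a placeholder VM name; requires the Hyper-V module):

  Get-VMNetworkAdapter -VMName "VM01"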

Get VM Network Adapter IP Addresses
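For example:

  (Get-VMNetworkAdapter -VMName "VM01").IPAddresses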

Get VM Network Adapter Mac Addresses
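For example:

  Get-VMNetworkAdapter -VMName "VM01" | Select-Object VMName, Name, MacAddress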

I hope you enjoyed this post and found it helpful. If you think something important is missing, please add it in the comments.




Cisco UCS and Hyper-V Enable Stateless Offloads with NVGRE

As I already mentioned I did several Hyper-V and Microsoft Windows Server projects with Cisco UCS. With Cisco UCS you can now configure stateless offloads for NVGRE traffic which is needed for Hyper-V Network Virtualization.

Cisco UCS Manager supports stateless offloads with NVGRE only with Cisco UCS VIC 1340 and/or Cisco UCS VIC 1380 adapters that are installed on servers running Windows Server 2012 R2 operating systems.

To use this, you have to create an Ethernet Adapter Policy and configure it to enable stateless offloads with NVGRE. In the Resources area, set the following:

  • Transmit Queues = 1
  • Receive Queues = n (up to 8)
  • Completion Queues = # of Transmit Queues + # of Receive Queues
  • Interrupts = # Completion Queues + 2

And in the Options area, set the following:

  • Network Virtualization using Generic Routing Encapsulation = Enabled
  • Interrupt Mode = Msi-X

Also make sure you have installed eNIC driver version 3.0.0.8 or later.

For more information, see http://www.cisco.com/c/en/us/td/docs/unified_computing/ucs/sw/vic_drivers/install/Windows/b_Cisco_VIC_Drivers_for_Windows_Installation_Guide.html.




Sort Windows Network Adapter by PCI Slot via PowerShell

If you work with Windows, Windows Server or Hyper-V, you know that before Windows Server 2012, Windows named the network adapters randomly. This was a huge deal if you were trying to automate the deployment of servers with multiple network adapters, and of course Hyper-V servers normally have multiple network adapters. In Windows Server 2012 Microsoft addressed this in a couple of ways: first with CDN (Consistent Device Naming), which allows hardware vendors to provide consistent names that the OS can pick up, and second with Hyper-V Converged Fabric, which basically makes our lives easier by requiring fewer network adapters.

Well, a lot of vendors have not implemented CDN, or you may have some older servers without CDN support. Back in May 2012, before the release of Windows Server 2012, I wrote a little Windows PowerShell script to sort network adapters in Windows Server 2008 R2 and Hyper-V Server 2008 R2 by using WMI (Configure Hyper-V Host Network Adapters Like A Boss). Now, for a Cisco UCS project, I rewrote parts of the script to use Windows PowerShell for Windows Server 2012, Windows Server 2012 R2 and Hyper-V.

First, let's have a look at how you can get the PCI slot information for network adapters; luckily, there is now a PowerShell cmdlet for this.
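For example:

  # PCI segment, bus, device, function and slot per network adapter
  Get-NetAdapterHardwareInfo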

Now let's see how you can sort network adapters via Windows PowerShell.
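A minimal sketch, assuming the Bus and Function properties shown by Get-NetAdapterHardwareInfo:

  Get-NetAdapterHardwareInfo | Sort-Object -Property Bus, Function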

This will give you a list of the network adapters sorted by PCI bus and function number.

Let's do a little loop to automatically rename them:
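A sketch of such a loop; the NIC prefix and the Bus/Function sort are assumptions you may want to adjust:

  $i = 0
  foreach ($adapter in (Get-NetAdapterHardwareInfo | Sort-Object -Property Bus, Function)) {
      $i++
      # Rename the adapter based on its position in the PCI bus/function order
      Rename-NetAdapter -Name $adapter.Name -NewName "NIC$i"
  }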

So this renames all the network adapters to NIC1, NIC2, NIC3, and so on.

So let's wrap this in a PowerShell function:
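A sketch of such a function (the parameter names here are illustrative and may differ from the published script):

  function Sort-NetworkAdapter {
      param(
          [string]$Prefix = "NIC",
          [switch]$Descending
      )
      # Sort adapters by PCI bus and function number
      $adapters = Get-NetAdapterHardwareInfo | Sort-Object -Property Bus, Function -Descending:$Descending
      $i = 0
      foreach ($adapter in $adapters) {
          $i++
          Rename-NetAdapter -Name $adapter.Name -NewName "$Prefix$i"
      }
  }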

Now you can run this by using Sort-NetworkAdapter, for example:
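  # Renames the adapters to NIC1, NIC2, ... in PCI order
  Sort-NetworkAdapter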

or
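  # Using the illustrative -Prefix parameter from the sketch above
  Sort-NetworkAdapter -Prefix "vmNIC"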

You can also get this script from the Microsoft Technet Gallery or Script Center.




Hyper-V Network Virtualization: NVGRE Offloading

At the moment I am spending a lot of time working with Hyper-V Network Virtualization in Hyper-V, System Center Virtual Machine Manager and the new Network Virtualization Gateway. I am also creating some architecture design references for hosting providers which are going to use Hyper-V Network Virtualization and SMB as storage. If you are going for any kind of network virtualization (Hyper-V Network Virtualization or VXLAN), you want to make sure you can offload NVGRE traffic to the network adapter.

Well, the great news here is that the Mellanox ConnectX-3 Pro not only offers RDMA (RoCE), which is used for SMB Direct, but also hardware offloads for NVGRE- and VXLAN-encapsulated traffic. This is great and should improve the performance of network virtualization dramatically.


More information on the Mellanox ConnectX-3 Pro:

ConnectX-3 Pro 10/40/56GbE adapter cards with hardware offload engines to Overlay Networks (“Tunneling”), provide the highest performing and most flexible interconnect solution for PCI Express Gen3 servers used in public and private clouds, enterprise data centers, and high performance computing.

Public and private cloud clustered databases, parallel processing, transactional services, and high-performance embedded I/O applications will achieve significant performance improvements resulting in reduced completion time and lower cost per operation.


Benefits:

  • 10/40/56Gb/s connectivity for servers and storage
  • World-class cluster, network, and storage performance
  • Cutting edge performance in virtualized overlay networks (VXLAN and NVGRE)
  • Guaranteed bandwidth and low-latency services
  • I/O consolidation
  • Virtualization acceleration
  • Power efficient
  • Scales to tens-of-thousands of nodes

Key features:

  • 1us MPI ping latency
  • Up to 40/56GbE per port
  • Single- and Dual-Port options available
  • PCI Express 3.0 (up to 8GT/s)
  • CPU offload of transport operations
  • Application offload
  • Precision Clock Synchronization
  • HW Offloads for NVGRE and VXLAN encapsulated traffic
  • End-to-end QoS and congestion control
  • Hardware-based I/O virtualization
  • RoHS-R6

Virtualized Overlay Networks — Infrastructure as a Service (IaaS) cloud demands that data centers host and serve multiple tenants, each with their own isolated network domain over a shared network infrastructure. To achieve maximum efficiency, data center operators are creating overlay networks that carry traffic from individual Virtual Machines (VMs) in encapsulated formats such as NVGRE and VXLAN over a logical “tunnel,” thereby decoupling the workload’s location from its network address. Overlay Network architecture introduces an additional layer of packet processing at the hypervisor level, adding and removing protocol headers for the encapsulated traffic. The new encapsulation prevents many of the traditional “offloading” capabilities (e.g. checksum, TSO) from being performed at the NIC. ConnectX-3 Pro effectively addresses the increasing demand for an overlay network, enabling superior performance by introducing advanced NVGRE and VXLAN hardware offload engines that enable the traditional offloads to be performed on the encapsulated traffic. With ConnectX-3 Pro, data center operators can decouple the overlay network layer from the physical NIC performance, thus achieving native performance in the new network architecture.

I/O Virtualization — ConnectX-3 Pro SR-IOV technology provides dedicated adapter resources and guaranteed isolation and protection for virtual machines (VMs) within the server. I/O virtualization with ConnectX-3 Pro gives data center managers better server utilization while reducing cost, power, and cable complexity.

RDMA over Converged Ethernet — ConnectX-3 Pro utilizing IBTA RoCE technology delivers similar low-latency and high-performance over Ethernet networks. Leveraging Data Center Bridging capabilities, RoCE provides efficient low latency RDMA services over Layer 2 Ethernet. With link-level interoperability in existing Ethernet infrastructure, Network Administrators can leverage existing data center fabric management solutions.

Sockets Acceleration — Applications utilizing TCP/UDP/IP transport can achieve industry-leading throughput over 10/40/56GbE. The hardware-based stateless offload engines in ConnectX-3 Pro reduce the CPU overhead of IP packet transport. Socket acceleration software further increases performance for latency sensitive applications.

Storage Acceleration — A consolidated compute and storage network achieves significant cost-performance advantages over multi-fabric networks. Standard block and file access protocols can leverage Ethernet or RDMA for high-performance storage access.

Software Support – All Mellanox adapter cards are supported by Windows, Linux distributions, VMware, FreeBSD, Ubuntu, and Citrix XenServer. ConnectX-3 Pro adapters support OpenFabrics-based RDMA protocols and software and are compatible with configuration and management tools from OEMs and operating system vendors.