Build Your Private Cloud Foundation - Networking with Windows Server 2012

When planning network infrastructure for a Private Cloud, there are a number of technologies in Windows Server 2012 to consider and leverage.  At many of my events, IT Pros have lots of questions about when to use various network technologies in their overall Private Cloud design, such as:

  • Teaming modes: Switch independent, Static or Dynamic (LACP) network teaming?
  • Load balancing modes: Address hash or Hyper-V port hash?
  • Network isolation: VLAN tagging vs. Windows Network Virtualization (aka WNV or NVGRE)?

In this article, I’ll walk through a sample architecture for building a Private Cloud network foundation and discuss “when” to leverage each of these components in your overall design.  At the end of this article, I’ll also provide additional references for “how” to implement each configuration item.

  • Did you miss the other articles in this series?

    DO IT:
    Get the FULL SERIES of articles in Build Your Private Cloud in a Month at https://aka.ms/BuildYourCloud

Private Cloud Foundational Architecture

When building a Private Cloud, most environments find that they really have two main types of network connections from their Hyper-V hosts: Datacenter Networks and Client VM Networks.

Figure: Sample Network Architecture for Private Cloud Foundation – Datacenter and Client VM networks

Datacenter Networks

Datacenter networks are the “back-end” networks that Hyper-V hosts use for:

  • Connecting to storage
  • Live migration of running VMs
  • Internal Cluster networks
  • Host management networks
  • Data backup networks

These datacenter networks tend to be relatively static in nature – once the “back-end” network architecture is implemented, it doesn’t usually change drastically until another major network upgrade window occurs.  Commonly, these networks will connect within the datacenter via a set of intelligent core network switches that have a backplane optimized for a high level of concurrency between network ports.

For datacenter networks of this nature, I see most organizations trending toward dual 1GbE or 10GbE NICs in a teamed configuration for load balancing and redundancy.  10GbE NICs are growing in popularity for new installations because they provide higher bandwidth, consolidated physical connections, and advanced features such as RDMA, which can be leveraged for high-speed data transfers across storage networks using the SMB 3.0 network protocol.  When implementing a NIC team for datacenter networks, you will generally see the best load balancing and overall performance by configuring your team with:

  • Teaming mode: Use Static or Dynamic (LACP) teaming modes with intelligent core switches that provide support for teaming

  • Load balancing mode: Use Address hash as the load balancing mode to gain the best level of bidirectional load balancing

  • Network isolation: Use separate NIC team interfaces, with an individual VLAN configured for each type of datacenter network traffic
Figure: Datacenter Networks – Teaming Parameters
Figure: Datacenter Networks – Separate Team Interfaces for each VLAN

This configuration works well for static datacenter networks because it provides resilient, high-speed network connectivity while leveraging VLANs to isolate major categories of back-end network traffic for security and QoS.  VLANs and VLAN tags are well understood in modern datacenters, and this approach implements a network architecture that integrates well with other hardware devices that may exist in the datacenter.
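
To make this concrete, here’s a minimal PowerShell sketch of the datacenter team configuration described above.  The team name, adapter names and VLAN IDs are hypothetical placeholders, and your core switch ports would need a matching LACP configuration:

    # Create an LACP team across two datacenter NICs with address hashing
    # (TransportPorts is the PowerShell name for the "Address Hash" option)
    New-NetLbfoTeam -Name "DCTeam" -TeamMembers "DC-NIC1","DC-NIC2" `
        -TeamingMode Lacp -LoadBalancingAlgorithm TransportPorts

    # Add a separate team interface for each type of datacenter traffic,
    # each tagged with its own VLAN
    Add-NetLbfoTeamNic -Team "DCTeam" -Name "Storage"       -VlanID 10
    Add-NetLbfoTeamNic -Team "DCTeam" -Name "LiveMigration" -VlanID 20
    Add-NetLbfoTeamNic -Team "DCTeam" -Name "Cluster"       -VlanID 30
    Add-NetLbfoTeamNic -Team "DCTeam" -Name "Backup"        -VlanID 40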

Client VM Networks

If datacenter networks are “back-end” networks, think of Client VM Networks as the “front-end” networks that carry client-server traffic between client devices and VMs.  In contrast to the relatively static datacenter networks discussed above, Client VM Networks tend to be much more dynamic in nature.  As new sets of applications are brought online for “customers”, which could be internal or external application consumers, it is often advantageous to be able to isolate traffic from collections of applications for security, routing or QoS purposes.  However, minimizing network router and switch reconfiguration for these dynamic networks is also important to delivering a maintainable Private Cloud solution.

While VLANs could be used to isolate Client VM traffic from different applications, there are a couple of common limitations that you’ll likely encounter in a larger environment that hosts lots of applications for lots of “customers”.  First, managing large numbers of VLANs in an enterprise network can be complex – a level of complexity that often needs to “touch” many Layer-2 devices with every VLAN change to handle traffic efficiently.  Second, most network switches have finite limits on the number of VLANs they can handle concurrently – even though many intelligent switches support VLAN IDs up to the maximum value of 4,094, most switches cannot efficiently process traffic from more than 1,000 VLANs concurrently.

To provide resiliency, load balancing and traffic isolation for these dynamic Client VM Networks, you will generally see the best results by configuring your network teams for:

  • Teaming mode: Switch independent – this mode works with even non-intelligent Layer-2 network switches and requires no configuration changes on intelligent network switches.

  • Load balancing mode: Hyper-V port – this mode distributes the virtual MAC addresses of each VM’s virtual NICs evenly across all available physical network adapters in the team.  By doing so, you gain both inbound and outbound load balancing of aggregate VM network traffic on each Hyper-V host.  However, each VM virtual NIC is limited to the maximum bandwidth available via a single physical adapter in the team.

  • Network isolation: Windows Network Virtualization (aka WNV or NVGRE) – Windows Network Virtualization isolates the traffic on each VM network by using GRE tunneling between Hyper-V hosts.  As such, we can use this configuration to effectively separate traffic between Hyper-V hosts on a VM-by-VM or application-by-application basis without any configuration changes on intermediary switches … and no VLANs to manage!  (See the configuration sketches below the figures.)

    NOTE: If your physical network configuration absolutely requires VLANs to isolate VM traffic, you can certainly still do so – Hyper-V supports VLAN tags in the VM Settings of each VM.
Figure: Client VM Networks – Teaming Parameters
Figure: Client VM Networks – Only the default VLAN configured, because WNV is used for isolation
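
Here’s a minimal PowerShell sketch of the Client VM network team and virtual switch (team, switch and adapter names are again hypothetical placeholders):

    # Create a switch-independent team with Hyper-V port load balancing
    New-NetLbfoTeam -Name "VMTeam" -TeamMembers "VM-NIC1","VM-NIC2" `
        -TeamingMode SwitchIndependent -LoadBalancingAlgorithm HyperVPort

    # Bind an external Hyper-V virtual switch to the team; keeping the
    # management OS off this switch leaves it dedicated to client VM traffic
    New-VMSwitch -Name "ClientVMSwitch" -NetAdapterName "VMTeam" -AllowManagementOS $false

If you do need the VLAN fallback described in the NOTE above, Set-VMNetworkAdapterVlan -VMName &lt;name&gt; -Access -VlanId &lt;id&gt; applies a VLAN tag to an individual VM’s virtual NIC instead.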
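
And a heavily simplified sketch of Windows Network Virtualization, assuming a hypothetical tenant VM “TenantVM1” with customer address 10.0.0.5, virtual subnet ID 5001 and a host provider address of 192.168.1.11.  In a real deployment these lookup records are typically created and kept in sync by System Center 2012 SP1 Virtual Machine Manager rather than by hand:

    # Tag the VM's virtual NIC with an NVGRE virtual subnet ID (run on its host)
    Set-VMNetworkAdapter -VMName "TenantVM1" -VirtualSubnetId 5001

    # Tell each participating host where this customer address lives on the
    # provider (physical) network; every host in the VM network needs a record
    New-NetVirtualizationLookupRecord -CustomerAddress "10.0.0.5" `
        -ProviderAddress "192.168.1.11" -VirtualSubnetID 5001 `
        -MACAddress "00155D010105" -Rule "TranslationMethodEncap"

Provider addresses and customer routes (New-NetVirtualizationProviderAddress, New-NetVirtualizationCustomerRoute) would also need to be configured on each host for end-to-end connectivity.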

OK! I’m ready for the “How” …

Once you’ve decided when to use each of the teaming modes, load balancing modes and network isolation options in your Private Cloud network foundation, leverage the great step-by-step resources in the full series at https://aka.ms/BuildYourCloud to walk through how to configure each component.

What’s Next? Check out the rest of the series!

This article is part of a series of articles on Building Your Private Cloud with Windows Server 2012, Hyper-V Server 2012, System Center 2012 SP1 and Windows Azure.  Check out the complete series at https://aka.ms/BuildYourCloud.

And, as you read along in this series, be sure to download each product so that you’re prepared to follow along with the configuration steps as you go …
