Network ATC overview

Applies to: Azure Stack HCI, versions 23H2 and 22H2; Windows Server 2025

Deployment and operation of Azure Stack HCI and Windows Server cluster networking can be a complex and error-prone process. Because of the configuration flexibility provided by the host networking stack, there are many moving parts that can easily be misconfigured or overlooked. Staying up to date with the latest best practices is also a challenge, because the underlying technologies are continuously improved. Configuration consistency across cluster nodes is equally important, because it leads to a more reliable experience. Network ATC addresses these challenges by applying a consistent, Microsoft-validated configuration across all nodes of the cluster. Because Network ATC is designed for clusters, Windows Server deployments require Windows Server Datacenter edition and the Failover Clustering feature. Network ATC is the complete product name, not an acronym.

Network ATC can help:

  • Reduce host networking deployment time, complexity, and errors
  • Deploy the latest Microsoft validated and supported best practices
  • Ensure configuration consistency across the cluster
  • Eliminate configuration drift

Features

Network ATC provides the following features:

  • Windows Admin Center deployment: Network ATC is integrated with Windows Admin Center to provide an easy-to-use experience for deploying host networking.
  • Network symmetry: Network ATC configures and optimizes all adapters identically based on your configuration. Beginning with Azure Stack HCI 22H2, Network ATC also verifies the make, model, and speed of your network adapter to ensure network symmetry across all nodes of the cluster.
  • Storage adapter configuration: Network ATC automatically configures your storage network. Specifically, it:

    • Sets physical adapter properties

    • Configures Data Center Bridging (DCB)

    • Determines whether a virtual switch is needed, and if so, creates the required virtual adapters

    • Maps the virtual adapters to the appropriate physical adapters

    • Assigns VLANs

    • Assigns IP addresses for storage adapters (beginning with Azure Stack HCI, version 22H2)

  • Cluster network naming: Network ATC automatically names the cluster networks based on their usage. For example, the storage network might be named storage_compute(Storage_VLAN711).

  • Live Migration guidelines: Network ATC keeps you up to date with the recommended Live Migration settings for your operating system version (you can always override them). Network ATC manages the following Live Migration settings:

    • The maximum number of simultaneous live migrations

    • The live migration network

    • The live migration transport

    • The maximum amount of SMBDirect (RDMA) bandwidth used for live migration

  • Proxy configuration: Network ATC can help you configure all cluster nodes with the same proxy configuration information if your environment requires it.

  • Stretch S2D cluster support: Network ATC deploys the configuration required for the Storage Replica networks. Because these adapters need to route across subnets, Network ATC doesn't assign any IP addresses; you need to assign them yourself.

  • Scope detection: Beginning with Azure Stack HCI 22H2, Network ATC automatically detects whether you're running a command on a cluster node. This means you don't need to use the -ClusterName parameter; Network ATC automatically detects the cluster you're on.
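
For example, here's a minimal sketch of checking what Network ATC deployed while signed in to a cluster node. The intent name MyIntent is a placeholder; because of scope detection, -ClusterName can be omitted when the cmdlets run on a cluster node.

    # List the intents Network ATC manages (run on a cluster node, so -ClusterName isn't needed)
    Get-NetIntent

    # Check the provisioning status of a specific intent ('MyIntent' is a placeholder name)
    Get-NetIntentStatus -Name MyIntent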

To learn more about the features in Network ATC, see Network ATC: What's coming.

Terminology

To understand Network ATC, you need to understand some basic concepts. Here's some terminology used by Network ATC:

Intent: An intent is a definition of how you intend to use the physical adapters in your system. An intent has a friendly name, identifies one or more physical adapters, and includes one or more intent types.

An individual physical adapter can only be included in one intent. By default, an adapter doesn't have an intent (there's no special status or property given to adapters that don't have an intent). You can have multiple intents; the number of adapters in your system limits the number of intents you have.
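
For example, here's a minimal sketch of defining a single, fully converged intent with the Add-NetIntent cmdlet. The intent name and adapter names are placeholders for your environment.

    # Define one intent that uses two physical adapters for management, compute, and storage traffic
    # ('ConvergedIntent', 'pNIC01', and 'pNIC02' are placeholder names)
    Add-NetIntent -Name ConvergedIntent -Management -Compute -Storage -AdapterName pNIC01, pNIC02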

Intent type: Every intent requires one or more intent types. Any combination of intent types can be specified for a single intent. Here are the currently supported intent types and the maximum number of intents each can be defined in:

  • Management: Adapters are used for management access to nodes. This intent type can be defined in a maximum of one intent.
  • Compute: Adapters are used to connect virtual machine (VM) traffic to the physical network. This intent type can be defined in an unlimited number of intents.
  • Storage: Adapters are used for SMB traffic, including Storage Spaces Direct. This intent type can be defined in a maximum of one intent.
  • Stretch: Adapters are set up similarly to a storage intent, except RDMA can't be used with stretch intents. This intent type can be defined in a maximum of one intent.
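
As an illustration, a node with four physical adapters might split its traffic across two intents, which also respects the one-intent-per-adapter rule. The intent and adapter names below are placeholders.

    # Intent 1: management and compute traffic on the first two adapters
    Add-NetIntent -Name MgmtCompute -Management -Compute -AdapterName pNIC01, pNIC02

    # Intent 2: storage (SMB) traffic on the remaining two adapters
    Add-NetIntent -Name Storage -Storage -AdapterName pNIC03, pNIC04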

Override: By default, Network ATC deploys the most common configuration, asking for the smallest amount of user input. Overrides allow you to customize your deployment if necessary. For example, you might choose to modify the VLANs used for storage adapters from the defaults.

Network ATC allows you to modify any configuration that the OS allows you to modify. However, the OS prevents some changes after deployment, and Network ATC respects those limitations. For example, a virtual switch doesn't allow SR-IOV to be changed after the switch has been deployed.
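
As a sketch of how you might submit an override, the following example disables RDMA (NetworkDirect) on the physical adapters and sets custom storage VLANs. The intent name, adapter names, and VLAN IDs are placeholders; verify the override object and parameters with Get-Help Add-NetIntent in your environment.

    # Create an override object for physical adapter properties and disable RDMA (NetworkDirect)
    $adapterOverride = New-NetIntentAdapterPropertyOverrides
    $adapterOverride.NetworkDirect = 0

    # Submit a storage intent with custom VLANs and the adapter property override
    # (intent name, adapter names, and VLAN IDs are placeholders)
    Add-NetIntent -Name Storage -Storage -AdapterName pNIC03, pNIC04 `
        -StorageVlans 711, 712 -AdapterPropertyOverrides $adapterOverride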

Deployment example

The following video provides an overview of Network ATC using the Copy-NetIntent command to copy an intent from one cluster to another. To learn more about the demonstration, see our Tech Community article Deploying 100s of production clusters in minutes.
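
A minimal sketch of that idea is shown below. The cluster names are placeholders, and the parameter names -SourceClusterName and -DestinationClusterName are assumptions for illustration only, so confirm the exact syntax with Get-Help Copy-NetIntent.

    # Copy the intent configuration from an existing cluster to a new one.
    # Cluster names are placeholders, and the parameter names are assumptions; check Get-Help Copy-NetIntent.
    Copy-NetIntent -SourceClusterName HCICluster01 -DestinationClusterName HCICluster02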

Next steps

To get started with Network ATC, review the following articles: