Create a network isolated Azure Kubernetes Service (AKS) cluster (Preview)
Organizations typically have strict security and compliance requirements to regulate egress (outbound) network traffic from a cluster and eliminate the risk of data exfiltration. By default, Azure Kubernetes Service (AKS) clusters have unrestricted outbound internet access. This level of network access allows nodes and services you run to access external resources as needed. If you want to restrict egress traffic, you must still keep a limited number of ports and addresses accessible so that healthy cluster maintenance tasks can run.
One solution to securing outbound addresses is using a firewall device that can control outbound traffic based on domain names.
Another solution, a network isolated AKS cluster (preview), simplifies setting up outbound restrictions for a cluster out of the box. A network isolated AKS cluster reduces the risk of data exfiltration or unintentional exposure of the cluster's public endpoints.
Important
AKS preview features are available on a self-service, opt-in basis. Previews are provided "as is" and "as available," and they're excluded from the service-level agreements and limited warranty. AKS previews are partially covered by customer support on a best-effort basis. As such, these features aren't meant for production use. For more information, see the following support articles:
Before you begin
- Read the conceptual overview of this feature, which provides an explanation of how network isolated clusters work. The overview article also:
- Explains the two access methods you can choose from in this article: AKS-managed ACR or bring your own (BYO) ACR.
- Describes the current limitations.
Use the Bash environment in Azure Cloud Shell. For more information, see Quickstart for Bash in Azure Cloud Shell.
If you prefer to run CLI reference commands locally, install the Azure CLI. If you're running on Windows or macOS, consider running Azure CLI in a Docker container. For more information, see How to run the Azure CLI in a Docker container.
If you're using a local installation, sign in to the Azure CLI by using the az login command. To finish the authentication process, follow the steps displayed in your terminal. For other sign-in options, see Sign in with the Azure CLI.
When you're prompted, install the Azure CLI extension on first use. For more information about extensions, see Use extensions with the Azure CLI.
Run az version to find the version and dependent libraries that are installed. To upgrade to the latest version, run az upgrade.
This article requires version 2.63.0 or later of the Azure CLI. If you're using Azure Cloud Shell, the latest version is already installed there.
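As an optional sanity check, you can confirm that the installed CLI meets the 2.63.0 minimum; this is a convenience sketch, not a required step:
# Print just the CLI version (the "azure-cli" key needs quoting in the JMESPath expression)
az version --query '"azure-cli"' --output tsv
# Upgrade in place if the version is older than 2.63.0
az upgrade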
Install the aks-preview Azure CLI extension version 9.0.0b2 or later.
If you don't already have the aks-preview extension, install it using the az extension add command.
az extension add --name aks-preview
If you already have the aks-preview extension, update it to make sure you have the latest version using the az extension update command.
az extension update --name aks-preview
Register the NetworkIsolatedClusterPreview feature flag using the az feature register command.
az feature register --namespace Microsoft.ContainerService --name NetworkIsolatedClusterPreview
Verify the registration status by using the az feature show command. It takes a few minutes for the status to show Registered:
az feature show --namespace Microsoft.ContainerService --name NetworkIsolatedClusterPreview
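Registration can take several minutes. If you'd rather not rerun the command manually, a simple polling loop such as this sketch waits until the state reports Registered (the 30-second interval is an arbitrary choice):
# Poll the feature flag until its state is "Registered"
while [ "$(az feature show --namespace Microsoft.ContainerService --name NetworkIsolatedClusterPreview --query properties.state --output tsv)" != "Registered" ]; do
  echo "Waiting for feature registration..."
  sleep 30
done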
Note
If you choose to create a network isolated cluster with API Server VNet Integration configured for private access to the API server, you also need to repeat the preceding steps to register the EnableAPIServerVnetIntegrationPreview feature flag. When the status reflects Registered, refresh the registration of the Microsoft.ContainerService and Microsoft.ContainerRegistry resource providers by using the az provider register command:
az provider register --namespace Microsoft.ContainerService
az provider register --namespace Microsoft.ContainerRegistry
If you're choosing the Bring your own (BYO) Azure Container Registry (ACR) option, you need to ensure the ACR meets the following requirements:
- Anonymous pull access must be enabled for the ACR.
- The ACR must use the Premium SKU service tier.
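If you're bringing an existing registry, you can verify both requirements before continuing; the following sketch assumes ${REGISTRY_NAME} and ${RESOURCE_GROUP} refer to your registry and its resource group:
# Show the SKU tier and whether anonymous pull is enabled
az acr show --name ${REGISTRY_NAME} --resource-group ${RESOURCE_GROUP} --query "{sku:sku.name, anonymousPull:anonymousPullEnabled}" --output table
# Enable anonymous pull if it isn't already enabled
az acr update --name ${REGISTRY_NAME} --resource-group ${RESOURCE_GROUP} --anonymous-pull-enabled true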
(Optional) If you want to use any optional AKS feature or add-on that requires outbound network access, this document contains the outbound network requirements for each feature. It also enumerates the features and add-ons that support Private Link integration for a secure connection from within the cluster's virtual network. If Private Link integration isn't available for one of these features, the cluster can be set up with a user-defined route table and an Azure Firewall based on the network rules and application rules required for that feature.
Note
The following AKS cluster extensions aren't supported yet on network isolated clusters:
Deploy a network isolated cluster with AKS-managed ACR
AKS creates, manages, and reconciles an ACR resource in this option. You don't need to assign any permissions or manage the ACR. AKS manages the cache rules, private link, and private endpoint used in the network isolated cluster.
Create a network isolated cluster
When creating a network isolated AKS cluster, you can choose one of the following private cluster modes: Private Link or API Server VNet Integration.
Regardless of the mode you select, you set the --bootstrap-artifact-source and --outbound-type parameters.
The --bootstrap-artifact-source parameter can be set to either Direct or Cache, corresponding to using direct MCR (NOT network isolated) or a private ACR (network isolated) for image pulls, respectively.
The --outbound-type parameter can be set to either none or block. If the outbound type is set to none, AKS doesn't set up any outbound connections for the cluster, allowing you to configure them on your own. If the outbound type is set to block, all outbound connections are blocked.
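The commands that follow reference shell variables such as ${RESOURCE_GROUP} and ${AKS_NAME}. As a minimal setup sketch, you might define them up front; the values below are placeholders you should replace with your own:
# Placeholder values used by the commands in this article
export RESOURCE_GROUP="myResourceGroup"
export AKS_NAME="myAKSCluster"
export LOCATION="eastus"
# Create the resource group if it doesn't already exist
az group create --name ${RESOURCE_GROUP} --location ${LOCATION}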
Private link
Create a private link-based network isolated AKS cluster by running the az aks create command with the --bootstrap-artifact-source, --enable-private-cluster, and --outbound-type parameters.
az aks create --resource-group ${RESOURCE_GROUP} --name ${AKS_NAME} --kubernetes-version 1.30.3 --bootstrap-artifact-source Cache --outbound-type none --network-plugin azure --enable-private-cluster
API Server VNet integration
Create a network isolated AKS cluster configured with API Server VNet Integration by running the az aks create command with the --bootstrap-artifact-source, --enable-private-cluster, --enable-apiserver-vnet-integration, and --outbound-type parameters.
az aks create --resource-group ${RESOURCE_GROUP} --name ${AKS_NAME} --kubernetes-version 1.30.3 --bootstrap-artifact-source Cache --outbound-type none --network-plugin azure --enable-private-cluster --enable-apiserver-vnet-integration
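Both commands above create a private cluster, so the API server isn't reachable over the public internet. As a minimal access sketch (assuming you run kubectl from a machine with private network connectivity to the cluster, such as a VM in the cluster's virtual network or a peered network):
# Merge the cluster credentials into your local kubeconfig
az aks get-credentials --resource-group ${RESOURCE_GROUP} --name ${AKS_NAME}
# Run from a machine that can reach the private API server endpoint
kubectl get nodes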
Update an existing AKS cluster to network isolated type
If you'd rather enable network isolation on an existing AKS cluster instead of creating a new cluster, use the az aks update command.
az aks update --resource-group ${RESOURCE_GROUP} --name ${AKS_NAME} --bootstrap-artifact-source Cache --outbound-type none
After the feature is enabled, any newly added nodes can bootstrap successfully without egress. When you enable network isolation on an existing cluster, keep in mind that you need to manually reimage all existing nodes.
az aks upgrade --resource-group ${RESOURCE_GROUP} --name ${AKS_NAME} --node-image-only
Important
Remember to reimage the cluster's node pools after you enable the network isolation mode for an existing cluster. Otherwise, the feature won't take effect for the cluster.
Deploy a network isolated cluster with bring your own ACR
AKS supports bringing your own (BYO) ACR. To support the BYO ACR scenario, you have to configure an ACR private endpoint and a private DNS zone before you create the AKS cluster.
The following steps show how to prepare these resources:
- Custom virtual network and subnets for AKS and ACR.
- ACR, ACR cache rule, private endpoint, and private DNS zone.
- Custom control plane identity and kubelet identity.
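The BYO ACR steps below reuse ${RESOURCE_GROUP}, ${AKS_NAME}, and ${LOCATION} and reference several additional shell variables. As an illustrative sketch, you could define them like this; all values are placeholders:
# Placeholder values for the additional variables used in the BYO ACR steps
export VNET_NAME="myVnet"
export AKS_SUBNET_NAME="aks-subnet"
export ACR_SUBNET_NAME="acr-subnet"
export APISERVER_SUBNET_NAME="apiserver-subnet"   # only needed for API Server VNet Integration
export REGISTRY_NAME="myregistry"                 # must be globally unique, 5-50 alphanumeric characters
export CLUSTER_IDENTITY_NAME="myClusterIdentity"
export KUBELET_IDENTITY_NAME="myKubeletIdentity"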
Step 1: Create the virtual network and subnets
The default outbound access for the AKS subnet must be false.
az group create --name ${RESOURCE_GROUP} --location ${LOCATION}
az network vnet create --resource-group ${RESOURCE_GROUP} --name ${VNET_NAME} --address-prefixes 192.168.0.0/16
az network vnet subnet create --name ${AKS_SUBNET_NAME} --vnet-name ${VNET_NAME} --resource-group ${RESOURCE_GROUP} --address-prefixes 192.168.1.0/24 --default-outbound-access false
SUBNET_ID=$(az network vnet subnet show --name ${AKS_SUBNET_NAME} --vnet-name ${VNET_NAME} --resource-group ${RESOURCE_GROUP} --query 'id' --output tsv)
az network vnet subnet create --name ${ACR_SUBNET_NAME} --vnet-name ${VNET_NAME} --resource-group ${RESOURCE_GROUP} --address-prefixes 192.168.2.0/24 --private-endpoint-network-policies Disabled
Step 2: Create the ACR and enable artifact cache
Create the ACR with the private link and anonymous pull access.
az acr create --resource-group ${RESOURCE_GROUP} --name ${REGISTRY_NAME} --sku Premium --public-network-enabled false
az acr update --resource-group ${RESOURCE_GROUP} --name ${REGISTRY_NAME} --anonymous-pull-enabled true
REGISTRY_ID=$(az acr show --name ${REGISTRY_NAME} -g ${RESOURCE_GROUP} --query 'id' --output tsv)
Create an ACR cache rule to allow users to cache MCR container images in the new ACR.
az acr cache create -n acr-cache-rule -r ${REGISTRY_NAME} -g ${RESOURCE_GROUP} --source-repo "mcr.microsoft.com/*" --target-repo "*"
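Optionally, you can confirm the cache rule was created; this check is a convenience sketch:
# Show the cache rule that maps mcr.microsoft.com/* into the registry
az acr cache show -n acr-cache-rule -r ${REGISTRY_NAME} -g ${RESOURCE_GROUP}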
Step 3: Create a private endpoint for the ACR
az network private-endpoint create --name myPrivateEndpoint --resource-group ${RESOURCE_GROUP} --vnet-name ${VNET_NAME} --subnet ${ACR_SUBNET_NAME} --private-connection-resource-id ${REGISTRY_ID} --group-id registry --connection-name myConnection
NETWORK_INTERFACE_ID=$(az network private-endpoint show --name myPrivateEndpoint --resource-group ${RESOURCE_GROUP} --query 'networkInterfaces[0].id' --output tsv)
REGISTRY_PRIVATE_IP=$(az network nic show --ids ${NETWORK_INTERFACE_ID} --query "ipConfigurations[?privateLinkConnectionProperties.requiredMemberName=='registry'].privateIPAddress" --output tsv)
DATA_ENDPOINT_PRIVATE_IP=$(az network nic show --ids ${NETWORK_INTERFACE_ID} --query "ipConfigurations[?privateLinkConnectionProperties.requiredMemberName=='registry_data_$LOCATION'].privateIPAddress" --output tsv)
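Both variables should now hold private IP addresses from the ACR subnet; a quick echo confirms the queries returned values before you create the DNS records:
# Sanity check: both values should be non-empty private IPs
echo "Registry endpoint private IP: ${REGISTRY_PRIVATE_IP}"
echo "Data endpoint private IP: ${DATA_ENDPOINT_PRIVATE_IP}"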
Step 4: Create a private DNS zone and add records
Create a private DNS zone named privatelink.azurecr.io. Add the records for the registry REST endpoint {REGISTRY_NAME}.azurecr.io and the registry data endpoint {REGISTRY_NAME}.{LOCATION}.data.azurecr.io.
az network private-dns zone create --resource-group ${RESOURCE_GROUP} --name "privatelink.azurecr.io"
az network private-dns link vnet create --resource-group ${RESOURCE_GROUP} --zone-name "privatelink.azurecr.io" --name MyDNSLink --virtual-network ${VNET_NAME} --registration-enabled false
az network private-dns record-set a create --name ${REGISTRY_NAME} --zone-name "privatelink.azurecr.io" --resource-group ${RESOURCE_GROUP}
az network private-dns record-set a add-record --record-set-name ${REGISTRY_NAME} --zone-name "privatelink.azurecr.io" --resource-group ${RESOURCE_GROUP} --ipv4-address ${REGISTRY_PRIVATE_IP}
az network private-dns record-set a create --name ${REGISTRY_NAME}.${LOCATION}.data --zone-name "privatelink.azurecr.io" --resource-group ${RESOURCE_GROUP}
az network private-dns record-set a add-record --record-set-name ${REGISTRY_NAME}.${LOCATION}.data --zone-name "privatelink.azurecr.io" --resource-group ${RESOURCE_GROUP} --ipv4-address ${DATA_ENDPOINT_PRIVATE_IP}
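To confirm both A records were created with the expected private IPs, you can list the record sets in the zone; this verification is optional:
# List the A records in the private DNS zone
az network private-dns record-set a list --zone-name "privatelink.azurecr.io" --resource-group ${RESOURCE_GROUP} --output table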
Step 5: Create control plane and kubelet identities
Control plane identity
az identity create --name ${CLUSTER_IDENTITY_NAME} --resource-group ${RESOURCE_GROUP}
CLUSTER_IDENTITY_RESOURCE_ID=$(az identity show --name ${CLUSTER_IDENTITY_NAME} --resource-group ${RESOURCE_GROUP} --query 'id' -o tsv)
CLUSTER_IDENTITY_PRINCIPAL_ID=$(az identity show --name ${CLUSTER_IDENTITY_NAME} --resource-group ${RESOURCE_GROUP} --query 'principalId' -o tsv)
Kubelet identity
az identity create --name ${KUBELET_IDENTITY_NAME} --resource-group ${RESOURCE_GROUP}
KUBELET_IDENTITY_RESOURCE_ID=$(az identity show --name ${KUBELET_IDENTITY_NAME} --resource-group ${RESOURCE_GROUP} --query 'id' -o tsv)
KUBELET_IDENTITY_PRINCIPAL_ID=$(az identity show --name ${KUBELET_IDENTITY_NAME} --resource-group ${RESOURCE_GROUP} --query 'principalId' -o tsv)
Grant AcrPull permissions for the Kubelet identity
az role assignment create --role AcrPull --scope ${REGISTRY_ID} --assignee-object-id ${KUBELET_IDENTITY_PRINCIPAL_ID} --assignee-principal-type ServicePrincipal
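Role assignments can take a few minutes to propagate. If you want to confirm the AcrPull assignment before creating the cluster, an optional listing like the following shows it:
# List role assignments scoped to the registry; the kubelet identity should appear with the AcrPull role
az role assignment list --scope ${REGISTRY_ID} --output table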
After you configure these resources, you can proceed to create the network isolated AKS cluster with BYO ACR.
Step 6: Create network isolated cluster using the BYO ACR
When creating a network isolated AKS cluster, you can choose one of the following private cluster modes: Private Link or API Server VNet Integration.
Regardless of the mode you select, you set the --bootstrap-artifact-source and --outbound-type parameters.
The --bootstrap-artifact-source parameter can be set to either Direct or Cache, corresponding to using direct MCR (NOT network isolated) or a private ACR (network isolated) for image pulls, respectively.
The --outbound-type parameter can be set to either none or block. If the outbound type is set to none, AKS doesn't set up any outbound connections for the cluster, allowing you to configure them on your own. If the outbound type is set to block, all outbound connections are blocked.
Private link
Create a private link-based network isolated cluster that accesses your ACR by running the az aks create command with the required parameters.
az aks create --resource-group ${RESOURCE_GROUP} --name ${AKS_NAME} --kubernetes-version 1.30.3 --vnet-subnet-id ${SUBNET_ID} --assign-identity ${CLUSTER_IDENTITY_RESOURCE_ID} --assign-kubelet-identity ${KUBELET_IDENTITY_RESOURCE_ID} --bootstrap-artifact-source Cache --bootstrap-container-registry-resource-id ${REGISTRY_ID} --outbound-type none --network-plugin azure --enable-private-cluster
API Server VNet integration
For a network isolated cluster with API server VNet integration, first create a subnet and assign the correct role with the following commands:
az network vnet subnet create --name ${APISERVER_SUBNET_NAME} --vnet-name ${VNET_NAME} --resource-group ${RESOURCE_GROUP} --address-prefixes 192.168.3.0/24
export APISERVER_SUBNET_ID=$(az network vnet subnet show --resource-group ${RESOURCE_GROUP} --vnet-name ${VNET_NAME} --name ${APISERVER_SUBNET_NAME} --query id -o tsv)
az role assignment create --scope ${APISERVER_SUBNET_ID} --role "Network Contributor" --assignee-object-id ${CLUSTER_IDENTITY_PRINCIPAL_ID} --assignee-principal-type ServicePrincipal
Create a network isolated AKS cluster configured with API Server VNet Integration that accesses your ACR by running the az aks create command with the required parameters.
az aks create --resource-group ${RESOURCE_GROUP} --name ${AKS_NAME} --kubernetes-version 1.30.3 --vnet-subnet-id ${SUBNET_ID} --assign-identity ${CLUSTER_IDENTITY_RESOURCE_ID} --assign-kubelet-identity ${KUBELET_IDENTITY_RESOURCE_ID} --bootstrap-artifact-source Cache --bootstrap-container-registry-resource-id ${REGISTRY_ID} --outbound-type none --network-plugin azure --enable-apiserver-vnet-integration --apiserver-subnet-id ${APISERVER_SUBNET_ID}
Update an existing AKS cluster
If you'd rather enable network isolation on an existing AKS cluster instead of creating a new cluster, use the az aks update command.
When creating the private endpoint and private DNS zone for the BYO ACR, use the existing virtual network and subnets of the existing AKS cluster. When you assign the AcrPull permission to the kubelet identity, use the existing kubelet identity of the existing AKS cluster.
To enable the network isolated feature on an existing AKS cluster, use the following command:
az aks update --resource-group ${RESOURCE_GROUP} --name ${AKS_NAME} --bootstrap-artifact-source Cache --bootstrap-container-registry-resource-id ${REGISTRY_ID} --outbound-type none
After the network isolated cluster feature is enabled, nodes in newly added node pools can bootstrap successfully without egress. You must reimage existing node pools so that newly scaled nodes can also bootstrap successfully. When you enable the feature on an existing cluster, you need to manually reimage all existing nodes.
az aks upgrade --resource-group ${RESOURCE_GROUP} --name ${AKS_NAME} --node-image-only
Important
Remember to reimage the cluster's node pools after you enable the network isolated cluster feature. Otherwise, the feature won't take effect for the cluster.
Update your ACR ID
It's possible to update the private ACR used with a network isolated AKS cluster. To identify the ACR resource ID, use the az aks show command.
az aks show --resource-group ${RESOURCE_GROUP} --name ${AKS_NAME}
Update the ACR ID by running the az aks update command with the --bootstrap-artifact-source and --bootstrap-container-registry-resource-id parameters.
az aks update --resource-group ${RESOURCE_GROUP} --name ${AKS_NAME} --bootstrap-artifact-source Cache --bootstrap-container-registry-resource-id <New BYO ACR resource ID>
When you update the ACR ID on an existing cluster, you need to manually reimage all existing nodes.
az aks upgrade --resource-group ${RESOURCE_GROUP} --name ${AKS_NAME} --node-image-only
Important
Remember to reimage the cluster's node pools after you update the ACR ID. Otherwise, the change won't take effect for the cluster.
Validate that the network isolated cluster feature is enabled
To validate that the network isolated cluster feature is enabled, use the az aks show command.
az aks show --resource-group ${RESOURCE_GROUP} --name ${AKS_NAME}
The following output shows that the feature is enabled, based on the values of the outboundType property (none or block) and the artifactSource property (Cache).
"kubernetesVersion": "1.30.3",
"name": "myAKSCluster"
"type": "Microsoft.ContainerService/ManagedClusters"
"properties": {
...
"networkProfile": {
...
"outboundType": "none",
...
},
...
"bootstrapProfile": {
"artifactSource": "Cache",
"containerRegistryId": "/subscriptions/my-subscription-id/my-node-resource-group-name/providers/Microsoft.ContainerRegistry/registries/my-registry-name"
},
...
}
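If you only need those two values, a JMESPath query can narrow the output; this is a convenience sketch rather than a required check:
# Show just the outbound type and the bootstrap artifact source
az aks show --resource-group ${RESOURCE_GROUP} --name ${AKS_NAME} --query "{outboundType:networkProfile.outboundType, artifactSource:bootstrapProfile.artifactSource}" --output table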
Disable network isolated cluster
Disable the network isolated cluster feature by running the az aks update command with the --bootstrap-artifact-source and --outbound-type parameters.
az aks update --resource-group ${RESOURCE_GROUP} --name ${AKS_NAME} --bootstrap-artifact-source Direct --outbound-type LoadBalancer
When you disable the feature on an existing cluster, you need to manually reimage all existing nodes.
az aks upgrade --resource-group ${RESOURCE_GROUP} --name ${AKS_NAME} --node-image-only
Important
Remember to reimage the cluster's node pools after you disable the network isolated cluster feature. Otherwise, the feature won't take effect for the cluster.
Next steps
In this article, you learned how to deploy a network isolated AKS cluster that restricts egress traffic without requiring you to manage the list of outbound ports and addresses yourself.
If you want to set up outbound restriction configuration using Azure Firewall, visit Control egress traffic using Azure Firewall in AKS.
If you want to restrict how pods communicate with each other and apply East-West traffic restrictions within the cluster, see Secure traffic between pods using network policies in AKS.