Quickstart: Deploy a private Azure Kubernetes Service (AKS) Automatic cluster (preview) in a custom virtual network
Applies to: ✔️ AKS Automatic (preview)
Azure Kubernetes Service (AKS) Automatic (preview) provides the easiest managed Kubernetes experience for developers, DevOps engineers, and platform engineers. Ideal for modern and AI applications, AKS Automatic automates AKS cluster setup and operations and embeds best practice configurations. Users of any skill level can benefit from the security, performance, and dependability of AKS Automatic for their applications.
In this quickstart, you learn to:
- Create a virtual network.
- Create a managed identity with permissions over the virtual network.
- Deploy a private AKS Automatic cluster in the virtual network.
- Connect to the private cluster.
- Run a sample multi-container application with a group of microservices and web front ends simulating a retail scenario.
Before you begin
This quickstart assumes a basic understanding of Kubernetes concepts. For more information, see Kubernetes core concepts for Azure Kubernetes Service (AKS).
Use the Bash environment in Azure Cloud Shell. For more information, see Quickstart for Bash in Azure Cloud Shell.
If you prefer to run CLI reference commands locally, install the Azure CLI. If you're running on Windows or macOS, consider running Azure CLI in a Docker container. For more information, see How to run the Azure CLI in a Docker container.
If you're using a local installation, sign in to the Azure CLI by using the az login command. To finish the authentication process, follow the steps displayed in your terminal. For other sign-in options, see Sign in with the Azure CLI.
When you're prompted, install the Azure CLI extension on first use. For more information about extensions, see Use extensions with the Azure CLI.
Run az version to find the version and dependent libraries that are installed. To upgrade to the latest version, run az upgrade.
- This article requires version 2.68 or later of the Azure CLI. If you're using Azure Cloud Shell, the latest version is already installed there.
- This article requires the `aks-preview` Azure CLI extension version 13.0.0b3 or later.
- If you have multiple Azure subscriptions, select the appropriate subscription ID in which the resources should be billed using the az account set command.
- Register the `AutomaticSKUPreview` feature in your Azure subscription.
- The identity creating the cluster should also have the following permissions on the resource group:
  - `Microsoft.Authorization/policyAssignments/write`
  - `Microsoft.Authorization/policyAssignments/read`
- AKS Automatic clusters with custom virtual networks only support user-assigned managed identities.
- AKS Automatic clusters with custom virtual networks don't support the Managed NAT Gateway outbound type.
- AKS Automatic clusters require deployment in Azure regions that support at least three availability zones.
- To deploy a Bicep file, you need write access on the resources you create and access to all operations on the `Microsoft.Resources/deployments` resource type. For example, to create a virtual machine, you need `Microsoft.Compute/virtualMachines/write` and `Microsoft.Resources/deployments/*` permissions. For a list of roles and permissions, see Azure built-in roles.
When using a custom virtual network with AKS Automatic, you must create and delegate an API server subnet to `Microsoft.ContainerService/managedClusters`, which grants the AKS service permissions to inject the API server pods and internal load balancer into that subnet. You can't use the subnet for any other workloads, but you can use it for multiple AKS clusters located in the same virtual network. The minimum supported API server subnet size is a /28.
The cluster identity needs Network Contributor permissions on the virtual network. Insufficient permissions on the API server subnet can cause a provisioning failure, and insufficient permissions on the virtual network can cause Node Auto Provisioning scaling failures.
Warning
An AKS cluster reserves at least 9 IPs in the subnet address space. Running out of IP addresses may prevent API server scaling and cause an API server outage.
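As a rough sanity check, the arithmetic behind the /28 minimum can be sketched locally. This sketch assumes the standard Azure behavior of reserving five addresses in every subnet, plus the at-least-9 IPs AKS reserves as noted above:

```shell
# Capacity sketch for a /28 API server subnet.
# Assumes Azure's usual 5 reserved addresses per subnet plus the
# at-least-9 IPs AKS reserves for the API server.
prefix=28
total=$(( 1 << (32 - prefix) ))          # 16 addresses in a /28
azure_reserved=5
aks_reserved=9
headroom=$(( total - azure_reserved - aks_reserved ))
echo "/${prefix}: ${total} addresses, ${headroom} spare after reservations"
```

A larger subnet, such as a /27, leaves correspondingly more headroom for API server scaling.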
Important
AKS Automatic tries to dynamically select a virtual machine size for the system node pool based on the capacity available in the subscription. Make sure your subscription has quota for 16 vCPUs of any of the following sizes in the region you're deploying the cluster to: Standard_D4pds_v5, Standard_D4lds_v5, Standard_D4ads_v5, Standard_D4ds_v5, Standard_D4d_v5, Standard_D4d_v4, Standard_DS3_v2, Standard_DS12_v2. You can view quotas for specific VM families and submit quota increase requests through the Azure portal.
Install the aks-preview Azure CLI extension
Important
AKS preview features are available on a self-service, opt-in basis. Previews are provided "as is" and "as available," and they're excluded from the service-level agreements and limited warranty. AKS previews are partially covered by customer support on a best-effort basis. As such, these features aren't meant for production use. For more information, see the following support articles:
To install the aks-preview extension, run the following command:
az extension add --name aks-preview
Run the following command to update to the latest version of the extension released:
az extension update --name aks-preview
Register the feature flags
To use AKS Automatic in preview, register the following flag using the az feature register command.
az feature register --namespace Microsoft.ContainerService --name AutomaticSKUPreview
Verify the registration status by using the az feature show command. It takes a few minutes for the status to show Registered:
az feature show --namespace Microsoft.ContainerService --name AutomaticSKUPreview
When the status reflects Registered, refresh the registration of the Microsoft.ContainerService resource provider by using the az provider register command:
az provider register --namespace Microsoft.ContainerService
Define variables
Define the following variables that will be used in the subsequent steps.
RG_NAME=automatic-rg
VNET_NAME=automatic-vnet
CLUSTER_NAME=automatic
IDENTITY_NAME=automatic-uami
LOCATION=eastus
SUBSCRIPTION_ID=$(az account show --query id -o tsv)
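Later commands in this quickstart inline long Azure resource IDs that all follow the same pattern. As an optional convenience, you can precompute them from the variables above; this is a local sketch only, and the subscription ID shown is a placeholder:

```shell
# Precompute the resource IDs used by later az commands.
# The subscription ID below is a placeholder.
SUBSCRIPTION_ID="00000000-0000-0000-0000-000000000000"
RG_NAME=automatic-rg
VNET_NAME=automatic-vnet
IDENTITY_NAME=automatic-uami
VNET_ID="/subscriptions/${SUBSCRIPTION_ID}/resourceGroups/${RG_NAME}/providers/Microsoft.Network/virtualNetworks/${VNET_NAME}"
API_SUBNET_ID="${VNET_ID}/subnets/apiServerSubnet"
CLUSTER_SUBNET_ID="${VNET_ID}/subnets/clusterSubnet"
IDENTITY_ID="/subscriptions/${SUBSCRIPTION_ID}/resourcegroups/${RG_NAME}/providers/Microsoft.ManagedIdentity/userAssignedIdentities/${IDENTITY_NAME}"
echo "${API_SUBNET_ID}"
```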
Create a resource group
An Azure resource group is a logical group in which Azure resources are deployed and managed.
Create a resource group using the az group create command.
az group create -n ${RG_NAME} -l ${LOCATION}
The following sample output resembles successful creation of the resource group:
{
"id": "/subscriptions/<guid>/resourceGroups/automatic-rg",
"location": "eastus",
"managedBy": null,
"name": "automatic-rg",
"properties": {
"provisioningState": "Succeeded"
},
"tags": null
}
Create a virtual network
Create a virtual network using the az network vnet create command. Create an API server subnet and cluster subnet using the az network vnet subnet create command. The API server subnet needs a delegation to `Microsoft.ContainerService/managedClusters`.
az network vnet create --name ${VNET_NAME} \
--resource-group ${RG_NAME} \
--location ${LOCATION} \
--address-prefixes 172.19.0.0/16
az network vnet subnet create --resource-group ${RG_NAME} \
--vnet-name ${VNET_NAME} \
--name apiServerSubnet \
--delegations Microsoft.ContainerService/managedClusters \
--address-prefixes 172.19.0.0/28
az network vnet subnet create --resource-group ${RG_NAME} \
--vnet-name ${VNET_NAME} \
--name clusterSubnet \
--address-prefixes 172.19.1.0/24
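Because the API server subnet can't be shared with other workloads, it can help to verify locally that the two prefixes above occupy disjoint ranges. A shell-only sketch of that check:

```shell
# Check that the API server subnet (172.19.0.0/28) and the cluster
# subnet (172.19.1.0/24) occupy disjoint address ranges.
ip_to_int() {
  save_ifs=$IFS; IFS=.
  set -- $1
  IFS=$save_ifs
  echo $(( ($1 << 24) + ($2 << 16) + ($3 << 8) + $4 ))
}
api_start=$(ip_to_int 172.19.0.0)
api_end=$(( api_start + (1 << (32 - 28)) - 1 ))
cls_start=$(ip_to_int 172.19.1.0)
cls_end=$(( cls_start + (1 << (32 - 24)) - 1 ))
if [ "$api_end" -lt "$cls_start" ] || [ "$cls_end" -lt "$api_start" ]; then
  echo "subnets do not overlap"
else
  echo "subnets overlap"
fi
```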
All traffic within the virtual network is allowed by default. However, if you added network security group (NSG) rules to restrict traffic between different subnets, ensure that the NSG security rules permit the following types of communication:

| Destination | Source | Protocol | Port | Use |
|---|---|---|---|---|
| API server subnet CIDR | Cluster subnet | TCP | 443 and 4443 | Required to enable communication between nodes and the API server. |
| API server subnet CIDR | Azure Load Balancer | TCP | 9988 | Required to enable communication between the Azure Load Balancer and the API server. You can also enable all communication between the Azure Load Balancer and the API server subnet CIDR. |
Create a managed identity and give it permissions on the virtual network
Create a managed identity using the az identity create command and retrieve the client ID. Assign the Network Contributor role on the virtual network to the managed identity using the az role assignment create command.
az identity create \
--resource-group ${RG_NAME} \
--name ${IDENTITY_NAME} \
--location ${LOCATION}
IDENTITY_CLIENT_ID=$(az identity show --resource-group ${RG_NAME} --name ${IDENTITY_NAME} --query clientId -o tsv)
az role assignment create \
--scope "/subscriptions/${SUBSCRIPTION_ID}/resourceGroups/${RG_NAME}/providers/Microsoft.Network/virtualNetworks/${VNET_NAME}" \
--role "Network Contributor" \
--assignee ${IDENTITY_CLIENT_ID}
Create a private AKS Automatic cluster in a custom virtual network
To create a private AKS Automatic cluster, use the az aks create command. Note the use of the `--enable-private-cluster` flag.
az aks create \
--resource-group ${RG_NAME} \
--name ${CLUSTER_NAME} \
--location ${LOCATION} \
--apiserver-subnet-id "/subscriptions/${SUBSCRIPTION_ID}/resourceGroups/${RG_NAME}/providers/Microsoft.Network/virtualNetworks/${VNET_NAME}/subnets/apiServerSubnet" \
--vnet-subnet-id "/subscriptions/${SUBSCRIPTION_ID}/resourceGroups/${RG_NAME}/providers/Microsoft.Network/virtualNetworks/${VNET_NAME}/subnets/clusterSubnet" \
--assign-identity "/subscriptions/${SUBSCRIPTION_ID}/resourcegroups/${RG_NAME}/providers/Microsoft.ManagedIdentity/userAssignedIdentities/${IDENTITY_NAME}" \
--sku automatic \
--enable-private-cluster \
--no-ssh-key
After a few minutes, the command completes and returns JSON-formatted information about the cluster.
Connect to the cluster
When an AKS Automatic cluster is created as a private cluster, the API server endpoint has no public IP address. To manage the API server, for example via `kubectl`, you need to connect through a machine that has access to the cluster's Azure virtual network. There are several options for establishing network connectivity to the private cluster. For more information, see Options for connecting to the private cluster.
To manage a Kubernetes cluster, use the Kubernetes command-line client, kubectl. `kubectl` is already installed if you use Azure Cloud Shell. To install `kubectl` locally, run the az aks install-cli command. AKS Automatic clusters are configured with Microsoft Entra ID for Kubernetes role-based access control (RBAC). When you create a cluster using the Azure CLI, your user is assigned the `Azure Kubernetes Service RBAC Cluster Admin` built-in role.
Configure `kubectl` to connect to your Kubernetes cluster using the az aks get-credentials command. This command downloads credentials and configures the Kubernetes CLI to use them.
az aks get-credentials --resource-group ${RG_NAME} --name ${CLUSTER_NAME}
Verify the connection to your cluster using the kubectl get command. This command returns a list of the cluster nodes.
kubectl get nodes
The following sample output shows how you're asked to log in.
To sign in, use a web browser to open the page https://microsoft.com/devicelogin and enter the code AAAAAAAAA to authenticate.
After you log in, the following sample output shows the managed system node pools. Make sure the node status is Ready.
NAME STATUS ROLES AGE VERSION
aks-nodepool1-13213685-vmss000000 Ready agent 2m26s v1.28.5
aks-nodepool1-13213685-vmss000001 Ready agent 2m26s v1.28.5
aks-nodepool1-13213685-vmss000002 Ready agent 2m26s v1.28.5
Create a resource group
An Azure resource group is a logical group in which Azure resources are deployed and managed. When you create a resource group, you're prompted to specify a location. This location is the storage location of your resource group metadata and where your resources run in Azure if you don't specify another region during resource creation.
Create a resource group using the az group create command.
az group create --name <resource-group> --location <location>
The following sample output resembles successful creation of the resource group:
{
"id": "/subscriptions/<guid>/resourceGroups/myResourceGroup",
"location": "eastus",
"managedBy": null,
"name": "myResourceGroup",
"properties": {
"provisioningState": "Succeeded"
},
"tags": null
}
Create a virtual network
This Bicep file defines a virtual network.
@description('The location of the managed cluster resource.')
param location string = resourceGroup().location
@description('The name of the virtual network.')
param vnetName string = 'aksAutomaticVnet'
@description('The address prefix of the virtual network.')
param addressPrefix string = '172.19.0.0/16'
@description('The name of the API server subnet.')
param apiServerSubnetName string = 'apiServerSubnet'
@description('The subnet prefix of the API server subnet.')
param apiServerSubnetPrefix string = '172.19.0.0/28'
@description('The name of the cluster subnet.')
param clusterSubnetName string = 'clusterSubnet'
@description('The subnet prefix of the cluster subnet.')
param clusterSubnetPrefix string = '172.19.1.0/24'
// Virtual network with an API server subnet and a cluster subnet
resource virtualNetwork 'Microsoft.Network/virtualNetworks@2023-09-01' = {
name: vnetName
location: location
properties: {
addressSpace: {
addressPrefixes: [ addressPrefix ]
}
subnets: [
{
name: apiServerSubnetName
properties: {
addressPrefix: apiServerSubnetPrefix
}
}
{
name: clusterSubnetName
properties: {
addressPrefix: clusterSubnetPrefix
}
}
]
}
}
output apiServerSubnetId string = resourceId('Microsoft.Network/virtualNetworks/subnets', vnetName, apiServerSubnetName)
output clusterSubnetId string = resourceId('Microsoft.Network/virtualNetworks/subnets', vnetName, clusterSubnetName)
Save the Bicep file virtualNetwork.bicep to your local computer.
Important
The Bicep file sets the `vnetName` param to aksAutomaticVnet, the `addressPrefix` param to 172.19.0.0/16, the `apiServerSubnetPrefix` param to 172.19.0.0/28, and the `clusterSubnetPrefix` param to 172.19.1.0/24. If you want to use different values, make sure to update the strings to your preferred values.
Deploy the Bicep file using the Azure CLI.
az deployment group create --resource-group <resource-group> --template-file virtualNetwork.bicep
All traffic within the virtual network is allowed by default. However, if you added network security group (NSG) rules to restrict traffic between different subnets, ensure that the NSG security rules permit the following types of communication:

| Destination | Source | Protocol | Port | Use |
|---|---|---|---|---|
| API server subnet CIDR | Cluster subnet | TCP | 443 and 4443 | Required to enable communication between nodes and the API server. |
| API server subnet CIDR | Azure Load Balancer | TCP | 9988 | Required to enable communication between the Azure Load Balancer and the API server. You can also enable all communication between the Azure Load Balancer and the API server subnet CIDR. |
Create a managed identity
This Bicep file defines a user assigned managed identity.
param location string = resourceGroup().location
param uamiName string = 'aksAutomaticUAMI'
resource userAssignedManagedIdentity 'Microsoft.ManagedIdentity/userAssignedIdentities@2023-01-31' = {
name: uamiName
location: location
}
output uamiId string = userAssignedManagedIdentity.id
output uamiPrincipalId string = userAssignedManagedIdentity.properties.principalId
output uamiClientId string = userAssignedManagedIdentity.properties.clientId
Save the Bicep file uami.bicep to your local computer.
Important
The Bicep file sets the `uamiName` param to aksAutomaticUAMI. If you want to use a different identity name, make sure to update the string to your preferred name.
Deploy the Bicep file using the Azure CLI.
az deployment group create --resource-group <resource-group> --template-file uami.bicep
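The role assignment step that follows needs the identity's principal ID, which you can read from this deployment's outputs, for example with `az deployment group show --resource-group <resource-group> --name uami --query properties.outputs` (the deployment name typically defaults to the template file name). The outputs JSON has the shape shown below; the sed extraction and the GUID are an illustrative sketch, not part of the quickstart:

```shell
# Sample shape of `--query properties.outputs -o json`; the GUID is a placeholder.
outputs='{"uamiPrincipalId": {"type": "String", "value": "11111111-2222-3333-4444-555555555555"}}'
principal_id=$(printf '%s' "$outputs" | sed -n 's/.*"uamiPrincipalId".*"value": *"\([^"]*\)".*/\1/p')
echo "$principal_id"
```

In practice, `az deployment group show --query properties.outputs.uamiPrincipalId.value -o tsv` returns the value directly without any post-processing.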
Assign the Network Contributor role over the virtual network
This Bicep file defines role assignments over the virtual network.
@description('The name of the virtual network.')
param vnetName string = 'aksAutomaticVnet'
@description('The principal ID of the user assigned managed identity.')
param uamiPrincipalId string
// Get a reference to the virtual network
resource virtualNetwork 'Microsoft.Network/virtualNetworks@2023-09-01' existing = {
name: vnetName
}
// Assign the Network Contributor role to the user assigned managed identity on the virtual network
// '4d97b98b-1d4f-4787-a291-c67834d212e7' is the built-in Network Contributor role definition
// See: https://learn.microsoft.com/en-us/azure/role-based-access-control/built-in-roles/networking#network-contributor
resource networkContributorRoleAssignmentToVirtualNetwork 'Microsoft.Authorization/roleAssignments@2022-04-01' = {
name: guid(uamiPrincipalId, '4d97b98b-1d4f-4787-a291-c67834d212e7', resourceGroup().id, virtualNetwork.name)
scope: virtualNetwork
properties: {
roleDefinitionId: resourceId('Microsoft.Authorization/roleDefinitions', '4d97b98b-1d4f-4787-a291-c67834d212e7')
principalId: uamiPrincipalId
}
}
Save the Bicep file roleAssignments.bicep to your local computer.
Important
The Bicep file sets the `vnetName` param to aksAutomaticVnet. If you used a different virtual network name, make sure to update the string to your preferred virtual network name.
Deploy the Bicep file using the Azure CLI. You need to provide the user-assigned managed identity principal ID.
az deployment group create --resource-group <resource-group> --template-file roleAssignments.bicep \
--parameters uamiPrincipalId=<user assigned identity principal id>
Create a private AKS Automatic cluster in a custom virtual network
This Bicep file defines the AKS Automatic cluster.
@description('The name of the managed cluster resource.')
param clusterName string = 'aksAutomaticCluster'
@description('The location of the managed cluster resource.')
param location string = resourceGroup().location
@description('The resource ID of the API server subnet.')
param apiServerSubnetId string
@description('The resource ID of the cluster subnet.')
param clusterSubnetId string
@description('The resource ID of the user assigned managed identity.')
param uamiId string
// Create the private AKS Automatic cluster using the custom virtual network and user assigned managed identity
resource aks 'Microsoft.ContainerService/managedClusters@2024-03-02-preview' = {
name: clusterName
location: location
sku: {
name: 'Automatic'
}
properties: {
agentPoolProfiles: [
{
name: 'systempool'
mode: 'System'
count: 3
vnetSubnetID: clusterSubnetId
}
]
apiServerAccessProfile: {
subnetId: apiServerSubnetId
enablePrivateCluster: true
}
networkProfile: {
outboundType: 'loadBalancer'
}
}
identity: {
type: 'UserAssigned'
userAssignedIdentities: {
'${uamiId}': {}
}
}
}
Save the Bicep file aks.bicep to your local computer.
Important
The Bicep file sets the `clusterName` param to aksAutomaticCluster. If you want a different cluster name, make sure to update the string to your preferred cluster name.
Deploy the Bicep file using the Azure CLI. You need to provide the API server subnet resource ID, the cluster subnet resource ID, and the user-assigned managed identity resource ID.
az deployment group create --resource-group <resource-group> --template-file aks.bicep \
--parameters apiServerSubnetId=<API server subnet resource id> \
--parameters clusterSubnetId=<cluster subnet resource id> \
--parameters uamiId=<user assigned identity resource id>
Connect to the cluster
When an AKS Automatic cluster is created as a private cluster, the API server endpoint has no public IP address. To manage the API server, for example via `kubectl`, you need to connect through a machine that has access to the cluster's Azure virtual network. There are several options for establishing network connectivity to the private cluster. For more information, see Options for connecting to the private cluster.
To manage a Kubernetes cluster, use the Kubernetes command-line client, kubectl. `kubectl` is already installed if you use Azure Cloud Shell. To install `kubectl` locally, run the az aks install-cli command. AKS Automatic clusters are configured with Microsoft Entra ID for Kubernetes role-based access control (RBAC).
Important
When you create a cluster using Bicep, you need to assign your users one of the built-in roles such as `Azure Kubernetes Service RBAC Reader`, `Azure Kubernetes Service RBAC Writer`, `Azure Kubernetes Service RBAC Admin`, or `Azure Kubernetes Service RBAC Cluster Admin`, scoped to the cluster or a specific namespace. For example:
az role assignment create --role "Azure Kubernetes Service RBAC Cluster Admin" --scope <AKS cluster resource id> --assignee user@contoso.com
Also make sure your users have the `Azure Kubernetes Service Cluster User` built-in role so that they can run the az aks get-credentials command to get the kubeconfig of your AKS cluster.
Configure `kubectl` to connect to your Kubernetes cluster using the az aks get-credentials command. This command downloads credentials and configures the Kubernetes CLI to use them.
az aks get-credentials --resource-group <resource-group> --name <cluster-name>
Verify the connection to your cluster using the kubectl get command. This command returns a list of the cluster nodes.
kubectl get nodes
The following sample output shows how you're asked to log in.
To sign in, use a web browser to open the page https://microsoft.com/devicelogin and enter the code AAAAAAAAA to authenticate.
After you log in, the following sample output shows the managed system node pools. Make sure the node status is Ready.
NAME STATUS ROLES AGE VERSION
aks-nodepool1-13213685-vmss000000 Ready agent 2m26s v1.28.5
aks-nodepool1-13213685-vmss000001 Ready agent 2m26s v1.28.5
aks-nodepool1-13213685-vmss000002 Ready agent 2m26s v1.28.5
Deploy the application
To deploy the application, you use a manifest file to create all the objects required to run the AKS Store application. A Kubernetes manifest file defines a cluster's desired state, such as which container images to run. The manifest includes the following Kubernetes deployments and services:
- Store front: Web application for customers to view products and place orders.
- Product service: Shows product information.
- Order service: Places orders.
- RabbitMQ: Message queue for an order queue.
Note
We don't recommend running stateful containers, such as RabbitMQ, without persistent storage in production. These containers are used here for simplicity, but we recommend using managed services, such as Azure Cosmos DB or Azure Service Bus.
Create a namespace `aks-store-demo` to deploy the Kubernetes resources into.
kubectl create ns aks-store-demo
Deploy the application into the `aks-store-demo` namespace using the kubectl apply command. The YAML file defining the deployment is on GitHub.
kubectl apply -n aks-store-demo -f https://raw.githubusercontent.com/Azure-Samples/aks-store-demo/main/aks-store-ingress-quickstart.yaml
The following sample output shows the deployments and services:
statefulset.apps/rabbitmq created
configmap/rabbitmq-enabled-plugins created
service/rabbitmq created
deployment.apps/order-service created
service/order-service created
deployment.apps/product-service created
service/product-service created
deployment.apps/store-front created
service/store-front created
ingress/store-front created
Test the application
When the application runs, a Kubernetes ingress exposes the application front end to the internet. This process can take a few minutes to complete.
Check the status of the deployed pods using the kubectl get pods command. Make sure all pods are `Running` before proceeding. If this is the first workload you deploy, it may take a few minutes for node auto provisioning to create a node pool to run the pods.
kubectl get pods -n aks-store-demo
Check for a public IP address for the store-front application. Monitor progress using the kubectl get ingress command with the `--watch` argument.
kubectl get ingress store-front -n aks-store-demo --watch
The ADDRESS output for the `store-front` ingress is initially empty:
NAME          CLASS                                HOSTS   ADDRESS   PORTS   AGE
store-front   webapprouting.kubernetes.azure.com   *                 80      12m
Once the ADDRESS changes from blank to an actual public IP address, use `CTRL-C` to stop the kubectl watch process.

The following sample output shows a valid public IP address assigned to the ingress:
NAME          CLASS                                HOSTS   ADDRESS        PORTS   AGE
store-front   webapprouting.kubernetes.azure.com   *       4.255.22.196   80      12m
Open a web browser to the external IP address of your ingress to see the Azure Store app in action.
Delete the cluster
If you don't plan on going through the AKS tutorial, clean up unnecessary resources to avoid Azure charges. Run the az group delete command to remove the resource group, container service, and all related resources.
az group delete --name <resource-group> --yes --no-wait
Note
The AKS cluster was created with a user-assigned managed identity. If you don't need that identity anymore, you can manually remove it.
Next steps
In this quickstart, you deployed a private Kubernetes cluster using AKS Automatic in a custom virtual network and then deployed a simple multi-container application to it. This sample application is for demo purposes only and doesn't represent all the best practices for Kubernetes applications. For guidance on creating full solutions with AKS for production, see AKS solution guidance.
To learn more about AKS Automatic, continue to the introduction.