AKS enabled by Azure Arc IP address planning requirements
Applies to: Azure Local, version 23H2
IP address planning for AKS involves designing a network that supports applications, node pools, pod networks, service communication, and external access. This article walks you through the key considerations for effective IP address planning and the minimum number of IP addresses required to deploy AKS in production. Review the AKS networking concepts and requirements before reading this article.
Simple IP address planning for Kubernetes clusters and applications
In the following scenario walkthrough, you reserve IP addresses from a single network for your Kubernetes clusters and services. This is the most straightforward scenario for IP address assignment.
IP address requirement | Minimum number of IP addresses | How and where to make this reservation |
---|---|---|
AKS Arc VM IPs | Reserve one IP address for every node VM (control plane and worker) in your Kubernetes cluster. For example, a cluster with 3 node pools of 3 nodes each requires 9 IP addresses for the worker nodes alone, plus one per control plane node. | Reserve IP addresses for AKS Arc VMs through IP pools in the Arc VM logical network (see the example after this table). |
AKS Arc K8s version upgrade IPs | Because AKS Arc performs rolling upgrades, reserve one IP address for every AKS Arc cluster for Kubernetes version upgrade operations. | Reserve IP addresses for K8s version upgrade operations through IP pools in Arc VM logical networks. |
Control plane IP | Reserve one IP address for every Kubernetes cluster in your environment. For example, if you want to create 5 clusters in total, reserve 5 IP addresses, one for each Kubernetes cluster. | Reserve IP addresses for control plane IPs in the same subnet as the Arc VM logical network, but outside the specified IP pool. |
Load balancer IPs | The number of IP addresses reserved depends on your application deployment model. As a starting point, you can reserve one IP address for every Kubernetes service. | Reserve IP addresses for load balancer IPs in the same subnet as the Arc VM logical network, but outside the specified IP pool. |
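To make the first two reservations concrete, the following Azure CLI sketch creates a static logical network with an IP pool sized for node VMs and upgrade operations. All resource names, the VM switch name, and address values are illustrative placeholders, and the parameter list is abbreviated; verify the full syntax with `az stack-hci-vm network lnet create --help`.

```azurecli
# A minimal sketch: one logical network with a static IP pool for AKS Arc
# node VMs and upgrade IPs. Every name and address below is an example value.
az stack-hci-vm network lnet create \
  --resource-group "myResourceGroup" \
  --custom-location "myCustomLocation" \
  --location "eastus" \
  --name "myLogicalNetwork" \
  --vm-switch-name "ConvergedSwitch(compute)" \
  --ip-allocation-method "Static" \
  --address-prefixes "10.220.32.0/24" \
  --gateway "10.220.32.1" \
  --dns-servers "10.220.32.5" \
  --ip-pool-start "10.220.32.10" \
  --ip-pool-end "10.220.32.30"
```

With this layout, control plane IPs and load balancer IPs must also come from 10.220.32.0/24, but from addresses outside the 10.220.32.10-10.220.32.30 pool.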
Example walkthrough for IP address reservation for Kubernetes clusters and applications
Jane is an IT administrator just starting with AKS enabled by Azure Arc. She wants to deploy two Kubernetes clusters, cluster A and cluster B, on the Azure Local cluster, and run a voting application on top of cluster A. This application has three instances of the front-end UI running across the two clusters and one instance of the backend database. All the AKS clusters and services run in a single network with a single subnet.
- Kubernetes cluster A has 3 control plane nodes and 5 worker nodes.
- Kubernetes cluster B has 1 control plane node and 3 worker nodes.
- 3 instances of the front-end UI (port 443).
- 1 instance of the backend database (port 80).
Based on the previous table, Jane must reserve a total of 19 IP addresses in her subnet:
- 8 IP addresses for the AKS Arc node VMs in cluster A (one IP per K8s node VM).
- 4 IP addresses for the AKS Arc node VMs in cluster B (one IP per K8s node VM).
- 2 IP addresses for running AKS Arc upgrade operation (one IP address per AKS Arc cluster).
- 2 IP addresses for the AKS Arc control plane (one IP address per AKS Arc cluster).
- 3 IP addresses for Kubernetes services (one IP address per instance of the front-end UI, since all three instances use the same port; the backend database can share any one of these three IP addresses because it listens on a different port).
Adding these numbers to the reservation table, you get:
Parameter | Number of IP addresses | How and where to make this reservation |
---|---|---|
AKS Arc VMs and K8s version upgrade | Reserve 14 IP addresses (12 for node VMs across clusters A and B, plus 2 for upgrade operations) | Make this reservation through IP pools in the Azure Local logical network. |
Control plane IP | Reserve 2 IP addresses, one for each AKS Arc cluster | Use the controlPlaneIP parameter to pass the IP address for the control plane IP. Ensure that each IP is in the same subnet as the Arc logical network, but outside the IP pool defined in the Arc logical network. |
Load balancer IPs | Reserve 3 IP addresses for Kubernetes services, for Jane's voting application. | These IP addresses are used when you install a load balancer on cluster A. You can use the MetalLB Arc extension or bring your own third-party load balancer. Ensure that these IPs are in the same subnet as the Arc VM logical network, but outside the IP pool defined in it (see the sketches after this table). |
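As a hedged sketch of how Jane might apply these reservations, the commands below create cluster A with its reserved control plane IP and then hand the three load balancer IPs to the MetalLB Arc extension. The subscription ID, resource names, group ID, and addresses are placeholders, and the MetalLB command assumes the Arc extension is already installed; confirm parameter names with `az aksarc create --help` and `az k8s-runtime load-balancer create --help`.

```azurecli
# Cluster A: 3 control plane nodes and 5 worker nodes. The control plane IP
# is in the logical network's subnet but outside its IP pool.
az aksarc create \
  --resource-group "myResourceGroup" \
  --name "cluster-a" \
  --custom-location "myCustomLocation" \
  --vnet-ids "/subscriptions/<subscription-id>/resourceGroups/myResourceGroup/providers/Microsoft.AzureStackHCI/logicalNetworks/myLogicalNetwork" \
  --aad-admin-group-object-ids "<entra-admin-group-id>" \
  --control-plane-count 3 \
  --node-count 5 \
  --control-plane-ip "10.220.32.40" \
  --generate-ssh-keys

# Hand MetalLB the 3 load balancer IPs reserved for the voting application;
# these addresses are also outside the logical network's IP pool.
az k8s-runtime load-balancer create \
  --load-balancer-name "voting-app-lb" \
  --resource-uri "subscriptions/<subscription-id>/resourceGroups/myResourceGroup/providers/Microsoft.Kubernetes/connectedClusters/cluster-a" \
  --addresses "10.220.32.41-10.220.32.43" \
  --advertise-mode ARP
```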
Logical network considerations for AKS clusters and Arc VMs
Logical networks on Azure Local are used by both AKS clusters and Arc VMs. You can configure logical networks in one of the following two ways:
- Share a logical network between AKS and Arc VMs.
- Define separate logical networks for AKS clusters and Arc VMs (see the sketch after the following comparison table).
Sharing a logical network between AKS and Arc VMs on Azure Local offers the benefit of streamlined communication, cost savings, and simplified network management. However, this approach also introduces potential challenges such as resource contention, security risks, and complexity in troubleshooting.
Criteria | Sharing a logical network | Defining separate logical networks |
---|---|---|
Configuration complexity | Simpler configuration with a single network, reducing setup complexity. | More complex setup, as you need to configure multiple logical networks for VMs and AKS clusters. |
Scalability | Potential scalability limitations as both Arc VMs and AKS clusters share network resources. | More scalable since network resources are separated and can scale independently. |
Network policy management | Easier to manage with one set of network policies, but harder to isolate workloads. | Easier to isolate workloads, as separate policies can be applied per logical network. |
Security considerations | Increased risk of cross-communication vulnerabilities if not properly segmented. | Better security as each network can be segmented and isolated more strictly. |
Impact of network failures | A failure in the shared network can affect both AKS and Arc VMs simultaneously. | A failure in one network affects only the workloads within that network, reducing overall risk. |
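If you choose separate logical networks, a sketch like the following keeps AKS clusters and Arc VMs on non-overlapping subnets so they can scale, be secured, and fail independently. Again, all names and address ranges are placeholders, and optional parameters such as gateway and DNS servers are omitted for brevity.

```azurecli
# A minimal sketch of the separate-networks option: one logical network for
# AKS clusters and another for Arc VMs, on non-overlapping subnets.
az stack-hci-vm network lnet create \
  --resource-group "myResourceGroup" \
  --custom-location "myCustomLocation" \
  --location "eastus" \
  --name "lnet-aks" \
  --vm-switch-name "ConvergedSwitch(compute)" \
  --ip-allocation-method "Static" \
  --address-prefixes "10.220.32.0/24" \
  --ip-pool-start "10.220.32.10" \
  --ip-pool-end "10.220.32.100"

az stack-hci-vm network lnet create \
  --resource-group "myResourceGroup" \
  --custom-location "myCustomLocation" \
  --location "eastus" \
  --name "lnet-vms" \
  --vm-switch-name "ConvergedSwitch(compute)" \
  --ip-allocation-method "Static" \
  --address-prefixes "10.220.33.0/24" \
  --ip-pool-start "10.220.33.10" \
  --ip-pool-end "10.220.33.100"
```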
IP address range allocation for pod CIDR and service CIDR
Pod network CIDR
Pod network CIDR is a range of IP addresses used by Kubernetes to assign unique IP addresses to the individual pods running within a Kubernetes cluster. Each pod gets its own IP address within this range, allowing pods to communicate with each other and with services within the cluster.

In AKS, pod IP addresses are assigned via Calico CNI in VXLAN mode. Calico VXLAN creates overlay networks, in which the IP addresses of pods (from the pod network CIDR) are virtualized and tunneled through the physical network. Each pod is assigned an IP address from the pod network CIDR, but that address is not directly routable on the physical network. Instead, it is encapsulated within network packets and sent through the underlying physical network to reach its destination pod on another node.
AKS provides a default value of 10.244.0.0/16 for the pod network CIDR and supports customizing it: you can set your own value using the `--pod-cidr` parameter when creating the AKS cluster. Ensure that the CIDR range is large enough to accommodate the maximum number of pods per node and across the Kubernetes cluster.
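For example, here's a hedged sketch of overriding the default at creation time (all parameters other than `--pod-cidr` reuse the placeholder names from the earlier examples):

```azurecli
# Override the default 10.244.0.0/16 pod network CIDR. A /16 provides 65,536
# pod IPs; size the range for (max pods per node) x (max node count).
az aksarc create \
  --resource-group "myResourceGroup" \
  --name "cluster-custom-cidr" \
  --custom-location "myCustomLocation" \
  --vnet-ids "/subscriptions/<subscription-id>/resourceGroups/myResourceGroup/providers/Microsoft.AzureStackHCI/logicalNetworks/myLogicalNetwork" \
  --aad-admin-group-object-ids "<entra-admin-group-id>" \
  --pod-cidr "10.250.0.0/16" \
  --generate-ssh-keys
```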
Service network CIDR
The service network CIDR is the range of IP addresses reserved for Kubernetes services (ClusterIP, NodePort, and LoadBalancer) within a cluster. Kubernetes supports the following service types:
- ClusterIP: The default service type, which exposes the service within the cluster. The IP assigned from the Service network CIDR is only accessible within the Kubernetes cluster.
- NodePort: Exposes the service on a specific port on each node’s IP address. The ClusterIP is still used internally, but external access is through the node IPs and a specific port.
- LoadBalancer: This type creates a cloud-provider-managed load balancer and exposes the service externally. The cloud provider typically manages the external IP assignment, while the internal ClusterIP remains within the service network CIDR.
AKS provides a default value of 10.96.0.0/12 for the service network CIDR. AKS doesn't currently support customizing the service network CIDR.
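To see how the service types consume these addresses, here's a minimal sketch (names and labels are illustrative) that exposes a front end like Jane's through a LoadBalancer service: the cluster IP is allocated automatically from 10.96.0.0/12, while the external IP comes from the load balancer pool reserved earlier.

```bash
# Create a LoadBalancer service for the front-end pods. Kubernetes assigns
# the internal clusterIP from the service CIDR (10.96.0.0/12); MetalLB (or
# your own load balancer) assigns the external IP from its reserved pool.
kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: frontend-ui
spec:
  type: LoadBalancer
  selector:
    app: frontend-ui
  ports:
  - port: 443
    targetPort: 443
EOF

# Show the assigned CLUSTER-IP and EXTERNAL-IP columns.
kubectl get service frontend-ui
```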
Next steps
Create logical networks for Kubernetes clusters on Azure Local, version 23H2