Can't route to AKS nodes through a global load balancer

Jonah Pemstein 0 Reputation points
2024-12-19T17:46:47.37+00:00

I have an AKS cluster with a managed load balancer in front of it, automatically provisioned by my LoadBalancer service, and associated with a public IP address. The node pools belong to a subnet I created with an associated NSG. Everything works great.

I created a global load balancer with a separate public IP address that has this AKS load balancer as a backend. But I can't route traffic through this IP address to my cluster, despite all the health checks for both load balancers succeeding. (TCP connections time out.)

I can route traffic through the same global load balancer, and same AKS load balancer, to a one-off VM I created in the same virtual network and subnet. So clearly there's nothing inherently wrong with either load balancer or the vnet/subnet/NSG/my connection.

Also, I had previously tried routing to the same node pool through a separate (regional) load balancer that I created manually, targeting a NodePort service in the cluster. That also did not work, again despite the health checks (which point at the same port as the backend target port) showing healthy backends.

So my question is: is there something special about the managed load balancers that allows them to route traffic to node pools in AKS clusters? Why would routing through a global load balancer upstream interfere with that? Or is there some setting I'm potentially missing somewhere, perhaps in the Kubernetes cluster?

I found this page: https://learn.microsoft.com/en-us/troubleshoot/azure/azure-kubernetes/connectivity/connection-issues-application-hosted-aks-cluster. It says:

If you receive a Connection Timed Out error message, check the network security group that's associated with the AKS nodes. Also, check the AKS subnet. It could be blocking the traffic from the load balancer or application gateway to the AKS nodes.

But it doesn't say what settings exactly to check, or why it would be blocking traffic to the AKS nodes but not the VM.

Failing any of that, is there a way to inspect traffic logs for the load balancer in some way that can tell me where exactly the connection is stalling out?

Azure Kubernetes Service (AKS)

1 answer

  1. anashetty 1,145 Reputation points Microsoft Vendor
    2024-12-20T06:52:36.5833333+00:00

    Hi Jonah Pemstein, welcome to the Microsoft Q&A Platform! Thank you for asking your question here.

    First, check the NSG rules: they can block traffic between the global load balancer and the AKS nodes even when the health checks pass (health probes originate from the AzureLoadBalancer service tag, which the default rules allow, so probes can succeed while client traffic is blocked). List the inbound rules on the NSG attached to the AKS subnet and confirm that a rule exists allowing the traffic arriving through the global load balancer to reach the AKS subnet on the service's TCP port. For detailed information, please check A custom network security group blocks traffic.
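    For example, a sketch of those checks with the Azure CLI (the resource group MC_myRG_myAKS_eastus, the NSG name aks-agentpool-nsg, and port 80 below are placeholders for your own values). Note that a cross-region load balancer preserves the original client IP, so the source to allow is your clients (for a public service, the Internet service tag), not the global load balancer's own frontend address:

    ```
    # List the inbound rules on the NSG attached to the AKS subnet
    az network nsg rule list \
      --resource-group MC_myRG_myAKS_eastus \
      --nsg-name aks-agentpool-nsg \
      --output table

    # Add an explicit allow rule for client traffic reaching the AKS
    # subnet on the service's TCP port (adjust port and priority)
    az network nsg rule create \
      --resource-group MC_myRG_myAKS_eastus \
      --nsg-name aks-agentpool-nsg \
      --name AllowGlobalLBInbound \
      --priority 200 \
      --direction Inbound \
      --access Allow \
      --protocol Tcp \
      --source-address-prefixes Internet \
      --destination-port-ranges 80
    ```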

    Next, check that the backend pool of each load balancer is correct: the AKS nodes should be in the regional load balancer's backend pool, and the frontend IP configuration should match the public IP you are using. For more information, please check Troubleshoot Azure Load Balancer backend traffic responses.
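    As a sketch of those checks, again with placeholder names (AKS names its managed load balancer kubernetes inside the node resource group; myRG and myGlobalLB are hypothetical, and the cross-region-lb group requires a reasonably recent Azure CLI):

    ```
    # Backend pool of the AKS-managed (regional) load balancer --
    # the AKS nodes should appear here
    az network lb address-pool list \
      --resource-group MC_myRG_myAKS_eastus \
      --lb-name kubernetes \
      --output table

    # Frontend IP configurations -- one should match the EXTERNAL-IP
    # reported by the Kubernetes service
    az network lb frontend-ip list \
      --resource-group MC_myRG_myAKS_eastus \
      --lb-name kubernetes \
      --output table

    # The global load balancer's backend pool should reference the
    # regional load balancer's frontend
    az network cross-region-lb show \
      --resource-group myRG \
      --name myGlobalLB

    # Compare with what Kubernetes assigned to the service
    kubectl get service <service-name> -n <namespace>
    ```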

    Finally, check the diagnostics on both the AKS managed load balancer and the global load balancer, and check the application logs inside the cluster by using kubectl logs <pod-name> -n <namespace>.
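    Azure Load Balancer does not log individual flows, so to see where the connection stalls, the closest tools are NSG flow logs and the IP flow verify check in Network Watcher; the latter can be run against the one-off VM you created, since it sits behind the same subnet NSG. A rough sketch with placeholder resource names and example IPs:

    ```
    # Enable NSG flow logs so you can see whether packets arriving
    # through the global load balancer reach the AKS subnet and
    # whether the NSG allows or denies them
    az network watcher flow-log create \
      --location eastus \
      --name aksNsgFlowLog \
      --nsg aks-agentpool-nsg \
      --resource-group MC_myRG_myAKS_eastus \
      --storage-account mystorageacct

    # Simulate the inbound packet against the effective NSG rules,
    # using the one-off VM in the same subnet as the target
    # (local = VM private IP:port, remote = a client IP:port)
    az network watcher test-ip-flow \
      --resource-group myRG \
      --vm myTestVM \
      --direction Inbound \
      --protocol TCP \
      --local 10.224.0.10:80 \
      --remote 203.0.113.10:54321

    # And check application logs inside the cluster
    kubectl logs <pod-name> -n <namespace>
    ```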

    If you have any further queries, please do let me know.

