Will the AKS cluster autoscaler terminate my running pods when node scale-down happens?

Arsalan Haroon Sheikh 20 Reputation points
2024-12-23T21:05:04.0433333+00:00

We have a scenario with a Service Bus queue that holds thousands of messages at a time. Each message contains a link to a PDF file. Our task is to extract the content of each PDF file and store it as JSON in an Azure Blob Storage container. An average PDF file runs to thousands of pages, and per our testing, processing a single file takes 30 to 45 minutes.

We are using AKS to get this job done. The cluster contains multiple pods, and each pod processes messages from the Service Bus queue. We use a combination of the HPA and the cluster autoscaler to scale the solution. We have a time-bound SLA with our customers, so we want to process as many messages in parallel as we can.

While reading about the cluster autoscaler, I learned that if resource utilization is low on a node, the cluster autoscaler will automatically terminate that node and respawn its pods on a different existing node. Is my understanding correct? If it is, how can we stop it from terminating a node until the CPU utilization of its pods drops below a certain threshold, say 30%? (Our process is CPU intensive, so a pod that is actively processing will not drop below a certain level.)

This is critically important because if the autoscaler terminates a pod midway through processing a PDF file, we will lose all the work it has done so far, and the respawned pod will have to start over. We don't have a mechanism to manage state in the pod; only when processing of a file completes do we save the result to the blob container.

Is my understanding of the autoscaler's behavior correct? I have based my understanding on the following resources:


https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/FAQ.md


https://learn.microsoft.com/en-us/training/modules/configure-scaling-azure-kubernetes-service/3-cluster-autoscaler

This question on Stack Overflow: "what will happen when codes in pod is running while hpa scale down?"

And this one: "How to avoid the last pod being killed on automatic node scale down in AKS"

Azure Kubernetes Service (AKS)

Accepted answer
  1. Andriy Bilous 11,616 Reputation points MVP
    2024-12-23T22:14:42.3266667+00:00

    Hello Arsalan Haroon Sheikh,

    Your understanding of AKS cluster autoscaler is correct.

    As you are using a Deployment, it is backed by a ReplicaSet. In the controller code there is a function, getPodsToDelete, which, in combination with filteredPods, gives this result: "This ensures that we delete pods in the earlier stages whenever possible."

    So, as a proof of concept:

    You can create a deployment with an init container. The init container checks whether there is a message in the queue and exits once at least one message appears, which allows the main container to start, take, and process that message. This gives you two kinds of pods: those that are processing a message and consuming CPU, and those that are still in the starting state, idle and waiting for the next message. The starting pods will be deleted first when the HPA decides to decrease the number of replicas in the deployment.

    https://stackoverflow.com/questions/56552175/is-there-a-way-to-downscale-pods-only-when-message-is-processed-the-pod-finishe
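    To make the idea concrete, here is a minimal sketch of such a deployment. The image names, queue name, and the `check-queue` command are placeholders; you would substitute your own images and a real Service Bus check (e.g. via the Azure CLI or a small script):

    ```yaml
    # Sketch only: images and the queue-check command are hypothetical placeholders.
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: pdf-processor
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: pdf-processor
      template:
        metadata:
          labels:
            app: pdf-processor
        spec:
          initContainers:
            - name: wait-for-message
              image: myregistry/queue-checker:latest   # hypothetical image
              # Loop until the queue reports at least one message,
              # then exit 0 so the main container can start.
              command: ["/bin/sh", "-c",
                        "until check-queue --min-messages 1; do sleep 5; done"]
          containers:
            - name: processor
              image: myregistry/pdf-processor:latest   # hypothetical image
              resources:
                requests:
                  cpu: "1"
    ```

    Pods still sitting in the init phase are cheap to delete, so scale-down preferentially removes them rather than pods that are mid-file.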

    I recommend taking a look at KEDA:
    KEDA allows for fine grained autoscaling (including to/from zero) for event driven Kubernetes workloads. KEDA serves as a Kubernetes Metrics Server and allows users to define autoscaling rules using a dedicated Kubernetes custom resource definition. KEDA can run on both the cloud and the edge, integrates natively with Kubernetes components such as the Horizontal Pod Autoscaler, and has no external dependencies.
    https://github.com/kedacore/keda
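    KEDA ships a built-in Azure Service Bus scaler, so you can scale the deployment on queue length instead of CPU. A minimal sketch of a ScaledObject (the deployment name, queue name, and authentication reference are placeholders for your own resources):

    ```yaml
    # Sketch: names and the authentication reference are placeholders.
    apiVersion: keda.sh/v1alpha1
    kind: ScaledObject
    metadata:
      name: pdf-processor-scaler
    spec:
      scaleTargetRef:
        name: pdf-processor        # hypothetical Deployment name
      minReplicaCount: 0
      maxReplicaCount: 30
      triggers:
        - type: azure-servicebus
          metadata:
            queueName: pdf-jobs    # hypothetical queue name
            messageCount: "5"      # target messages per replica
          authenticationRef:
            name: servicebus-auth  # TriggerAuthentication with the connection string
    ```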

    You need to use a fault-tolerant message-processing architecture pattern to avoid disruption.
    https://www.youtube.com/watch?v=XndpZCyRIXw
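    The core of that pattern with Service Bus is to receive messages in peek-lock mode and complete a message only after the JSON has been written to blob storage; if a pod is killed mid-file, the lock eventually expires and another pod picks the message up again. Here is a minimal, self-contained sketch of the complete-after-success loop, using an in-memory queue as a stand-in for Service Bus (all names are illustrative, not the Azure SDK):

    ```python
    import queue

    class PeekLockQueue:
        """In-memory stand-in for a queue with peek-lock semantics:
        a received message stays invisible until completed or abandoned."""
        def __init__(self):
            self._pending = queue.Queue()
            self._locked = {}
            self._next_id = 0

        def send(self, body):
            self._pending.put(body)

        def receive(self):
            body = self._pending.get()
            self._next_id += 1
            self._locked[self._next_id] = body
            return self._next_id, body

        def complete(self, msg_id):
            # The message is gone for good only after explicit completion.
            del self._locked[msg_id]

        def abandon(self, msg_id):
            # Simulates a crash / lock expiry: the message becomes visible again.
            self._pending.put(self._locked.pop(msg_id))

    def process_one(q, extract_pdf, save_to_blob):
        """Complete the message only after the result is durably stored."""
        msg_id, pdf_url = q.receive()
        try:
            result = extract_pdf(pdf_url)
            save_to_blob(result)   # durable write first...
            q.complete(msg_id)     # ...then remove the message from the queue
        except Exception:
            q.abandon(msg_id)      # failure: another pod will retry the message
            raise
    ```

    With this shape, a pod killed by scale-down costs you at most one file's worth of reprocessing, and no message is ever lost.
    
    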


0 additional answers
