Our Kubernetes cluster cannot issue new pods. All currently running pods stay active, but as soon as you try to restart a pod or create a new one, it gets stuck in Pending. There are no recent events or conditions in the pod/deployment details.
All nodes appear as Ready, with no taints and no resource pressure.
The only error I can find is in the node events, and it states:
| Condition | Status | Last seen | Message |
| -------- | -------- | -------- | -------- |
| ContainerRuntimeProblem | True | 12 days ago | crictl -t 60s pods --latest failed! |
This is the only error I see in the events of the nodes:

    [ContainerRuntimeIsDown] crictl -t 60s pods --latest failed!
    component: container-runtime-custom-plugin-monitor
    host: aks-nodepool1-89500097-0
    type: Warning
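For reference, node conditions and events like the ones above can be viewed with commands along these lines (the node name is the one from the event above):

```shell
# Show the node's details; the Conditions section is where ContainerRuntimeProblem is reported
kubectl describe node aks-nodepool1-89500097-0

# List Warning events attached to Node objects across the cluster
kubectl get events --all-namespaces --field-selector involvedObject.kind=Node,type=Warning
```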
Hello Velizar Danev,
Thank you for reaching out to us!
Could you please check the events on the nodes using `kubectl describe node <node-name>`?
https://kubernetes.io/docs/reference/kubectl/generated/kubectl_describe/
Look in particular at the node where the pods should be scheduled.
For the pods stuck in Pending, please use `kubectl describe pod <pod-name>`
and check the status/events inside the pod. A sketch of both checks follows below.
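A minimal sketch of those checks; `<node-name>`, `<pod-name>`, and `<namespace>` are placeholders for your own objects:

```shell
# Describe the node; the Conditions and Events sections show runtime or pressure problems
kubectl describe node <node-name>

# Describe a Pending pod; the Events section usually says why it cannot be scheduled
kubectl describe pod <pod-name> -n <namespace>

# Alternatively, list just the events for that pod
kubectl get events -n <namespace> --field-selector involvedObject.name=<pod-name>
```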
If possible, create a new node pool and move your workload to it (for example, as sketched below). Also check whether your VNet/subnet is sized to accommodate the number of IPs needed.
https://learn.microsoft.com/en-us/azure/aks/concepts-network-ip-address-planning
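A sketch of that approach with the Azure CLI; the resource group, cluster, pool, and node names are placeholders, and the node count is only an example:

```shell
# Add a new node pool to the existing cluster
az aks nodepool add \
  --resource-group <resource-group> \
  --cluster-name <cluster-name> \
  --name newpool \
  --node-count 3

# Cordon and drain an affected node so its workloads reschedule onto the new pool
kubectl cordon <old-node-name>
kubectl drain <old-node-name> --ignore-daemonsets
```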
Please also share details about your AKS cluster, such as the Kubernetes version, node size, and so on.
If the answer has been helpful, we appreciate hearing from you and would love to help others who may have the same question.
Accepting answers helps increase visibility of this question for other members of the Microsoft Q&A community.
Thank you for helping to improve Microsoft Q&A!
We have 2 clusters on AKS (v1.18.14) and we have been facing the same issue since 5 March 2025. The scheduler seems to be in an unhealthy state, due to which pods are not allocated to nodes. This can be verified by running the command below:
`kubectl get componentstatuses`
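On an unhealthy cluster, the output of that command might look something like the following (illustrative only; note that `componentstatuses` is deprecated in newer Kubernetes versions and can report misleading results on managed control planes such as AKS):

```text
NAME                 STATUS      MESSAGE                                                                                     ERROR
scheduler            Unhealthy   Get http://127.0.0.1:10251/healthz: dial tcp 127.0.0.1:10251: connect: connection refused
controller-manager   Healthy     ok
etcd-0               Healthy     {"health":"true"}
```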
As a temporary workaround, since this affects our production, we have specified `nodeName` in our deployment YAMLs; a sketch follows below.
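A minimal sketch of that workaround: setting `spec.template.spec.nodeName` pins the pods to a specific node and bypasses the scheduler entirely. The deployment name, image, and node name here are placeholders, not values from the original post's clusters:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                              # placeholder name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      nodeName: aks-nodepool1-89500097-0    # pin to a specific node; kube-scheduler is skipped
      containers:
        - name: my-app
          image: nginx:1.25                 # placeholder image
```

Note that `nodeName` skips all scheduling checks, so the kubelet will reject the pod (for example with an OutOfcpu status) if the pinned node lacks the requested resources.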