
Node created in subnet with low number of IP addresses: failed to assign an IP address to container #2846

Closed
PeteMac88 opened this issue Mar 13, 2024 · 2 comments


@PeteMac88

What happened:

Hey there,
we are currently having problems with the IP address allocation in our cluster. We initially created the EKS cluster with CIDR /16 and subnets with /24 settings. Our VPC still has enough IP addresses in the subnets but occasionally pods are stuck in creation because one of the subnet's ips is exhausted while the other subnets still have available IP addresses. We are already evaluating adding more subnets or an additional CIDR range but I don't understand why a node is created in the first place in a subnet that has a low number of available IP addresses and does not prefer the other subnets to distribute the IP allocation better.

How is the placement of nodes across subnets determined, and can we configure it to check the number of available IP addresses before creation? I found only one blog post describing our problem; all other posts are about the general problem of exhausted subnets (https://edwin-philip.medium.com/aws-eks-subnet-insufficient-ip-address-d0855154c596).

Does anyone have advice on how to handle this problem, or has anyone come across a good workaround?
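For anyone hitting the same issue: a quick way to see which subnets are running low is to read the `AvailableIpAddressCount` field that EC2 reports per subnet. Below is a minimal sketch; the helper name `subnets_with_capacity`, the threshold of 32, and the example VPC ID are my own illustrations, not anything from EKS itself.

```python
def subnets_with_capacity(subnets, min_free_ips=32):
    """Return IDs of subnets that still have at least min_free_ips free addresses.

    `subnets` is a list of dicts shaped like the entries returned by
    EC2's DescribeSubnets call (only SubnetId and AvailableIpAddressCount
    are used here).
    """
    return [
        s["SubnetId"]
        for s in subnets
        if s["AvailableIpAddressCount"] >= min_free_ips
    ]


# Hypothetical usage against a live account (requires boto3 and credentials):
# import boto3
# ec2 = boto3.client("ec2")
# subnets = ec2.describe_subnets(
#     Filters=[{"Name": "vpc-id", "Values": ["vpc-0123456789abcdef0"]}]
# )["Subnets"]
# print(subnets_with_capacity(subnets))
```

Running this periodically (or in an alert) at least makes the imbalance visible before pods start failing to get IPs.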

Environment:

  • Kubernetes version (use kubectl version): 1.25
  • CNI Version: v1.16.3
@orsenthil
Member

Hello @PeteMac88, this depends on how you set up your EKS cluster.

https://docs.aws.amazon.com/eks/latest/userguide/creating-a-vpc.html

The default VPC and subnet creation templates provided for your EKS cluster assign a sufficient number of IP addresses to each subnet (around 16k).



But the maximum number of pods that can be launched on a node depends on the max-pods limit supported by the instance type, which is detailed here: https://github.com/awslabs/amazon-eks-ami/blob/master/files/eni-max-pods.txt
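The values in that file follow from each instance type's ENI limits. As a sketch (the formula is the one commonly cited in the EKS documentation; `eni_max_pods` is just an illustrative name):

```python
def eni_max_pods(max_enis: int, ips_per_eni: int) -> int:
    # Each ENI's primary IP is used by the node itself, so only
    # (ips_per_eni - 1) addresses per ENI are available to pods.
    # +2 accounts for the host-network pods (aws-node, kube-proxy)
    # that don't consume a secondary IP.
    return max_enis * (ips_per_eni - 1) + 2
```

For example, a t3.medium (3 ENIs, 6 IPv4 addresses per ENI) works out to 17, and an m5.large (3 ENIs, 10 per ENI) to 29, matching the entries in eni-max-pods.txt.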

You might be running into this.

And in terms of organizing your workload with the various features provided by the VPC CNI, you can refer to the EKS best practices guide, which provides suggestions based on your use case: https://aws.github.io/aws-eks-best-practices/networking/vpc-cni/
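One feature from that guide worth noting here is prefix delegation, which changes the per-node pod math considerably: each secondary-IP slot on an ENI holds a /28 prefix (16 addresses) instead of a single address. A rough sketch of the resulting limit, assuming the formula and the 110-pod recommended cap for smaller instance types described in the EKS docs (`prefix_delegation_max_pods` is an illustrative name):

```python
def prefix_delegation_max_pods(max_enis: int, ips_per_eni: int, cap: int = 110) -> int:
    # With prefix delegation, each of the (ips_per_eni - 1) secondary
    # slots per ENI provides a /28 prefix, i.e. 16 pod addresses.
    # EKS recommends capping max-pods (110 for smaller instance types,
    # 250 for larger ones) regardless of the theoretical limit.
    return min(max_enis * (ips_per_eni - 1) * 16, cap)
```

Note that prefix delegation needs contiguous /28 blocks in the subnet, so it does not help with a fragmented or nearly exhausted subnet; it mainly raises pod density where addresses are plentiful.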

Let us know if this helps.


This issue is now closed. Comments on closed issues are hard for our team to see.
If you need more assistance, please either tag a team member or open a new issue that references this one.
