
Unnecessary reload when pod is updated #4945

Closed
owanio1992 opened this issue Jan 19, 2024 · 2 comments · Fixed by #5318
Assignees
Labels
backlog Pull requests/issues that are backlog items bug An issue reporting a potential bug
Milestone

Comments

owanio1992 commented Jan 19, 2024

Describe the bug
I am using the NGINX Ingress Controller with the VirtualServer CRD.
I understand that, by default, upstreams use pod IPs rather than the Service IP for a reason.
When I update an upstream's pods, the pod IPs change,
which triggers an NGINX reload.

So if I have 100 pods and a rollingUpdate rolls them out one by one,
NGINX reloads 100 times,
and each reload consumes a large amount of memory/CPU.
To avoid this I set use-cluster-ip: true,
then checked the generated config inside the NGINX pods (/etc/nginx/conf.d/vs_xxxx) and confirmed it is correct and only uses the Service cluster IP.
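For reference, a minimal sketch of such a VirtualServer with use-cluster-ip enabled on the upstream (the resource, Service, and host names are hypothetical; field layout follows the k8s.nginx.org/v1 VirtualServer schema):

```yaml
apiVersion: k8s.nginx.org/v1
kind: VirtualServer
metadata:
  name: my-app                # hypothetical name
spec:
  host: app.example.com       # hypothetical host
  upstreams:
    - name: backend
      service: my-app-svc     # hypothetical Service name
      port: 80
      use-cluster-ip: true    # route to the Service ClusterIP instead of pod IPs
  routes:
    - path: /
      action:
        pass: backend
```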

But when the pods are rolled out one by one,
NGINX still reloads every time a pod changes.
I expect no reload to be needed in this case.

To Reproduce

  1. Create a VirtualServer with use-cluster-ip: true
  2. Trigger a pod rollingUpdate (e.g. update the pod image)
  3. Check the NGINX logs to see how many times reconfiguration happens
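Step 3 can be done by counting reload messages in the controller logs. A minimal sketch, assuming the log contains a "Reloading" line per reload (the sample lines and grep pattern are illustrative; exact log wording depends on the controller version):

```shell
# In a real cluster the log would come from:
#   kubectl logs <nginx-ingress-pod> -n nginx-ingress
# Here a made-up excerpt stands in for the real log.
sample_log='event: AddedOrUpdated for VirtualServer default/my-app
Reloading nginx
endpoints updated for default/my-app-svc
Reloading nginx'

# Count the lines that record a reload: one per rolling-update step.
printf '%s\n' "$sample_log" | grep -c 'Reloading'
```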

Expected behavior
No reload should be needed.

Your environment

  • Version of the Ingress Controller: 3.4.0
  • Version of Kubernetes: 1.28
  • Kubernetes platform: EKS
  • Using NGINX

Additional context
Is there a config that can control reload behavior?
For example a reload frequency, a minimum reload interval, or anything else that can reduce how often reloads happen.


Hi @owanio1992 thanks for reporting!

Be sure to check out the docs and the Contributing Guidelines while you wait for a human to take a look at this 🙂

Cheers!

@brianehlert
Collaborator

I understand that, by default, upstreams use pod IPs rather than the Service IP for a reason

This is true for this implementation and is not uncommon across other ingress controller implementations.
Using the EndpointSlices API to enumerate the pods of the backend service and configure those allows the ingress controller to manage the load balancing, perform splits, identify individual pod failures, steer traffic away from slow pods (and possibly overloaded nodes), support apps that need session persistence to a backend pod, etc.
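To illustrate what the controller watches, here is a sketch of an EndpointSlice object for a hypothetical Service my-app-svc. The endpoints list holds pod IPs, which is why pod churn changes the object on every rolling-update step (names and addresses are made up; the structure follows the discovery.k8s.io/v1 API):

```yaml
apiVersion: discovery.k8s.io/v1
kind: EndpointSlice
metadata:
  name: my-app-svc-abc12            # hypothetical; generated by Kubernetes
  labels:
    kubernetes.io/service-name: my-app-svc   # links the slice to its Service
addressType: IPv4
endpoints:
  - addresses:
      - 10.244.1.17                 # pod IP; replaced whenever the pod is recreated
    conditions:
      ready: true
ports:
  - name: http
    port: 8080
    protocol: TCP
```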

If you are using the free version of this project, NGINX Open Source is used as the proxy, and backend service changes will result in a reload. There is batching to optimize this, but the time span matters. With the NGINX Plus implementation these backend changes are handled differently, and configuration changes are applied without reloading.
During a reload, normal NGINX behavior applies: existing connections are maintained and allowed to end naturally, while new connections use the new configuration.
We apply no limits or timeouts to this.

so if I have 100 pods and a rollingUpdate rolls them out one by one, NGINX reloads 100 times,
and each reload consumes a large amount of memory/CPU

The amount of memory necessary depends on your traffic volume, traffic type, application behavior, etc. But yes, maintaining the existing connections does require additional resources.
We always advise customers to allow for spikes like this when setting limits.

but when the pods are rolled out one by one, NGINX still reloads every time a pod changes

If use-cluster-ip is set and, for example, only that one VirtualServer is defined, then the expectation is that if the only change happening is to the upstream group (the single backend Service), NGINX should not be reloaded.
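To illustrate the difference, a sketch of what the generated upstream block in /etc/nginx/conf.d/vs_*.conf might look like in each mode (upstream names and IPs are made up; the real generated config differs in detail):

```nginx
# Default (pod endpoints): every rolling-update step rewrites this
# server list, so each pod replacement forces a config reload.
upstream vs_default_my-app_backend {
    server 10.244.1.17:8080;
    server 10.244.2.31:8080;
}

# With use-cluster-ip: true: a single stable Service ClusterIP, so pod
# churn should leave the file unchanged and no reload should be needed.
upstream vs_default_my-app_backend {
    server 10.100.200.5:80;
}
```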

I am certain someone will be investigating the behavior you describe.

@danielnginx danielnginx added bug An issue reporting a potential bug backlog Pull requests/issues that are backlog items labels Jan 29, 2024
@j1m-ryan j1m-ryan self-assigned this Mar 21, 2024
@danielnginx danielnginx removed the bug An issue reporting a potential bug label Apr 4, 2024
@danielnginx danielnginx added this to the v3.6.0 milestone Apr 4, 2024
@danielnginx danielnginx added the bug An issue reporting a potential bug label Apr 4, 2024