Describe the bug
I am using NGINX Ingress and the VirtualServer CRD.
I understand that, for some reason, the default upstream config uses pod IPs rather than the service IP.
When I update the upstream's pods, the pod IPs change, which triggers an NGINX reload.
So if I have 100 pods and they are rolled out one by one, NGINX will reload 100 times,
and each reload currently consumes a large amount of memory/CPU.
To avoid this I added the config use-cluster-ip: true,
then went into the NGINX pods and checked that the generated config is correct ( /etc/nginx/conf.d/vs_xxxx) and only uses the service cluster IP.
But when pods are rolled out one by one, NGINX still reloads on every pod change.
I expect that no reload should be needed in this case.
To Reproduce
Use a VirtualServer with use-cluster-ip: true set on the upstream (a hedged example manifest is sketched below, after these steps).
Trigger a rolling update of the pods (e.g. update the pod image).
Check the NGINX logs to see how many times the controller reports reconfiguring.
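For reference, a minimal sketch of such a VirtualServer, assuming a hypothetical Service named backend-svc on port 80 and a hypothetical host example.com (none of these names are from the original report):

```yaml
apiVersion: k8s.nginx.org/v1
kind: VirtualServer
metadata:
  name: example-vs            # hypothetical name
  namespace: default
spec:
  host: example.com           # hypothetical host
  upstreams:
    - name: backend
      service: backend-svc    # hypothetical Service name
      port: 80
      use-cluster-ip: true    # proxy to the Service ClusterIP instead of individual pod IPs
  routes:
    - path: /
      action:
        pass: backend
```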
Expected behavior
No reload should be needed.
Your environment
Version of the Ingress Controller: 3.4.0
Version of Kubernetes: 1.28
Kubernetes platform: EKS
Using NGINX
Additional context
Is there a config option to control the reload behavior?
For example, reload frequency, a minimum reload interval, or anything else that can reduce how often NGINX reloads.
> I understand that, for some reason, the default upstream config uses pod IPs rather than the service IP.
This is true for this implementation, and it is not uncommon across other ingress controller implementations.
Using the EndpointSlices API to enumerate the pods of the backend service and configure those allows the ingress controller to manage the load balancing, perform splits, identify individual pod failures, steer traffic away from slow pods (and possibly overloaded nodes), support apps that need session persistence to a backend pod, etc.
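As a simplified illustration (hypothetical names and addresses, not taken from this issue), the controller watches objects like the following; every pod replacement changes the listed addresses, which is what drives the upstream updates:

```yaml
apiVersion: discovery.k8s.io/v1
kind: EndpointSlice
metadata:
  name: backend-svc-abc12                    # hypothetical, auto-generated name
  labels:
    kubernetes.io/service-name: backend-svc  # hypothetical Service name
addressType: IPv4
endpoints:
  - addresses:
      - 10.0.1.15        # pod IP; replaced whenever the pod is recreated
    conditions:
      ready: true
ports:
  - name: http
    port: 8080
    protocol: TCP
```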
If you are running the free version of this project, NGINX Open Source is used as the proxy, and backend service changes will result in a reload. There is batching to optimize this, but the time span matters. With the NGINX Plus implementation these backend changes are handled differently, and configuration changes are applied without reloading.
During a reload normal NGINX behavior happens, where existing connections are maintained and allowed to end naturally, and new connections are made using the new configuration.
We apply no limits or timeouts to this.
> so if I have 100 pods and they are rolled out one by one, NGINX will reload 100 times, and each reload currently consumes a large amount of memory/CPU
The amount of memory necessary depends on your traffic volume, traffic type, application behavior, etc. But yes, maintaining the existing connections does require additional resources.
We always advise customers to allow for spikes like this when setting limits.
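As a rough, hypothetical sketch (not official sizing guidance), that means leaving headroom in the controller container's resources so the temporary overlap of old and new worker processes during a reload does not hit the memory limit:

```yaml
# Hypothetical resources for the ingress controller container;
# actual values depend on traffic volume, connection lifetimes, and config size.
resources:
  requests:
    cpu: 500m
    memory: 512Mi
  limits:
    memory: 1Gi    # headroom above steady-state usage to absorb reload spikes
```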
> but when pods are rolled out one by one, NGINX still reloads on every pod change
If use-cluster-ip is set and, for example, only that one VirtualServer is defined, the expectation is that if the only change happening is to the upstream group (the single backend service), then NGINX should not be reloaded.
I am certain someone will be investigating the behavior you describe.