Receive Upstream Timeout from IP which is not present on k8s #5445
Comments
Hi @ibadullaev-inc4 thanks for reporting! Be sure to check out the docs and the Contributing Guidelines while you wait for a human to take a look at this 🙂 Cheers!
NGINX Ingress Controller configures upstreams using EndpointSlices, and only with endpoints that are also 'ready'. Can you help me understand your scenario a bit more deeply? If it is a timing issue, we recommend using a health check.
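(For reference, an active health check can be attached to an upstream in the VirtualServer resource; a minimal sketch of the relevant fields, assuming NGINX Plus and hypothetical service name, port, and path:)

```yaml
# Excerpt from a VirtualServer spec (active health checks require NGINX Plus).
upstreams:
  - name: backend
    service: backend-svc   # hypothetical Service name
    port: 8080             # hypothetical port
    healthCheck:
      enable: true
      path: /healthz       # hypothetical health endpoint
      interval: 10s
      fails: 3
      passes: 1
```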
Hi, thank you for your response
Hi @brianehlert, thank you for your previous response. Is it not possible to add a health check if I don't use NGINX Plus?
Passive health checks are always present, but active health checks are a capability specific to NGINX Plus. By default, NGINX Ingress Controller won't add pods to the service upstream group until the pod reports ready.
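(For reference, the readiness gating mentioned above is driven by the pod's readiness probe; a minimal sketch of a backend Deployment, with the image, port, and probe path all hypothetical:)

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: backend                         # hypothetical name
spec:
  replicas: 3
  selector:
    matchLabels:
      app: backend
  template:
    metadata:
      labels:
        app: backend
    spec:
      containers:
        - name: backend
          image: example/backend:1.0    # hypothetical image
          ports:
            - containerPort: 8080
          # The pod is only marked ready (and added to the EndpointSlices,
          # and hence to the nginx upstream group) once this probe succeeds.
          readinessProbe:
            httpGet:
              path: /healthz            # hypothetical health endpoint
              port: 8080
            initialDelaySeconds: 5
            periodSeconds: 5
```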
Hello, yes, my deployment is configured with liveness and readiness probes.
The deployment doesn't give us much information to assist with. If a pod of a service no longer exists, it should be removed from the ingress controller upstream group for that service.
Hi, thank you for the fast response.
The manifest related to our service follows.
Hi all, do I need to provide anything else? I added some manifests in the earlier comments. As I mentioned before, nginx is trying to send requests to IP addresses of pods that are no longer alive.
@ibadullaev-inc4 are you able to confirm if this occurs only during a rolling upgrade of the backend deployment? Note, you have …
@ibadullaev-inc4 before sending requests to the service, did the backend service deployment complete and the nginx reload finish?
Hello @pdabelf5
When I say that we are restarting the service, I mean we are deploying a new version of our service. After we update our deployment, our pods sequentially terminate and new ones appear, which have the new image version.
I didn't understand this part of the question: we don't restart nginx when we deploy a new version of our backend.
Yes, we observe this issue only in this case.
Do you mean that we should switch this parameter to true?
Note: when I manually delete a pod with the command "kubectl delete pods backend-xxxxxxx" that has the IP, for example, X.X.X.X, I see that nginx removes this IP address from its upstream configuration. This means nginx behaves correctly and stops passive monitoring for this IP. But when updating the backend service, nginx most likely does not remove the old IPs from its configuration and continues to send traffic to them.
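(For context, the rolling update described above is governed by the Deployment's update strategy; a minimal sketch, with hypothetical values, of a strategy that keeps old pods serving until their replacements report ready:)

```yaml
# Excerpt from the backend Deployment spec (values hypothetical).
# During a rolling update, old pods terminate and their IPs should drop out
# of the EndpointSlices as new, ready pods replace them.
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1          # allow at most one extra pod during the update
      maxUnavailable: 0    # keep existing pods until replacements are ready
```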
Hello @ibadullaev-inc4, in order for … to take effect, nginx should reload when the upstream pod IPs in the backend service are updated, and the upstream config should contain the current list of pod IPs. I am trying to replicate the issue you are seeing. Is the configuration you have provided the smallest example configuration that results in the problem you have encountered? Is there anything I might need when replicating the issue, e.g. WebSockets or gRPC? The example you provided suggests you are using 3.4.0; have you tried 3.4.3 or 3.5.1 and seen the same issue?
Hello @pdabelf5
Yes, we forgot to enable this setting. Thank you for your attention.
Oh, that's very strange. Are you sure I need to restart nginx every time I update my backend service (deployment)? Currently, we are using Istio and we don't restart it when deploying a new version.
"We also use gRPC and WebSockets in our configuration, but I'm not sure if they could influence or be the source of the problem. I think you can try without them, as we see that the timeout issue is related to the HTTP protocol."
We only tried version 3.4.0. |
Apologies if I implied the nginx reload was something you needed to perform; it should happen automatically when the nginx config is updated by the controller. I will let you know how things go with trying to replicate the issue as soon as I have something to tell.
Can you expand on how you are using Istio in your setup? Is it separate? Is it a sidecar within NGINX Ingress Controller?
@ibadullaev-inc4 I put together a test that deploys a test application and then updates it while sending requests to it during the update. I'm afraid I wasn't able to reproduce the problem you have encountered.
Hi @ibadullaev-inc4 are you still encountering this issue? |
Describe the bug
Hi, we are using the VirtualServer CRD to configure routes on k8s that send traffic to the upstream backend.
After restarting the upstream server (backend), which has a temporary IP address, our nginx ingress continues to send traffic to an IP address that is no longer present.
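(The manifests referenced in the comments are not reproduced here; as a rough illustration of the setup described, a minimal VirtualServer might look like the sketch below, with the host, service name, and port all hypothetical:)

```yaml
apiVersion: k8s.nginx.org/v1
kind: VirtualServer
metadata:
  name: backend                   # hypothetical name
  namespace: default
spec:
  host: backend.example.com       # hypothetical host
  upstreams:
    - name: backend
      service: backend-svc        # hypothetical Service name
      port: 8080                  # hypothetical port
  routes:
    - path: /
      action:
        pass: backend
```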
To Reproduce
Steps to reproduce the behavior:
Expected behavior
If an upstream IP address is not present in the endpoints, nginx should not try to send traffic to that non-existent IP.
Your environment
DigitalOcean
NGINX
Additional context
Add any other context about the problem here. Any log files you want to share.
Config inside ingress controller