RBAC Permission Failures on Existing Cluster, replicasets and nodes? #5840
-
Came back from the weekend today and we're seeing a chunk of errors in our ingress controller, which is no longer processing traffic. This isn't all of them, but they're all looking for replicaset and node RBAC access. For example:
We've got this same deployment mapped across eight environments, but this is the only one failing, and after looking all day we're at a loss. We can see in the RBAC YAML in the repo that those permissions aren't deployed with the cluster, and as far as we know they aren't (shouldn't be?) required. The stack is Azure Kubernetes Service, local accounts with Kubernetes RBAC. As far as we can tell from our logs, nothing changed. We've redeployed the controller a few times with no success. Does anyone in the community have ideas that might point us in the right direction?
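For reference, the access the errors are asking for would look something like the ClusterRole below. This is a sketch reconstructed from the error messages, not a manifest from our repo; the role name is a placeholder, and granting it is a workaround rather than an explanation of why the controller started requesting these permissions:

```yaml
# Hypothetical workaround sketch (not an official NIC manifest):
# read access to the replicasets and nodes resources the errors mention.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: nginx-ingress-extra-read   # placeholder name
rules:
  - apiGroups: ["apps"]
    resources: ["replicasets"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["get", "list", "watch"]
```

This would still need a ClusterRoleBinding to the controller's service account to take effect.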
-
Hi @jordanabakerafs, you can opt out of telemetry. The NGINX Ingress Controller docs provide information on how to do it. In NIC v3.6.0 (release date June 25th), permission (RBAC) issues won't be reported as errors. cc @shaun-nx
Just noticed we need to update this argument in the docs from `.telemetry` to `.telemetryReporting`. It should be possible to find in the values.yaml file: `controller.telemetryReporting.enable=false`.
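Sketching that out as a Helm values fragment (the key path is taken from the comment above; verify it against the values.yaml of your chart version, since the docs currently show the older `.telemetry` name):

```yaml
# values.yaml sketch: opt out of telemetry reporting for the
# NGINX Ingress Controller Helm chart.
controller:
  telemetryReporting:
    enable: false
```

The same setting could be passed on the command line with `--set controller.telemetryReporting.enable=false` during `helm install` or `helm upgrade`.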