Is your feature request related to a problem? Please describe.
We should be able to scale the NGINX Ingress Controller in a very large cluster with minimal Kubernetes API calls.
This problem arises from running multiple NIC deployments in a single cluster, combined with the newer Kubernetes pattern of maintaining leadership through the leader-election API.
Describe the solution you'd like
Minimize the impact on the Kubernetes API of maintaining a leader for each NIC deployment.
Describe alternatives you've considered
Remove the historic ConfigMap leader maintenance code
Releases 3.2 and 3.3 made both ConfigMap and leader API calls. It is believed the ConfigMap path was kept for compatibility at the time, and no negative reports have been received.
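As a rough illustration (not the controller's actual code), a Lease-only lock built with client-go's resourcelock package talks only to the coordination.k8s.io API, whereas the historic "configmapsleases" migration lock also updated a ConfigMap on every renewal. The namespace, lease name, and identity below are assumptions:

```go
// Hypothetical sketch: build a Lease-only resource lock with client-go.
// Namespace, lease name, and identity are illustrative assumptions.
package main

import (
	"log"
	"os"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

func newLeaseLock() (resourcelock.Interface, error) {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		return nil, err
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// "leases" issues no ConfigMap reads or writes; the older
	// "configmapsleases" migration lock wrote to both objects,
	// roughly doubling the write traffic per renewal.
	return resourcelock.New(
		resourcelock.LeasesResourceLock,
		"nginx-ingress",        // namespace (assumption)
		"nginx-ingress-leader", // lease name (assumption)
		client.CoreV1(),
		client.CoordinationV1(),
		resourcelock.ResourceLockConfig{Identity: os.Getenv("POD_NAME")},
	)
}

func main() {
	if _, err := newLeaseLock(); err != nil {
		log.Fatal(err)
	}
}
```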
Change the interval at which the Kubernetes API is called to check and renew the leader during leader election.
While exposing knobs to tune leader behavior is possible, at scale it has been determined that lowering leader API activity far enough would negate the benefit of the leader being the single pod that reports configuration state, because that reported state could be lost for an extended period if the leader is lost.
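For context, a minimal sketch of the timing knobs client-go exposes for leader election; the names and durations here are illustrative assumptions, not proposed defaults:

```go
// Hypothetical sketch of stretching client-go leader-election intervals.
// All names and durations are illustrative assumptions.
package main

import (
	"context"
	"log"
	"os"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		log.Fatal(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Lease-based lock; every acquisition attempt and renewal is an API call.
	lock := &resourcelock.LeaseLock{
		LeaseMeta:  metav1.ObjectMeta{Namespace: "nginx-ingress", Name: "nginx-ingress-leader"},
		Client:     client.CoordinationV1(),
		LockConfig: resourcelock.ResourceLockConfig{Identity: os.Getenv("POD_NAME")},
	}

	leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
		Lock: lock,
		// Stretching these well beyond the commonly used 15s/10s/2s defaults
		// cuts API traffic, but a dead leader can then go unnoticed for up to
		// LeaseDuration, during which no pod reports configuration state.
		LeaseDuration: 120 * time.Second,
		RenewDeadline: 60 * time.Second,
		RetryPeriod:   30 * time.Second,
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: func(ctx context.Context) { log.Println("started leading") },
			OnStoppedLeading: func() { log.Println("stopped leading") },
		},
	})
}
```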
Consider an external data store
The project does not want to build a dependency on an additional or external component solely for maintaining leader state. This not only adds complexity to the system but also expands the potential for failure.
Invent our own solution that maintains reasonably accurate leader state
While it is believed that Kubernetes needs to provide a long-term solution to this problem, we cannot wait for that solution to be delivered.
Additional context
This problem is also being discussed in the larger Kubernetes community, where it has been identified: