Add the possibility for the controller to be in a shared namespace #4558
Hi @panzouh thanks for reporting! Be sure to check out the docs and the Contributing Guidelines while you wait for a human to take a look at this 🙂 Cheers!
Hey @panzouh, reading the description it seems you're interested in directly communicating with nginx from outside the IC, so is the Ingress Controller the right project for this purpose?
Hello, my problem does indeed concern communication with the nginx process, but this is made impossible by the values in the chart and in the templates. Although nginx could expose a route or a socket to force a reload, I think it already has everything built in to do what I need: https://nginx.org/en/docs/control.html. I also think it is simpler to patch a few lines in a values file than to change nginx behavior.
I have been watching this and keep going back to this comment:
I think everyone would be best served by supporting multiple config maps. In the end we want to deliver something that stays true to a native K8s integration, keeps the K8s API the source of truth (and thus any CI/CD tooling that drives configuration through the K8s API), and does not provide strong conduits to work around the way the system is implemented. Curious what others think.
I do agree, it is indeed a better direction considering the K8s integration, although I did not look much into the code to evaluate the cost of this feature compared to adding a couple of keys in the values file and templates.
I looked at the code a little; could these snippets do the trick?

```go
// main.go
func processConfigMaps(kubeClient *kubernetes.Clientset, cfgParams *configs.ConfigParams, nginxManager nginx.Manager, templateExecutor *version1.TemplateExecutor) *configs.ConfigParams {
	var aggregatedParams *configs.ConfigParams
	if *nginxConfigMaps != "" {
		nginxConfigMapArr := strings.Split(*nginxConfigMaps, ",")
		for _, nginxConfigMap := range nginxConfigMapArr {
			ns, name, err := k8s.ParseNamespaceName(nginxConfigMap)
			if err != nil {
				glog.Fatalf("Error parsing the nginx-configmaps argument: %v", err)
			}
			cfm, err := kubeClient.CoreV1().ConfigMaps(ns).Get(context.TODO(), name, meta_v1.GetOptions{})
			if err != nil {
				glog.Fatalf("Error when getting ConfigMap %v/%v: %v", ns, name, err)
			}
			cfgParams = configs.ParseConfigMap(cfm, *nginxPlus, *appProtect, *appProtectDos, *enableTLSPassthrough)
			if cfgParams.MainServerSSLDHParamFileContent != nil {
				fileName, err := nginxManager.CreateDHParam(*cfgParams.MainServerSSLDHParamFileContent)
				if err != nil {
					glog.Fatalf("Configmap %s/%s: Could not update dhparams: %v", ns, name, err)
				} else {
					cfgParams.MainServerSSLDHParam = fileName
				}
			}
			if cfgParams.MainTemplate != nil {
				err = templateExecutor.UpdateMainTemplate(cfgParams.MainTemplate)
				if err != nil {
					glog.Fatalf("Error updating NGINX main template: %v", err)
				}
			}
			if cfgParams.IngressTemplate != nil {
				err = templateExecutor.UpdateIngressTemplate(cfgParams.IngressTemplate)
				if err != nil {
					glog.Fatalf("Error updating ingress template: %v", err)
				}
			}
			// Merge the config params so that later ConfigMaps override earlier ones.
			if aggregatedParams == nil {
				aggregatedParams = cfgParams
			} else {
				aggregatedParams = configs.MergeConfigParams(aggregatedParams, cfgParams)
			}
		}
	}
	return aggregatedParams
}
```

```go
// configs/configmaps.go
func MergeConfigParams(target, source *ConfigParams) *ConfigParams {
	if target == nil {
		// If the target is nil, create a new instance to avoid modifying the input.
		target = &ConfigParams{}
	}
	// Merge individual fields, letting non-zero values from source win.
	if source.HTTP2 {
		target.HTTP2 = source.HTTP2
	}
	// [...]
	return target
}
```
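To make the proposed merge semantics concrete, here is a self-contained sketch with a reduced stand-in for `ConfigParams` (the real struct has many fields; the two shown here are illustrative):

```go
// Sketch of the "later ConfigMaps win" merge proposed above, using a
// reduced stand-in for the controller's ConfigParams struct.
package main

import "fmt"

// ConfigParams is a simplified stand-in for configs.ConfigParams.
type ConfigParams struct {
	HTTP2           bool
	WorkerProcesses string
}

// MergeConfigParams copies any non-zero field of source over target,
// so values from ConfigMaps later in the comma-separated list override
// values from earlier ones.
func MergeConfigParams(target, source *ConfigParams) *ConfigParams {
	if target == nil {
		target = &ConfigParams{} // avoid mutating a nil input
	}
	if source.HTTP2 {
		target.HTTP2 = true
	}
	if source.WorkerProcesses != "" {
		target.WorkerProcesses = source.WorkerProcesses
	}
	return target
}

func main() {
	base := &ConfigParams{WorkerProcesses: "4"}
	override := &ConfigParams{HTTP2: true}
	merged := MergeConfigParams(base, override)
	fmt.Println(merged.HTTP2, merged.WorkerProcesses) // true 4
}
```

One caveat of this pattern: a later ConfigMap can enable a boolean but never disable it, since the zero value is treated as "not set"; pointer fields would be needed to distinguish "unset" from "explicitly false".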
@panzouh you can push custom config via a configmap as well, see https://github.com/nginxinc/kubernetes-ingress/tree/v3.3.1/examples/shared-examples/custom-templates#example.
I don't think that creating a fully customized configuration for a basic http block is the correct way of dealing with the problem; I would rather push another configmap via the configmap flag.
Can you clarify it a bit more here: is the goal to modify nginx.conf using another configmap?
Sure. I have a configuration which is dynamically pulled from an API and built into an http block saved in a separate NGINX configuration file, and in the NGINX default configmap I include the folder where that file lives.
👋 @shaun-nx When will my change be released through the Helm repository?
What is in `main` will go out with the next release.
Hi @panzouh let us know if the change worked for you. If so we can close this issue 😄 |
Hi, yes. I had to clone the repo to try it out, but adding sharedProcessNamespace solved my issue. I can't wait to see the next release! 😉
Is your feature request related to a problem? Please describe.
I would like to run nginx with a sidecar whose purpose is to pull a configuration from an API and push it to nginx via a shared volume. My problem is that when the configuration is updated I would like to send a SIGHUP signal to the nginx process so that it gracefully shuts down its workers and reloads the configuration.

Describe the solution you'd like
The solution would be simple: add this key in `values.yaml`: and this in `./templates/deployment.yaml` & `./templates/daemonset.yaml`:

Describe alternatives you've considered
I considered for a moment adding a cronjob to my cluster that fetches the configuration and runs `kubectl exec` into nginx to reload it, but for me that is not the proper way to do it, because it does not take into account whether the configuration has changed or not.

Edit: I also considered asking for support for multiple configmaps, since the flag `nginxConfigMaps` is named with a plural but supports only one configuration.