Support bypassing cluster local kubeconfig in favor of explicit kubeconfig #388
Comments
I need more details here to think about this properly. I use KubeSpawner from a JupyterHub living in a container in a pod in the k8s cluster it controls, as part of the zero-to-jupyterhub-k8s project. This JupyterHub pod has the rights to work with the k8s api-server in the cluster through the RBAC rules configured for the pod's service account. Please describe your use case a bit @xrmzju; that is essential for work to be done towards it.
Sorry, I just opened the issue as a placeholder yesterday. In my case, I need to deploy JupyterHub in cluster A and create notebook pods in cluster B, so KubeSpawner needs to support loading an external cluster config (in my case, a kubeconfig). For now it uses the default in-cluster client.
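For context, a minimal sketch of what this use case amounts to with the kubernetes Python client: point it at an external cluster via an explicit kubeconfig instead of the pod's in-cluster service account. The file path and context name below are placeholders, not anything KubeSpawner ships with.

```python
from kubernetes import client, config

# Load credentials for cluster B from an explicit kubeconfig instead of
# the in-cluster service account of the pod running in cluster A.
# The path and context name are placeholders.
config.load_kube_config(
    config_file="/etc/jupyterhub/cluster-b-kubeconfig",
    context="cluster-b",
)

# Clients created after this talk to cluster B's api-server.
v1 = client.CoreV1Api()
for pod in v1.list_namespaced_pod(namespace="jupyterhub").items:
    print(pod.metadata.name)
```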
@xrmzju What happens when you specify a kubeconfig at the moment? kubespawner/kubespawner/reflector.py, lines 125 to 132 at 37a80ab, implies it should already work.
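The pattern in that part of the code base is, roughly, to try the in-cluster service-account credentials first and fall back to a kubeconfig. A paraphrase of the idea, not the exact lines at 37a80ab:

```python
from kubernetes import config
from kubernetes.config import ConfigException

try:
    # Inside a pod: use the mounted service-account token and CA cert.
    config.load_incluster_config()
except ConfigException:
    # Outside a cluster (no service-account mount): fall back to the
    # default kubeconfig (~/.kube/config or $KUBECONFIG).
    config.load_kube_config()
```

When JupyterHub runs inside a pod, the in-cluster branch succeeds and an explicit kubeconfig is never consulted, which is presumably why specifying one has no effect in the cluster-A/cluster-B scenario above.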
I made it work by passing an api_client.
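One way to build such a client with the kubernetes Python library is sketched below; the path and context are placeholders, and the exact hand-off into KubeSpawner is the commenter's workaround rather than a documented KubeSpawner API.

```python
from kubernetes import client, config

# Build a standalone ApiClient from an explicit kubeconfig for the
# external cluster; path and context name are placeholders.
api_client = config.new_client_from_config(
    config_file="/etc/jupyterhub/cluster-b-kubeconfig",
    context="cluster-b",
)

# This client can then be handed to code that accepts an api_client,
# e.g. a CoreV1Api bound to cluster B instead of the default config.
v1 = client.CoreV1Api(api_client=api_client)
```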
Update by @consideRatio
If KubeSpawner is to do work in another cluster, it needs a way to be passed the credentials to speak with that cluster's k8s api-server. Currently our logic doesn't support this:
kubespawner/kubespawner/reflector.py, lines 125 to 132 at 37a80ab
Original issue: (blank)