
Add ingress #607

Merged: m3dwards merged 1 commit into main from ingress on Sep 19, 2024
Conversation

@m3dwards (Collaborator) commented Sep 17, 2024:

To test:

warnet deploy ./resources/networks/6_node_bitcoin
warnet dashboard

Tested on Minikube, Docker Desktop, and Google, so I'm especially looking for Digital Ocean users to test.

@m3dwards m3dwards marked this pull request as draft September 17, 2024 18:32
@m3dwards m3dwards force-pushed the ingress branch 22 times, most recently from 56a5e87 to 22f9a42 Compare September 18, 2024 15:23
@m3dwards m3dwards marked this pull request as ready for review September 18, 2024 15:33
@pinheadmz (Contributor) commented Sep 18, 2024:

On Digital Ocean:


(.venv) --> warnet dashboard
Error getting ingress IP: 'NoneType' object is not subscriptable
Error: Could not get the IP address of the dashboard
If you are running Minikube please run 'minikube tunnel' in a separate terminal

ingress is running in the cluster:

-------------------------------------------------------------------------------
NGINX Ingress controller
  Release:       v1.11.2
  Build:         46e76e5916813cfca2a9b0bfdc34b69a0000f6b9
  Repository:    https://github.com/kubernetes/ingress-nginx
  nginx version: nginx/1.25.5

-------------------------------------------------------------------------------

W0918 18:43:08.397733       7 client_config.go:659] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
I0918 18:43:08.397978       7 main.go:205] "Creating API client" host="https://10.245.0.1:443"
I0918 18:43:08.408441       7 main.go:248] "Running in Kubernetes cluster" major="1" minor="31" git="v1.31.1" state="clean" commit="948afe5ca072329a73c8e79ed5938717a5cb3d21" platform="linux/amd64"
I0918 18:43:08.783390       7 main.go:101] "SSL fake certificate created" file="/etc/ingress-controller/ssl/default-fake-certificate.pem"
I0918 18:43:08.806629       7 ssl.go:535] "loading tls certificate" path="/usr/local/certificates/cert" key="/usr/local/certificates/key"
I0918 18:43:08.820812       7 nginx.go:271] "Starting NGINX Ingress controller"
I0918 18:43:08.847024       7 event.go:377] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress", Name:"ingress-nginx-controller", UID:"451d1690-f029-4cbe-80af-a13002d0f439", APIVersion:"v1", ResourceVersion:"1481707", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress/ingress-nginx-controller
I0918 18:43:10.023039       7 nginx.go:317] "Starting NGINX process"
I0918 18:43:10.023135       7 leaderelection.go:250] attempting to acquire leader lease ingress/ingress-nginx-leader...
I0918 18:43:10.023832       7 nginx.go:337] "Starting validation webhook" address=":8443" certPath="/usr/local/certificates/cert" keyPath="/usr/local/certificates/key"
I0918 18:43:10.023986       7 controller.go:193] "Configuration changes detected, backend reload required"
I0918 18:43:10.034971       7 leaderelection.go:260] successfully acquired lease ingress/ingress-nginx-leader
I0918 18:43:10.035141       7 status.go:85] "New leader elected" identity="ingress-nginx-controller-9756f5bd9-m8hnf"
I0918 18:43:10.070594       7 controller.go:213] "Backend successfully reloaded"
I0918 18:43:10.070676       7 controller.go:224] "Initial sync, sleeping for 1 second"
I0918 18:43:10.070796       7 event.go:377] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress", Name:"ingress-nginx-controller-9756f5bd9-m8hnf", UID:"11e0e27c-4a6d-4a63-b366-14d4185414b7", APIVersion:"v1", ResourceVersion:"1481808", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
W0918 18:43:23.487997       7 controller.go:1216] Service "warnet-logging/caddy" does not have any active Endpoint.
I0918 18:43:23.515028       7 main.go:107] "successfully validated configuration, accepting" ingress="warnet-logging/caddy-ingress"
I0918 18:43:23.523686       7 store.go:440] "Found valid IngressClass" ingress="warnet-logging/caddy-ingress" ingressclass="nginx"
I0918 18:43:23.523892       7 event.go:377] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"warnet-logging", Name:"caddy-ingress", UID:"744e87a7-fc7c-40a6-a0f2-5671669ec0c5", APIVersion:"networking.k8s.io/v1", ResourceVersion:"1481925", FieldPath:""}): type: 'Normal' reason: 'Sync' Scheduled for sync
I0918 18:43:25.162145       7 controller.go:193] "Configuration changes detected, backend reload required"
I0918 18:43:25.215936       7 controller.go:213] "Backend successfully reloaded"
I0918 18:43:25.216258       7 event.go:377] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress", Name:"ingress-nginx-controller-9756f5bd9-m8hnf", UID:"11e0e27c-4a6d-4a63-b366-14d4185414b7", APIVersion:"v1", ResourceVersion:"1481808", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
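
For context, the error above is the classic symptom of reading the load balancer address before the cloud provider has assigned one. A minimal sketch of how that failure mode arises with the official kubernetes Python client (hypothetical illustration, not the actual warnet code; the ingress name and namespace are taken from the logs in this thread):

from kubernetes import client, config

config.load_kube_config()
ing = client.NetworkingV1Api().read_namespaced_ingress("caddy-ingress", "warnet-logging")
# Until the cloud load balancer is provisioned, status.load_balancer.ingress is None,
# so subscripting it raises: TypeError: 'NoneType' object is not subscriptable
ip = ing.status.load_balancer.ingress[0].ip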

@pinheadmz (Contributor) commented:
And the ingress is there, and works!

Name:             caddy-ingress
Labels:           app.kubernetes.io/managed-by=Helm
Namespace:        warnet-logging
Address:          45.55.105.105
Ingress Class:    nginx
Default backend:  <default>
Rules:
  Host        Path  Backends
  ----        ----  --------
  *           
              /   caddy:80 (10.244.0.32:80)
Annotations:  meta.helm.sh/release-name: caddy
              meta.helm.sh/release-namespace: warnet-logging
              nginx.ingress.kubernetes.io/ssl-redirect: false
Events:
  Type    Reason  Age                    From                      Message
  ----    ------  ----                   ----                      -------
  Normal  Sync    3m28s (x2 over 6m15s)  nginx-ingress-controller  Scheduled for sync

[Screenshot: 2024-09-18 at 2:50:11 PM]

@pinheadmz (Contributor) commented:
OK, so this works! Yay, I just had to wait a minute.

@pinheadmz (Contributor) left a review:

ACK 🚀



INGRESS_HELM_COMMANDS = [
    "helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx",
@pinheadmz (Contributor) commented on the diff:

It's ingress-nginx even for caddy?

@m3dwards (Collaborator, Author) replied:

Yes, because the ingress controller is provided by nginx and caddy is the reverse proxy. I know, I know: layers upon layers, but that's how it is!
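
In other words, ingress-nginx satisfies the Ingress resource at the cluster edge, while caddy is simply the backend service it routes to (caddy:80 in the describe output above). A sketch of what the full install list plausibly looks like; only the first repo-add line is confirmed by the diff hunk above, and the remaining entries are assumed standard ingress-nginx Helm steps:

INGRESS_HELM_COMMANDS = [
    "helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx",
    "helm repo update",
    # assumed: install the controller into the "ingress" namespace seen in the controller logs
    "helm upgrade --install ingress-nginx ingress-nginx/ingress-nginx"
    " --namespace ingress --create-namespace",
]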

@pinheadmz pinheadmz mentioned this pull request Sep 19, 2024
@m3dwards (Collaborator, Author) commented:
@pinheadmz I'll add another line to the warnet dashboard output noting that, on the cloud, you may need to wait a minute for the load balancer to be provisioned.
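
A sketch of the kind of wait-and-retry that message could accompany; the helper name, defaults, and timings here are assumptions for illustration, not the merged implementation:

import time
from kubernetes import client, config

def wait_for_ingress_ip(name="caddy-ingress", namespace="warnet-logging", timeout=300):
    # Poll until the cloud provider assigns an external address to the ingress.
    config.load_kube_config()
    net = client.NetworkingV1Api()
    deadline = time.time() + timeout
    while time.time() < deadline:
        lb = net.read_namespaced_ingress(name, namespace).status.load_balancer.ingress
        if lb:
            return lb[0].ip or lb[0].hostname
        time.sleep(5)
    return None  # caller can print the "wait a minute for the load balancer" hint here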

@bdp-DrahtBot (Collaborator) commented:
The following sections might be updated with supplementary metadata relevant to reviewers and maintainers.

Conflicts

Reviewers, this pull request conflicts with the following ones:

  • #614 (swap out kubectl by mplsgrant)
  • #612 (Add --debug option to run scenarios for faster development by pinheadmz)
  • #610 (Offer to install helm into venv by mplsgrant)
  • #598 (Add cmd for creating user kubeconfigs (cont. from add create-kubeconfigs command #545) by josibake)

If you consider this pull request important, please also help to review the conflicting pull requests. Ideally, start with the one that should be merged first.

@m3dwards m3dwards merged commit c23b5ac into main Sep 19, 2024
14 checks passed
@m3dwards m3dwards deleted the ingress branch September 19, 2024 23:03