| App | Status | Entrypoint |
| --- | --- | --- |
| bootstrap | n/a | |
| sealed-secrets | n/a | |
| argo-cd | ArgoCD | |
| traefik | Traefik | |
| sharevic.net | Sharevic Homepage | |
| mbq | Mobile Ticket Queue | |
| kutuapp-test | KuTu App Test | |
| kutuapp | KuTu App | |
| kmgetubs19 | KmGeTuBS19 Homepage | |
```sh
export NIC_IPS=<iprange for metallb>
export ACCESSNAME=<storj.accessname>
export ACCESSGRANT=<storj.accessgrant>
export BACKUP_DATE=<date of backup to restore from>
# bash -c "$(curl -fsSL https://raw.githubusercontent.com/luechtdiode/mk8-argo/master/setup.sh)"
bash -ci "$(curl -fsSL https://raw.githubusercontent.com/luechtdiode/mk8-argo/mk8-128/setup.sh)"
```
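Before piping setup.sh into a shell, it can help to verify that every required variable from the exports above is actually set. A minimal sketch (the `require_vars` helper is not part of the repo):

```sh
#!/usr/bin/env sh
# require_vars NAME... : report each unset or empty variable; return 1 if any is missing.
require_vars() {
  rc=0
  for name in "$@"; do
    eval "value=\${$name:-}"
    if [ -z "$value" ]; then
      echo "missing: $name" >&2
      rc=1
    fi
  done
  return $rc
}

# Check the variables used by setup.sh before downloading and running it.
if require_vars NIC_IPS ACCESSNAME ACCESSGRANT BACKUP_DATE; then
  echo "environment looks complete"
else
  echo "set the missing variables before running setup.sh" >&2
fi
```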
After bootstrapping via Argo CD, some deployments occasionally get stuck. Follow these steps to resolve the issues:
Log in to the Kubernetes dashboard (`kubectl get svc -n kube-system` shows the NodePort; the token can be found with `cat ~/.kube/config`), then:
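The dashboard lookup can be sketched as follows (a sketch; the service name `kubernetes-dashboard` is an assumption and varies between installations):

```sh
# List services in kube-system and note the dashboard's NodePort.
kubectl get svc -n kube-system

# Extract the NodePort directly (service name is an assumption):
kubectl get svc kubernetes-dashboard -n kube-system \
  -o jsonpath='{.spec.ports[0].nodePort}'

# The login token is embedded in the kubeconfig:
grep token ~/.kube/config
```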
- Check whether the MetalLB load balancer was able to claim the given NIC_IPS range.
- If not, patch with metallb/metallb-ippool.yaml and the given IP ranges.
- Re-apply the unsynced resources with the Force and Replace options and with schema validation disabled.
- Refresh and wait.
- Use bootstrap.sh with restoreSecrets. If that doesn't help, reseal the secrets and push the sealed secrets to Git.
- Go to the Argo Application CRs and delete the stuck apps.
- Source bootstrap.sh.
- Use bootstrapViaArgo to reinstall the Argo-managed apps, and wait until all pods are running.
- Use restoreAppStates and restoreAppDBStates only if the disk backups can be reused by the newly installed components. In some cases incompatibility issues can occur; in that case, use the backuprestore scripts and restore each app individually with manual parametrization.
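Two of the steps above can be sketched in more concrete terms: re-applying the MetalLB pool and deleting a stuck Argo CD application. The manifest path comes from this repo; the application name is a placeholder:

```sh
# Re-apply the MetalLB address pool from the repo with the NIC_IPS range.
kubectl apply -f metallb/metallb-ippool.yaml

# Inspect Argo CD applications and find the stuck ones.
kubectl get applications -n argocd

# A stuck application often hangs on its finalizer; clearing it lets the
# delete complete ("my-stuck-app" is a placeholder name).
kubectl patch application my-stuck-app -n argocd \
  --type merge -p '{"metadata":{"finalizers":null}}'
kubectl delete application my-stuck-app -n argocd
```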