Argo CD Conductr - GitOps Everything 🧪


Report Bug · Request Feature

Table of Contents
  1. About The Project
  2. Getting Started
  3. TODO
  4. Known Issues
  5. References
  6. License

About The Project

The primary goal of this project is to exercise Argo CD based GitOps deployment covering the full cycle - up to production via promotion, if you want to. Experimentation and production should not conflict.

The change process starts at localhost. Hence, we consider the kind experience very important. Given that, some elements may be useful in a CI context. Most things should play nicely in productive environments as well.

Demo using terraform bootstrapping a single node kind cluster showing deployments, statefulsets and daemonsets as they enter their desired state 🪄🎩🐰


Goals

  • Speed : A fast cycle from localhost to production 🚀
  • Fail early and loud (notifications)
  • Scalability
  • Simplicity (yes, really)
  • Composability
  • Target kind, vanilla Kubernetes and Openshift including crc

Non Goals

Decisions

We use a single long lived branch main and map environments with directories. Leveraging branches for environment propagation appears easy, but comes with its own set of issues.

We use single level environment staging with one cluster per environment. We do not stage via names or namespaces in this context, and we don't even dare to do multi-tenancy in a single cluster (OLMv1 drops it). This should help with isolation and loose coupling, support the cattle model, and keep things simpler. We want cluster scoped staging. Adding another nested level introduces issues ("Matrjoschka Architecture").

We prefer Pull over Push.

We focus on one "Platform Team" managing many clusters using a single repo. It should enable ArgoCD embedding for Application verticals.
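
To illustrate the one-repo-many-clusters idea, a minimal, hypothetical ApplicationSet with a cluster generator could stamp out one Application per cluster registered in Argo CD. The path and sync policy below are assumptions for illustration, not necessarily what this repo does:

```yaml
# Hypothetical sketch: fan out one Application per cluster known to Argo CD.
# Repo URL, path, project and sync policy are illustrative only.
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: platform-addons
  namespace: argocd
spec:
  generators:
    - clusters: {}              # every cluster registered in Argo CD
  template:
    metadata:
      name: 'addons-{{name}}'
    spec:
      project: default
      source:
        repoURL: https://github.com/deas/argocd-conductr.git
        targetRevision: main
        path: apps/addons       # placeholder path
      destination:
        server: '{{server}}'
        namespace: argocd
      syncPolicy:
        automated:
          prune: true
          selfHeal: true
```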

Following the App of Apps pattern, our local root Application lives at envs/local. The root app kicks off various ApplicationSets covering similarly shaped (e.g. helm/kustomize) apps hosted in apps. Within that folder, we do not want Argo CD resources. This helps with separation and quick testing cycles.
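
A minimal sketch of what such a root Application could look like, assuming this public repo as the source - the actual manifest lives under envs/local and may differ:

```yaml
# Minimal App-of-Apps root sketch - values are illustrative, the real
# manifest lives under envs/local in this repo.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: root
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/deas/argocd-conductr.git
    targetRevision: main
    path: envs/local            # environment = directory, per the decision above
  destination:
    server: https://kubernetes.default.svc
    namespace: argocd
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
```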

OLM has a bigger footprint than helm, and it comes with its own set of issues as well. It is higher level and way more user friendly. With some components (e.g. Argo CD, Loki, LVM) helm is the second class citizen. With others (e.g. Rook), it's the opposite. We prefer first class citizens. Hence, we default to bringing in OLM when it is not there initially (such as on kind).
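
As a rough, hedged sketch of what bringing in OLM via helm could look like as an Argo CD Application - the chart repository URL, chart name and version below are placeholders, check the repo for the chart actually used:

```yaml
# Hedged sketch: OLM brought in as a helm chart via an Argo CD Application.
# repoURL, chart and targetRevision are placeholders - not a real chart reference.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: olm
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://example.com/charts   # placeholder chart repository
    chart: olm                            # placeholder chart name
    targetRevision: 0.0.0                 # placeholder version
  destination:
    server: https://kubernetes.default.svc
    namespace: olm
  syncPolicy:
    automated: {}
    syncOptions:
      - CreateNamespace=true
```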

Features

We cover deployments of:

  • Argo CD (self managed)
  • Argo CD Notifications
  • Argo-CD Image-Updater
  • Argo Rollouts
  • Argo Events
  • Operator Lifecycle Management
  • Metallb
  • Kube-Prometheus
  • Loki/Promtail
  • Velero
  • Cert-Manager
  • AWS Credentials Sync
  • Sealed Secrets
  • SOPS Secrets
  • Submariner
  • Caretta
  • LitmusChaos

Beyond deployments, we feature:

  • mise aiming at a more uniform environment locally and in CI
  • make based tasks
  • Github Actions integration
  • Prometheus Rule Unit Testing (see the sketch after this list)
  • A bare bones alerting application in case you want to send alerts to very custom receivers (like Matrix Chat Rooms)
  • Open Cluster Management / Submariner Hub and Spoke Setup (WIP)
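
To make the Prometheus rule unit testing point above concrete, promtool test files look roughly like the following; the rule file, alert name and labels here are made up for illustration and assume a hypothetical OutOfSync alert rule:

```yaml
# Illustrative promtool unit test - rule file, alert and labels are made up.
rule_files:
  - argocd-alerts.yaml
evaluation_interval: 1m
tests:
  - interval: 1m
    input_series:
      - series: 'argocd_app_info{name="demo", sync_status="OutOfSync"}'
        values: '1+0x15'        # stays OutOfSync for 16 samples
    alert_rule_test:
      - eval_time: 15m
        alertname: ArgoAppOutOfSync
        exp_alerts:
          - exp_labels:
              severity: warning
              name: demo
              sync_status: OutOfSync
            exp_annotations:
              summary: Application demo is OutOfSync
```

A test file like this runs via promtool test rules in CI.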

(back to top)

Getting Started

Some opinions first:

  • YAML at scale is ... terrible. Unfortunately, there is no way around it.
  • CI/CD usually comes with horrible DX: β€œ.. it’s this amalgamation of scripts in YAML tied together with duct tape.”
  • CI/CD should enable basic inner loop local development. It should not be git commit -m hoping-for-the-best && git push.
  • Naming ... is hard
  • Joining clusters is hard (e.g. Submariner)
  • Beware of Magic 🎩🪄🐰 (e.g. the Argo CD helm release changes when Prometheus CRDs become available)
  • Beware of helm shared values or kustomize bases. We deploy main, so shared bits kick in on all environments.
  • Versions/Refs: Pin or float? It depends. We should probably pin things in critical environments and keep things floating a bit more elsewhere (see the sketch after this list).
  • Don't try too hard modeling deps and ordering. Failing to start a few times can be perfectly fine. Honor this when modeling your alerts.
  • We should propagate to production frequently.
  • Rebuilding everything automatically from scratch matters a lot. Drift kicks in fast, and rebuilding helps with recovery.
  • Bootstrapping OLM is painful - thank god, there is a helm chart these days.
  • When using Kubernetes bits in Terraform (e.g. the helm, kustomize, kubectl and kubernetes providers), only use the bare minimum, because deps are painful.
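
To make the pin-or-float bullet concrete, the choice usually boils down to a single targetRevision field on an Application source; repo URL, path and tag below are placeholders:

```yaml
# Illustrative only - repo URL, path and tag are placeholders.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: some-app
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/deas/argocd-conductr.git
    path: apps/some-app
    # Float on a branch in lab/local environments ...
    targetRevision: main
    # ... or pin an immutable tag / commit SHA in critical environments:
    # targetRevision: v1.2.3
  destination:
    server: https://kubernetes.default.svc
    namespace: some-app
```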

Prerequisites

  • make
  • kubectl
  • mise (highly recommended)
  • docker (if using kind)
  • terraform (optional)
  • helm (if not using terraform)

Usage

For basic demo purposes, you can use this public repo. If you want to run against your own repository, replace the git server references accordingly.

First, you should choose where to start, specifically whether you want to use terraform.

If you don't want to use terraform, you should be starting at the root folder. There is a Makefile with various ad hoc tasks. Simply running

make

should give you some help.

If you want to use terraform, you'll start similarly in the ./tf folder. The terraform module supports deployment to kind clusters.

Our preferred approach to secrets is sealed-secrets (have a look at gen-keys.sh in case you'd like to use sops instead).

If using github, you may want to disable github actions and/or add a public deployment key.

gh repo deploy-key add ...
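
If your fork is private, or Argo CD needs to pull with that key, the declarative counterpart is a repository Secret labeled for Argo CD; the secret name and key material below are placeholders:

```yaml
# Declarative Argo CD repository credential - name and key material are placeholders.
apiVersion: v1
kind: Secret
metadata:
  name: repo-argocd-conductr
  namespace: argocd
  labels:
    argocd.argoproj.io/secret-type: repository
stringData:
  type: git
  url: git@github.com:deas/argocd-conductr.git
  sshPrivateKey: |
    -----BEGIN OPENSSH PRIVATE KEY-----
    (private key matching the deploy key goes here)
    -----END OPENSSH PRIVATE KEY-----
```

You would not commit this in plain text to a GitOps repo - it is exactly the kind of thing sealed-secrets is for.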

In the root folder (w/o terraform), you should be checking

make -n argocd-helm-install-basic argocd-apply-root

Run this without -n once you feel confident to get the ball rolling.

The default local deployment will deploy a SealedSecret. It will fail during decryption, because we won't be sharing our key. It is meant to be used with Argo Notifications, so it is not critical for a basic demo. Feel free to introduce your own bootstrap secret.
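
For orientation, a sealed bootstrap secret is just an ordinary Secret piped through kubeseal against your own cluster, ending up roughly in this shape; names and ciphertext are placeholders:

```yaml
# Shape of a sealed bootstrap secret - names and ciphertext are placeholders.
apiVersion: bitnami.com/v1alpha1
kind: SealedSecret
metadata:
  name: argocd-notifications-secret   # assumed target, adjust to your setup
  namespace: argocd
spec:
  encryptedData:
    # Ciphertext produced by kubeseal; only your cluster's controller key can decrypt it.
    slack-token: AgB3...placeholder...
  template:
    metadata:
      name: argocd-notifications-secret
      namespace: argocd
```

The plain Secret is sealed once with kubeseal and never committed; only the sealed output lives in git.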

We want the lifecycle of things (create/destroy) to be as fast as possible. Pulling images can slow things down significantly. Contrary to docker or a host based solution (such as k3s), the challenges are harder with kind. Make sure to understand the details of your pain points before implementing a solution.

(back to top)

TODO

  • Operator Controller Should Provide a Standard Install Process
  • Improve ad hoc task support (smart branching) for Red Hat OpenShift GitOps (ns, secrets), and Ingress (login)
  • Introduce proper GitOps time travel support (tags/hashes)
  • Improve Openshift harmonization (esp. with regards to naming/namespaces)
  • kind based testing
  • Improve Unit/Integration Test Coverage
  • Prometheus based sync failure alerts (s. known issues)
  • It appears odd that using olm based installation of ocm still requires us to worry about the hub registration-operator.
  • There are TODO tags in code (to provide context)
  • It takes too long for prometheus to come up
  • terraform within Argo CD? (just like in tf-controller)
  • crossplane
  • For kind, we may want to replace Metallb with cloud-provider-kind
  • keycloak + sso (DNS) local trickery
  • Aspire Dashboard? (ultralight oTel)
  • Customer Use Case Demo litmus? Should probably bring the pure chaos bits to Argo CD deas/kaos
  • helm job sample
  • Argo CD Grafana Dashboard
  • Argo CD Service Monitor (depends on prom)
  • Canary-/Green/Blue Deployment (Rollouts)
  • default to auto update everything?
  • Proper self management of Argo CD
  • metrics-server
  • contour?
  • cilium
  • OPA Policies: Gatekeeper vs usage in CI
  • kubeconform in CI
  • Argo CD +/vs ACM/open cluster management
  • Notifications Sync alerts Slack/Matrix
  • Environment propagation
  • Manage Kubernetes Operators with Argo CD?
  • Try Argo-CD Autopilot
  • Proper cascaded removal. Argo CD should be last. Will likely involve terraform.
  • Applications in any namespace (s. Known Issues)
  • Service Account based OAuth integration on Openshift is nice - but tricky to implement: OpenShift Authentication Integration with Argo CD, Authentication using OpenShift
  • Openshift Proxy/Global Pull Secrets, Global Pull Secrets, Ingress + API Server Certs, IDP Integration
  • Improve Github Actions Quality Gates
  • Tracing Solution (zipkin, tempo)
  • oTel Sample
  • More Grafana Dashboards / Integrations with Openshift Console Plugin
  • Consider migrating make to just
  • Dedupe/Modularize Makefile/Justfile
  • ocm solutions
  • OCM : Integration with Argo CD
  • Argo CD rbac/multi tenancy?
  • ACM appears to auto approve CSRs. Open source auto-approvers appear to specifically target cert-manager (CRD) or kubelet. Introduce csr-approver
  • Introduce IPv6 with crc/kvm
  • Go deeper with nix/devenv - maybe even replace mise

See the open issues for a full list of proposed features (and known issues).

(back to top)

Known Issues

References

Contributing

Contributions are what make the open source community such an amazing place to learn, inspire, and create. Any contributions you make are greatly appreciated.

If you have a suggestion that would make this better, please fork the repo and create a pull request. You can also simply open an issue with the tag "enhancement". Don't forget to give the project a star! Thanks again!

  1. Fork the Project
  2. Create your Feature Branch (git checkout -b feature/AmazingFeature)
  3. Commit your Changes (git commit -m 'Add some AmazingFeature')
  4. Push to the Branch (git push origin feature/AmazingFeature)
  5. Open a Pull Request

(back to top)

License

Distributed under the MIT License. See LICENSE.txt for more information.

(back to top)
