Sveltos is a set of Kubernetes controllers that run in the management cluster. From the management cluster, Sveltos can manage add-ons and applications on a fleet of managed Kubernetes clusters.
+Sveltos comes with support to automatically discover ClusterAPI powered clusters, but it doesn't stop there. You can easily register any other cluster (on-prem, Cloud) and manage Kubernetes add-ons seamlessly.
ClusterProfile and Profile are the CustomResourceDefinitions used to instruct Sveltos which add-ons to deploy on a set of clusters.
ClusterProfile: It is a cluster-wide resource. It can match any cluster and reference any resource regardless of their namespace.
+Profile: It is a namespace-scoped resource that is specific to a single namespace. It can only match clusters and reference resources within its own namespace.
+By creating a ClusterProfile instance, you can easily deploy the below across a set of Kubernetes clusters.
+Define which Kubernetes add-ons to deploy and where:
+It is as simple as that!
The below example deploys the Kyverno Helm chart in every cluster carrying the label env=prod.
+The first step is to ensure the CAPI clusters are successfully registered with Sveltos. If you have not registered the clusters yet, follow the instructions mentioned here.
+If you have already registered the CAPI clusters, ensure they are listed and ready to receive add-ons.
+$ kubectl get sveltosclusters -n projectsveltos --show-labels
+
+NAME READY VERSION LABELS
+cluster12 true v1.26.9+rke2r1 sveltos-agent=present
+cluster13 true v1.26.9+rke2r1 sveltos-agent=present
+
Please note: The CAPI clusters are registered in the projectsveltos namespace. If you register the clusters in a different namespace, update the command above.
The second step is to assign a specific label to the Sveltos clusters so they receive the intended add-ons. In this example, we will assign the label env=prod.
+$ kubectl label sveltosclusters cluster12 env=prod -n projectsveltos
+$ kubectl label sveltosclusters cluster13 env=prod -n projectsveltos
+$ kubectl get sveltosclusters -n projectsveltos --show-labels
+
+NAME READY VERSION LABELS
+cluster12 true v1.26.9+rke2r1 env=prod,sveltos-agent=present
+cluster13 true v1.26.9+rke2r1 env=prod,sveltos-agent=present
+
The third step is to create a ClusterProfile Kubernetes resource and apply it to the management cluster.
+apiVersion: config.projectsveltos.io/v1alpha1
+kind: ClusterProfile
+metadata:
+ name: kyverno
+spec:
+ clusterSelector: env=prod
+ syncMode: Continuous
+ helmCharts:
+ - repositoryURL: https://kyverno.github.io/kyverno/
+ repositoryName: kyverno
+ chartName: kyverno/kyverno
+ chartVersion: v3.1.1
+ releaseName: kyverno-latest
+ releaseNamespace: kyverno
+ helmChartAction: Install
+
$ kubectl apply -f "kyverno_cluster_profile.yaml"
+
+$ sveltosctl show addons
+
++--------------------------+---------------+-----------+----------------+---------+-------------------------------+------------------+
+| CLUSTER | RESOURCE TYPE | NAMESPACE | NAME | VERSION | TIME | CLUSTER PROFILES |
++--------------------------+---------------+-----------+----------------+---------+-------------------------------+------------------+
+| projectsveltos/cluster12 | helm chart | kyverno | kyverno-latest | 3.1.1 | 2023-12-16 00:14:17 -0800 PST | kyverno |
+| projectsveltos/cluster13 | helm chart | kyverno | kyverno-latest | 3.1.1 | 2023-12-16 00:14:17 -0800 PST | kyverno |
++--------------------------+---------------+-----------+----------------+---------+-------------------------------+------------------+
+
Note: If you are not aware of the sveltosctl
utility, have a look at the installation documentation found here.
For a quick add-ons example, watch the Sveltos introduction video on YouTube.
ClusterProfile is the CustomResourceDefinition used to instruct Sveltos which add-ons to deploy on a set of clusters.
+clusterSelector field selects a set of managed clusters where listed add-ons and applications will be deployed.
helmCharts field consists of a list of Helm charts to be deployed to the clusters matching clusterSelector.
+ helmCharts:
+ - repositoryURL: https://kyverno.github.io/kyverno/
+ repositoryName: kyverno
+ chartName: kyverno/kyverno
+ chartVersion: v3.0.1
+ releaseName: kyverno-latest
+ releaseNamespace: kyverno
+ helmChartAction: Install
+
policyRefs field references a list of ConfigMaps/Secrets, each containing Kubernetes resources to be deployed in the clusters matching clusterSelector.
This field is a slice of PolicyRef structs. Each PolicyRef has the following fields: kind (ConfigMap or Secret), name, namespace, and deploymentType (Local or Remote).
+policyRefs:
+- kind: Secret
+ name: my-secret-1
+ namespace: my-namespace-1
+ deploymentType: Local
- kind: ConfigMap
+ name: my-configmap-1
+ namespace: my-namespace-1
+ deploymentType: Remote
+
kustomizationRefs field is a list of sources containing kustomization files. Resources will be deployed in the clusters matching the clusterSelector specified.
+This field is a slice of KustomizationRef structs. Each KustomizationRef has the following fields:
Kind: The kind of the referenced resource. The supported kinds are the Flux source kinds (GitRepository, OCIRepository, Bucket) as well as ConfigMap and Secret.
+Namespace: The namespace of the referenced resource. This field is optional and can be left empty. If it is empty, the namespace will be set to the cluster's namespace.
This field can be set to: OneTime, Continuous, ContinuousWithDriftDetection, or DryRun.
+Let's take a closer look at the OneTime syncMode option. Once you deploy a ClusterProfile with a OneTime configuration, Sveltos will check all of your clusters for a match with the clusterSelector. Any matching clusters will have the resources specified in the ClusterProfile deployed. However, if you make changes to the ClusterProfile later on, those changes will not be automatically deployed to already-matching clusters.
+Now, if you're looking for real-time deployment and updates, the Continuous syncMode is the way to go. With Continuous, any changes made to the ClusterProfile will be immediately reconciled into matching clusters. This means that you can add new features, update existing ones, and remove them as necessary, all without lifting a finger. Sveltos will deploy, update, or remove resources in matching clusters as needed, making your life as a Kubernetes admin a breeze.
+ContinuousWithDriftDetection instructs Sveltos to monitor the state of managed clusters and detect a configuration drift for any of the resources deployed because of that ClusterProfile. +When Sveltos detects a configuration drift, it automatically re-syncs the cluster state back to the state described in the management cluster. +To know more about configuration drift detection, refer to this section.
+Imagine you're about to make some important changes to your ClusterProfile, but you're not entirely sure what the results will be. You don't want to risk causing any unwanted side effects, right? Well, that's where the DryRun syncMode configuration comes in. By deploying your ClusterProfile with this configuration, you can launch a simulation of all the operations that would normally be executed in a live run. The best part? No actual changes will be made to the matching clusters during this dry run workflow, so you can rest easy knowing that there won't be any surprises. +To know more about dry run, refer to this section.
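A minimal sketch of what opting into a dry run might look like, reusing the Kyverno chart from the earlier examples (names and versions are illustrative):

```yaml
apiVersion: config.projectsveltos.io/v1alpha1
kind: ClusterProfile
metadata:
  name: kyverno
spec:
  clusterSelector: env=prod
  syncMode: DryRun   # simulate the operations; deploy nothing to matching clusters
  helmCharts:
  - repositoryURL: https://kyverno.github.io/kyverno/
    repositoryName: kyverno
    chartName: kyverno/kyverno
    chartVersion: v3.1.1
    releaseName: kyverno-latest
    releaseNamespace: kyverno
    helmChartAction: Install
```

The simulated changes can then be reviewed before switching syncMode to Continuous to apply them for real.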
The stopMatchingBehavior field specifies the behavior when a cluster no longer matches a ClusterProfile. By default (WithdrawPolicies), all Kubernetes resources and Helm charts deployed to the cluster will be removed. However, if stopMatchingBehavior is set to LeavePolicies, any policies deployed by the ClusterProfile will remain in the cluster.

For instance:
+apiVersion: config.projectsveltos.io/v1alpha1
+kind: ClusterProfile
+metadata:
+ name: kyverno
+spec:
+ stopMatchingBehavior: WithdrawPolicies
+ clusterSelector: env=prod
+ helmCharts:
+ - repositoryURL: https://kyverno.github.io/kyverno/
+ repositoryName: kyverno
+ chartName: kyverno/kyverno
+ chartVersion: v3.0.1
+ releaseName: kyverno-latest
+ releaseNamespace: kyverno
+ helmChartAction: Install
+
When a cluster matches the ClusterProfile, the Kyverno Helm chart will be deployed to that cluster. If the cluster's labels are subsequently modified and the cluster no longer matches the ClusterProfile, the Kyverno Helm chart will be uninstalled. However, if the stopMatchingBehavior property is set to LeavePolicies, Sveltos will retain the Kyverno Helm chart in the cluster.
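A sketch of the LeavePolicies variant (same chart as above, only stopMatchingBehavior changed):

```yaml
apiVersion: config.projectsveltos.io/v1alpha1
kind: ClusterProfile
metadata:
  name: kyverno
spec:
  stopMatchingBehavior: LeavePolicies   # keep deployed resources when a cluster stops matching
  clusterSelector: env=prod
  helmCharts:
  - repositoryURL: https://kyverno.github.io/kyverno/
    repositoryName: kyverno
    chartName: kyverno/kyverno
    chartVersion: v3.0.1
    releaseName: kyverno-latest
    releaseNamespace: kyverno
    helmChartAction: Install
```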
+The reloader property determines whether rolling upgrades should be triggered for Deployment, StatefulSet, or DaemonSet instances managed by Sveltos and associated with this ClusterProfile when changes are made to mounted ConfigMaps or Secrets. +When set to true, Sveltos automatically initiates rolling upgrades for affected Deployment, StatefulSet, or DaemonSet instances whenever any mounted ConfigMap or Secret is modified. This ensures that the latest configuration updates are applied to the respective workloads.
+Please refer to this section for more information.
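A sketch of a profile opting into this behaviour; the chart details mirror the earlier Kyverno examples:

```yaml
apiVersion: config.projectsveltos.io/v1alpha1
kind: ClusterProfile
metadata:
  name: kyverno
spec:
  clusterSelector: env=prod
  reloader: true   # trigger rolling upgrades when mounted ConfigMaps/Secrets change
  helmCharts:
  - repositoryURL: https://kyverno.github.io/kyverno/
    repositoryName: kyverno
    chartName: kyverno/kyverno
    chartVersion: v3.1.1
    releaseName: kyverno-latest
    releaseNamespace: kyverno
    helmChartAction: Install
```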
A ClusterProfile might match more than one cluster. When a change is made to a ClusterProfile, by default all matching clusters are updated concurrently.
The maxUpdate field specifies the maximum number of clusters that can be updated concurrently during an update operation triggered by changes to the ClusterProfile's add-ons or applications.
The specified value can be an absolute number (e.g., 5) or a percentage of the desired cluster count (e.g., 10%). The default value is 100%, allowing all matching clusters to be updated simultaneously.
For instance, if set to 30%, when modifications are made to the ClusterProfile's add-ons or applications, only 30% of matching clusters will be updated concurrently. Updates to the remaining matching clusters will only commence upon successful completion of updates in the initially targeted clusters. This approach ensures a controlled and manageable update process, minimizing potential disruptions to the overall cluster environment.
Please refer to this section for more information.
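A sketch with a 30% cap, per the description above (chart details are illustrative):

```yaml
apiVersion: config.projectsveltos.io/v1alpha1
kind: ClusterProfile
metadata:
  name: kyverno
spec:
  clusterSelector: env=prod
  maxUpdate: 30%   # at most 30% of matching clusters are updated at a time
  helmCharts:
  - repositoryURL: https://kyverno.github.io/kyverno/
    repositoryName: kyverno
    chartName: kyverno/kyverno
    chartVersion: v3.1.1
    releaseName: kyverno-latest
    releaseNamespace: kyverno
    helmChartAction: Install
```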
+The validateHealths property defines a set of Lua functions that Sveltos executes against the managed cluster to assess the health and status of the add-ons and applications specified in the ClusterProfile. These Lua functions act as validation checks, ensuring that the deployed add-ons and applications are functioning properly and aligned with the desired state. By executing these functions, Sveltos proactively identifies any potential issues or misconfigurations that could arise, maintaining the overall health and stability of the managed cluster.
+The ValidateHealths property accepts a slice of Lua functions, where each function encapsulates a specific validation check. These functions can access the managed cluster's state to perform comprehensive checks on the add-ons and applications. The results of the validation checks are aggregated and reported back to Sveltos, providing valuable insights into the health and status of the managed cluster's components.
+Lua's scripting capabilities offer flexibility in defining complex validation logic tailored to specific add-ons or applications.
+Please refer to this section for more information.
+Consider a scenario where a new cluster with the label env:prod is created. The following instructions guide Sveltos to:
+apiVersion: config.projectsveltos.io/v1alpha1
+kind: ClusterProfile
+metadata:
+ name: kyverno
+spec:
+ clusterSelector: env=prod
+ helmCharts:
+ - repositoryURL: https://kyverno.github.io/kyverno/
+ repositoryName: kyverno
+ chartName: kyverno/kyverno
+ chartVersion: v3.0.1
+ releaseName: kyverno-latest
+ releaseNamespace: kyverno
+ helmChartAction: Install
+ validateHealths:
+ - name: deployment-health
+ featureID: Helm
+ group: "apps"
+ version: "v1"
+ kind: "Deployment"
+ namespace: kyverno
+ script: |
+ function evaluate()
+ hs = {}
+ hs.healthy = false
+ hs.message = "available replicas not matching requested replicas"
+ if obj.status ~= nil then
+ if obj.status.availableReplicas ~= nil then
+ if obj.status.availableReplicas == obj.spec.replicas then
+ hs.healthy = true
+ end
+ end
+ end
+ return hs
+ end
+
The templateResourceRefs property specifies a collection of resources to be gathered from the management cluster. The values extracted from these resources will be utilized to instantiate templates embedded within referenced PolicyRefs and Helm charts.
Refer to the template section for more info and examples.
The dependsOn property specifies a list of other ClusterProfiles that this instance relies on. In any managed cluster that matches this ClusterProfile, the add-ons and applications defined in this instance will only be deployed after all add-ons and applications in the designated dependency ClusterProfiles have been successfully deployed.
For example, clusterprofile-a can depend on clusterprofile-b. This implies that any Helm charts or raw YAML files associated with clusterprofile-a will not be deployed until all add-ons and applications specified in clusterprofile-b have been successfully provisioned.
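A hedged sketch of the dependent profile: clusterprofile-a lists clusterprofile-b as a prerequisite (the referenced ConfigMap name is hypothetical):

```yaml
apiVersion: config.projectsveltos.io/v1alpha1
kind: ClusterProfile
metadata:
  name: clusterprofile-a
spec:
  clusterSelector: env=prod
  dependsOn:
  - clusterprofile-b   # deployed only after clusterprofile-b's add-ons are ready
  policyRefs:
  - kind: ConfigMap
    name: my-app-config   # hypothetical ConfigMap holding raw YAML
    namespace: default
```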
A ClusterProfile can have a combination of Helm charts, raw YAML/JSON, and Kustomize configurations.
+Consider a scenario where you want to utilize Kyverno to prevent the deployment of images with the 'latest' tag1. To achieve this, you can create a ClusterProfile that:
+Download the Kyverno policy and create a ConfigMap containing the policy within the management cluster.
+$ wget https://raw.githubusercontent.com/kyverno/policies/main/best-practices/disallow-latest-tag/disallow-latest-tag.yaml
+$ kubectl create configmap disallow-latest-tag --from-file disallow-latest-tag.yaml
+
To deploy Kyverno and a ClusterPolicy across all managed clusters matching the Sveltos label selector env=fv, utilize the below ClusterProfile.
+ apiVersion: config.projectsveltos.io/v1alpha1
+ kind: ClusterProfile
+ metadata:
+ name: kyverno
+ spec:
+ clusterSelector: env=fv
+ helmCharts:
+ - chartName: kyverno/kyverno
+ chartVersion: v3.0.1
+ helmChartAction: Install
+ releaseName: kyverno-latest
+ releaseNamespace: kyverno
+ repositoryName: kyverno
+ repositoryURL: https://kyverno.github.io/kyverno/
+ policyRefs:
+ - kind: ConfigMap
+ name: disallow-latest-tag
+ namespace: default
+
The ':latest' tag is mutable and can lead to unexpected errors if the image changes. A best practice is to use an immutable tag that maps to a specific version of an application Pod. ↩
+Sveltos can seamlessly integrate with Flux to automatically deploy YAML manifests stored in a Git repository or a Bucket. This powerful combination allows you to manage Kubernetes configurations in a central location and leverage Sveltos to target deployments across clusters.
+Imagine a repository like this containing a nginx-ingress directory with all the YAML needed to deploy Nginx2.
+Below, we demonstrate how to leverage Flux and Sveltos to automatically perform the deployment.
+Install and run Flux in the management cluster and configure it to synchronise the Git repository containing the Nginx manifests. More information about the Flux installation can be found here.
+Use a GitRepository resource similar to the below.
+apiVersion: source.toolkit.fluxcd.io/v1
+kind: GitRepository
+metadata:
+ name: flux-system
+ namespace: flux-system
+spec:
+ interval: 1m0s
+ ref:
+ branch: main
+ secretRef:
+ name: flux-system
+ timeout: 60s
+ url: https://github.com/gianlucam76/yaml_flux.git
+
Define a Sveltos ClusterProfile referencing the flux-system GitRepository and specify the nginx-ingress directory as the source of the deployment.
+apiVersion: config.projectsveltos.io/v1alpha1
+kind: ClusterProfile
+metadata:
+ name: deploy-nginx-ingress
+spec:
+ clusterSelector: env=fv
+ policyRefs:
+ - kind: GitRepository
+ name: flux-system
+ namespace: flux-system
+ path: nginx-ingress
+
This ClusterProfile targets clusters with the env=fv label and fetches relevant deployment information from the nginx-ingress directory within the flux-system Git repository managed by Flux.
+$ sveltosctl show addons
++-----------------------------+----------------------------------------------+-----------+---------------------------------------+---------+-------------------------------+-------------------------------------+
+| CLUSTER | RESOURCE TYPE | NAMESPACE | NAME | VERSION | TIME | PROFILES |
++-----------------------------+----------------------------------------------+-----------+---------------------------------------+---------+-------------------------------+-------------------------------------+
+| default/clusterapi-workload | :ConfigMap | default | nginx-ingress-leader | N/A | 2024-03-23 11:43:10 +0100 CET | ClusterProfile/deploy-nginx-ingress |
+| default/clusterapi-workload | rbac.authorization.k8s.io:ClusterRole | | nginx-stable-nginx-ingress | N/A | 2024-03-23 11:43:10 +0100 CET | ClusterProfile/deploy-nginx-ingress |
+| default/clusterapi-workload | rbac.authorization.k8s.io:RoleBinding | default | nginx-stable-nginx-ingress | N/A | 2024-03-23 11:43:10 +0100 CET | ClusterProfile/deploy-nginx-ingress |
+| default/clusterapi-workload | apps:Deployment | default | nginx-stable-nginx-ingress-controller | N/A | 2024-03-23 11:43:10 +0100 CET | ClusterProfile/deploy-nginx-ingress |
+| default/clusterapi-workload | :ServiceAccount | default | nginx-stable-nginx-ingress | N/A | 2024-03-23 11:43:10 +0100 CET | ClusterProfile/deploy-nginx-ingress |
+| default/clusterapi-workload | :ConfigMap | default | nginx-stable-nginx-ingress | N/A | 2024-03-23 11:43:10 +0100 CET | ClusterProfile/deploy-nginx-ingress |
+| default/clusterapi-workload | rbac.authorization.k8s.io:ClusterRoleBinding | | nginx-stable-nginx-ingress | N/A | 2024-03-23 11:43:10 +0100 CET | ClusterProfile/deploy-nginx-ingress |
+| default/clusterapi-workload | rbac.authorization.k8s.io:Role | default | nginx-stable-nginx-ingress | N/A | 2024-03-23 11:43:10 +0100 CET | ClusterProfile/deploy-nginx-ingress |
+| default/clusterapi-workload | :Service | default | nginx-stable-nginx-ingress-controller | N/A | 2024-03-23 11:43:10 +0100 CET | ClusterProfile/deploy-nginx-ingress |
+| default/clusterapi-workload | networking.k8s.io:IngressClass | | nginx | N/A | 2024-03-23 11:43:10 +0100 CET | ClusterProfile/deploy-nginx-ingress |
++-----------------------------+----------------------------------------------+-----------+---------------------------------------+---------+-------------------------------+-------------------------------------+
+
Install and run Flux in your management cluster and configure it to synchronise the Git repository containing the Kyverno manifests.
+Use a GitRepository resource similar to the below.
+apiVersion: source.toolkit.fluxcd.io/v1
+kind: GitRepository
+metadata:
+ name: flux-system
+ namespace: flux-system
+spec:
+ interval: 1m0s
+ ref:
+ branch: main
+ secretRef:
+ name: flux-system
+ timeout: 60s
+ url: https://github.com/gianlucam76/yaml_flux.git
+
Define a ClusterProfile to deploy the Kyverno helm chart.
+apiVersion: config.projectsveltos.io/v1alpha1
+kind: ClusterProfile
+metadata:
+ name: deploy-kyverno
+spec:
+ clusterSelector: env=fv
+ syncMode: Continuous
+ helmCharts:
+ - repositoryURL: https://kyverno.github.io/kyverno/
+ repositoryName: kyverno
+ chartName: kyverno/kyverno
+ chartVersion: v3.1.4
+ releaseName: kyverno-latest
+ releaseNamespace: kyverno
+ helmChartAction: Install
+
Define a Sveltos ClusterProfile referencing the flux-system GitRepository and specifying the kyverno directory as the source of the deployment.
+This directory contains a list of Kyverno ClusterPolicies.
+apiVersion: config.projectsveltos.io/v1alpha1
+kind: ClusterProfile
+metadata:
+ name: deploy-kyverno-policies
+spec:
+ clusterSelector: env=fv
+ policyRefs:
+ - kind: GitRepository
+ name: flux-system
+ namespace: flux-system
+ path: kyverno
+ dependsOn:
+ - deploy-kyverno
+
This ClusterProfile targets clusters with the env=fv label and fetches relevant deployment information from the kyverno directory within the flux-system Git repository managed by Flux.
+The Kyverno Helm chart and all the Kyverno policies contained in the Git repository under the kyverno directory are deployed:
+$ sveltosctl show addons
++-----------------------------+--------------------------+-----------+---------------------------+---------+-------------------------------+----------------------------------------+
+| CLUSTER | RESOURCE TYPE | NAMESPACE | NAME | VERSION | TIME | PROFILES |
++-----------------------------+--------------------------+-----------+---------------------------+---------+-------------------------------+----------------------------------------+
+| default/clusterapi-workload | helm chart | kyverno | kyverno-latest | 3.1.4 | 2024-03-23 11:39:30 +0100 CET | ClusterProfile/deploy-kyverno |
+| default/clusterapi-workload | kyverno.io:ClusterPolicy | | restrict-image-registries | N/A | 2024-03-23 11:40:11 +0100 CET | ClusterProfile/deploy-kyverno-policies |
+| default/clusterapi-workload | kyverno.io:ClusterPolicy | | disallow-latest-tag | N/A | 2024-03-23 11:40:11 +0100 CET | ClusterProfile/deploy-kyverno-policies |
+| default/clusterapi-workload | kyverno.io:ClusterPolicy | | require-ro-rootfs | N/A | 2024-03-23 11:40:11 +0100 CET | ClusterProfile/deploy-kyverno-policies |
++-----------------------------+--------------------------+-----------+---------------------------+---------+-------------------------------+----------------------------------------+
+
The content within the Git repository or other sources referenced by a Sveltos ClusterProfile can be templates1. To enable templating, annotate the referenced GitRepository instance with "projectsveltos.io/template: true".

When Sveltos processes a template, it fetches the relevant resources from the management cluster (for example, the ClusterAPI Cluster instance representing the managed cluster), uses their data to instantiate the template, and then deploys the instantiated resources to the matching managed clusters.
+This allows dynamic deployment customisation based on the specific characteristics of the clusters, further enhancing flexibility and automation.
+Let's try it out! The content in the "template" directory of this repository serves as the perfect example.
+# Sveltos will instantiate this template before deploying to matching managed cluster
# Sveltos will get the ClusterAPI Cluster instance representing the managed
# cluster, and use that resource data to instantiate this ConfigMap before
+# deploying it
+apiVersion: v1
+kind: ConfigMap
+metadata:
+ name: {{ .Cluster.metadata.name }}
+ namespace: default
+data:
+ controlPlaneEndpoint: "{{ .Cluster.spec.controlPlaneEndpoint.host }}:{{ .Cluster.spec.controlPlaneEndpoint.port }}"
+
Add the projectsveltos.io/template: "true" annotation to the GitRepository resource created earlier.
+apiVersion: source.toolkit.fluxcd.io/v1
+kind: GitRepository
+metadata:
+ name: flux-system
+ namespace: flux-system
+ annotations:
+ projectsveltos.io/template: "true"
+spec:
+ interval: 1m0s
+ ref:
+ branch: main
+ secretRef:
+ name: flux-system
+ timeout: 60s
+ url: https://github.com/gianlucam76/yaml_flux.git
+
apiVersion: config.projectsveltos.io/v1alpha1
+kind: ClusterProfile
+metadata:
+ name: flux-template-example
+spec:
+ clusterSelector: env=fv
+ policyRefs:
+ - kind: GitRepository
+ name: flux-system
+ namespace: flux-system
+ path: template
+
The ClusterProfile will use the information from the "Cluster" resource in the management cluster to populate the template and deploy it.
+An example of a deployed ConfigMap in the managed cluster can be found below.
+apiVersion: v1
+data:
+ controlPlaneEndpoint: 172.18.0.4:6443
+kind: ConfigMap
+metadata:
+ ...
+ name: clusterapi-workload
+ namespace: default
+ ...
+
Remember to adapt the provided resources to your specific repository structure, cluster configuration, and desired templating logic.
The ClusterProfile spec.helmCharts can list a number of Helm charts to be deployed to the managed clusters matching a specific label selector.
+Please note: Sveltos will deploy the Helm charts in the exact order they are defined (top-down approach).
+apiVersion: config.projectsveltos.io/v1alpha1
+kind: ClusterProfile
+metadata:
+ name: kyverno
+spec:
+ clusterSelector: env=prod
+ helmCharts:
+ - repositoryURL: https://kyverno.github.io/kyverno/
+ repositoryName: kyverno
+ chartName: kyverno/kyverno
+ chartVersion: v3.1.1
+ releaseName: kyverno-latest
+ releaseNamespace: kyverno
+ helmChartAction: Install
+
In the above YAML definition, we install Kyverno on a managed cluster with the label selector set to env=prod.
+apiVersion: config.projectsveltos.io/v1alpha1
+kind: ClusterProfile
+metadata:
+ name: prometheus-grafana
+spec:
+ clusterSelector: env=fv
+ helmCharts:
+ - repositoryURL: https://prometheus-community.github.io/helm-charts
+ repositoryName: prometheus-community
+ chartName: prometheus-community/prometheus
+ chartVersion: 23.4.0
+ releaseName: prometheus
+ releaseNamespace: prometheus
+ helmChartAction: Install
+ - repositoryURL: https://grafana.github.io/helm-charts
+ repositoryName: grafana
+ chartName: grafana/grafana
+ chartVersion: 6.58.9
+ releaseName: grafana
+ releaseNamespace: grafana
+ helmChartAction: Install
+
In the above YAML definition, we first install the Prometheus community Helm chart and afterwards the Grafana Helm chart. The two defined Helm charts will get deployed on a managed cluster with the label selector set to env=fv.
+apiVersion: config.projectsveltos.io/v1alpha1
+kind: ClusterProfile
+metadata:
+ name: kyverno
+spec:
+ clusterSelector: env=fv
+ syncMode: Continuous
+ helmCharts:
+ - repositoryURL: https://kyverno.github.io/kyverno/
+ repositoryName: kyverno
+ chartName: kyverno/kyverno
+ chartVersion: v3.1.1
+ releaseName: kyverno-latest
+ releaseNamespace: kyverno
+ helmChartAction: Install
+ values: |
+ admissionController:
+ replicas: 1
+
Sveltos allows you to manage Helm chart values using ConfigMaps/Secrets.
For instance, we can create a file cleanup-controller.yaml with the following content:
+cleanupController:
+ livenessProbe:
+ httpGet:
+ path: /health/liveness
+ port: 9443
+ scheme: HTTPS
+ initialDelaySeconds: 16
+ periodSeconds: 31
+ timeoutSeconds: 5
+ failureThreshold: 2
+ successThreshold: 1
+
then create a ConfigMap with it:
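A plausible form of the creation command, with the ConfigMap name and namespace matching the valuesFrom reference used in the ClusterProfile below, is:

```shell
# hypothetical invocation: name/namespace must match the valuesFrom entry
kubectl create configmap cleanup-controller --from-file=cleanup-controller.yaml -n default
```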
We can then create another file admission_controller.yaml with the following content:
+admissionController:
+ readinessProbe:
+ httpGet:
+ path: /health/readiness
+ port: 9443
+ scheme: HTTPS
+ initialDelaySeconds: 6
+ periodSeconds: 11
+ timeoutSeconds: 5
+ failureThreshold: 6
+ successThreshold: 1
+
then create a ConfigMap with it:
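Again, a plausible creation command, with the name matching the admission-controller reference used in the ClusterProfile below:

```shell
# hypothetical invocation: name/namespace must match the valuesFrom entry
kubectl create configmap admission-controller --from-file=admission_controller.yaml -n default
```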
Within your Sveltos ClusterProfile YAML, define the helmCharts section. Here, you specify the Helm chart details and leverage valuesFrom to reference the ConfigMaps.
This injects the probe configurations from the ConfigMaps into the Helm chart values during deployment.
+apiVersion: config.projectsveltos.io/v1alpha1
+kind: ClusterProfile
+metadata:
+ name: kyverno
+spec:
+ clusterSelector: env=fv
+ syncMode: Continuous
+ helmCharts:
+ - repositoryURL: https://kyverno.github.io/kyverno/
+ repositoryName: kyverno
+ chartName: kyverno/kyverno
+ chartVersion: v3.1.1
+ releaseName: kyverno-latest
+ releaseNamespace: kyverno
+ helmChartAction: Install
+ values: |
+ admissionController:
+ replicas: 1
+ valuesFrom:
+ - kind: ConfigMap
+ name: cleanup-controller
+ namespace: default
+ - kind: ConfigMap
+ name: admission-controller
+ namespace: default
+
Both the values section and the content stored in referenced ConfigMaps and Secrets can be written as templates.
Sveltos will instantiate these templates using resources in the management cluster. Finally, Sveltos deploys the Helm chart with the final, resolved values.
+apiVersion: config.projectsveltos.io/v1alpha1
+kind: ClusterProfile
+metadata:
+ name: deploy-calico
+spec:
+ clusterSelector: env=prod
+ helmCharts:
+ - repositoryURL: https://projectcalico.docs.tigera.io/charts
+ repositoryName: projectcalico
+ chartName: projectcalico/tigera-operator
+ chartVersion: v3.24.5
+ releaseName: calico
+ releaseNamespace: tigera-operator
+ helmChartAction: Install
+ values: |
+ installation:
+ calicoNetwork:
+ ipPools:
+ {{ range $cidr := .Cluster.spec.clusterNetwork.pods.cidrBlocks }}
+ - cidr: {{ $cidr }}
+ encapsulation: VXLAN
+ {{ end }}
+
apiVersion: config.projectsveltos.io/v1alpha1
+kind: ClusterProfile
+metadata:
+ name: deploy-cilium-v1-26
+spec:
+ clusterSelector: env=fv
+ helmCharts:
+ - chartName: cilium/cilium
+ chartVersion: 1.12.12
+ helmChartAction: Install
+ releaseName: cilium
+ releaseNamespace: kube-system
+ repositoryName: cilium
+ repositoryURL: https://helm.cilium.io/
+ values: |
+ k8sServiceHost: "{{ .Cluster.spec.controlPlaneEndpoint.host }}"
+ k8sServicePort: "{{ .Cluster.spec.controlPlaneEndpoint.port }}"
+ hubble:
+ enabled: false
+ nodePort:
+ enabled: true
+ kubeProxyReplacement: strict
+ operator:
+ replicas: 1
+ updateStrategy:
+ rollingUpdate:
+ maxSurge: 0
+ maxUnavailable: 1
+
For OCI charts, please note that the chartName needs to contain the whole URL.
+apiVersion: config.projectsveltos.io/v1alpha1
+kind: ClusterProfile
+metadata:
+ name: vault
+spec:
+ clusterSelector: env=fv
+ syncMode: Continuous
+ helmCharts:
+ - repositoryURL: oci://registry-1.docker.io/bitnamicharts/vault
+ repositoryName: oci-vault
+ chartName: oci://registry-1.docker.io/bitnamicharts/vault
+ chartVersion: 0.7.2
+ releaseName: vault
+ releaseNamespace: vault
+ helmChartAction: Install
+
The below YAML snippet demonstrates how Sveltos utilizes a Flux GitRepository. The git repository, located at https://github.com/gianlucam76/kustomize, comprises multiple kustomize directories. In this example, Sveltos executes Kustomize on the helloWorld
directory and deploys the Kustomize output to the eng
namespace for every managed cluster matching the Sveltos clusterSelector.
apiVersion: config.projectsveltos.io/v1alpha1
+kind: ClusterProfile
+metadata:
+ name: flux-system
+spec:
+ clusterSelector: env=fv
+ syncMode: Continuous
+ kustomizationRefs:
+ - namespace: flux-system
+ name: flux-system
+ kind: GitRepository
+ path: ./helloWorld/
+ targetNamespace: eng
+
apiVersion: source.toolkit.fluxcd.io/v1
+kind: GitRepository
+metadata:
+ name: flux-system
+ namespace: flux-system
+spec:
+ interval: 1m0s
+ ref:
+ branch: main
+ secretRef:
+ name: flux-system
+ timeout: 60s
+ url: ssh://git@github.com/gianlucam76/kustomize
+
$ sveltosctl show addons
++-------------------------------------+-----------------+-----------+----------------+---------+-------------------------------+------------------+
+| CLUSTER | RESOURCE TYPE | NAMESPACE | NAME | VERSION | TIME | CLUSTER PROFILES |
++-------------------------------------+-----------------+-----------+----------------+---------+-------------------------------+------------------+
+| default/sveltos-management-workload | apps:Deployment | eng | the-deployment | N/A | 2023-05-16 00:48:11 -0700 PDT | flux-system |
+| default/sveltos-management-workload | :Service | eng | the-service | N/A | 2023-05-16 00:48:11 -0700 PDT | flux-system |
+| default/sveltos-management-workload | :ConfigMap | eng | the-map | N/A | 2023-05-16 00:48:11 -0700 PDT | flux-system |
++-------------------------------------+-----------------+-----------+----------------+---------+-------------------------------+------------------+
+
The Kustomize build process can generate parameterized YAML manifests. Sveltos can then instantiate these manifests using values provided in two locations:
+spec.kustomizationRefs.Values
: This field defines a list of key-value pairs directly within the ClusterProfile. These values are readily available for Sveltos to substitute into the template.spec.kustomizationRefs.ValuesFrom
: This field allows referencing external sources like ConfigMaps or Secrets. Their data sections contain key-value pairs that Sveltos can inject during template instantiation.Consider a Kustomize build output that includes a template for a deployment manifest:
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+ name: myapp-deployment
+ namespace: test
+ labels:
+ region: {{ default "west" .Region }} # Placeholder for region with default value "west"
+spec:
+ ...
+ image: nginx:{{ .Version }} # Placeholder for image version
+
Now, imagine Sveltos receives a ClusterProfile containing the following key-value pairs:
+apiVersion: config.projectsveltos.io/v1alpha1
+kind: ClusterProfile
+metadata:
+ name: flux-system
+spec:
+ kustomizationRefs:
+ - deploymentType: Remote
+ kind: GitRepository
+ name: flux-system
+ namespace: flux-system
+ path: ./template/helloWorld/
+ targetNamespace: eng
+ values:
+ Region: east
+ Version: v1.2.0
+
During deployment, Sveltos injects these values into the template, replacing the placeholders. This process transforms the template into the following concrete deployment manifest:
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+ name: myapp-deployment
+ namespace: test
+ labels:
+ region: east # Replaced value
+spec:
+ ...
+ image: nginx:v1.2.0 # Replaced value
+
Sveltos offers the capability to define key-value pairs where the value itself can be another template. This nested template can reference resources present in the management cluster.
+For example, consider the following key-value pair within a ClusterProfile:
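The referenced pair, taken from the complete example later on this page, looks like this:

```yaml
values:
  Region: '{{ index .Cluster.metadata.labels "region" }}'
```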
In this scenario, the value of Region isn't a static string, but a template referencing the .Cluster.metadata.labels.region property. During deployment, Sveltos retrieves information from the management cluster's Cluster instance (represented here as .Cluster). It then extracts the value associated with the "region" label using the index function and assigns it to the Region key.
+This mechanism allows you to dynamically populate values based on the management cluster's configuration, ensuring deployments adapt to specific environments.
+This summary outlines how Sveltos manages deployments using Kustomize and key-value pairs:
Value Collection: Sveltos gathers key-value pairs for deployment customization from two sources: the Values field (inline key-value pairs in the ClusterProfile) and the ValuesFrom field (referenced ConfigMaps or Secrets).
+Optional: Nested Template Processing (Advanced Usage): For advanced scenarios, a key-value pair's value itself can be a template. Sveltos evaluates these nested templates using data available in the context, such as information from the management cluster. This allows dynamic value construction based on the management cluster's configuration.
+This process ensures that deployments are customized with appropriate values based on the ClusterProfile configuration and, optionally, the management cluster's state.
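The examples on this page use the inline Values field. A minimal sketch of the ValuesFrom variant is shown below; the ConfigMap named kustomize-values is hypothetical, and the sketch assumes its data section holds the same Region/Version pairs the inline example defines:

```yaml
kustomizationRefs:
- deploymentType: Remote
  kind: GitRepository
  name: flux-system
  namespace: flux-system
  path: ./template/helloWorld/
  targetNamespace: eng
  valuesFrom:
  - kind: ConfigMap
    name: kustomize-values   # hypothetical ConfigMap; its data section holds the key-value pairs
    namespace: default
```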
This is a fully working example:

- the content of the template/helloWorld directory is a template
- the key-value pairs (the Values field) are expressed as templates, so Sveltos will instantiate them using the Cluster instance

apiVersion: config.projectsveltos.io/v1alpha1
+kind: ClusterProfile
+metadata:
+ name: flux-system
+spec:
+ clusterSelector: env=fv
+ kustomizationRefs:
+ - deploymentType: Remote
+ kind: GitRepository
+ name: flux-system
+ namespace: flux-system
+ path: ./template/helloWorld/
+ targetNamespace: eng
+ values:
+ Region: '{{ index .Cluster.metadata.labels "region" }}'
+ Version: v1.2.0
+ reloader: false
+ stopMatchingBehavior: WithdrawPolicies
+ syncMode: Continuous
+
with GitRepository
+apiVersion: source.toolkit.fluxcd.io/v1
+kind: GitRepository
+metadata:
+ name: flux-system
+ namespace: flux-system
+ ...
+spec:
+ interval: 1m0s
+ ref:
+ branch: main
+ secretRef:
+ name: flux-system
+ timeout: 60s
+ url: https://github.com/gianlucam76/kustomize.git
+
sveltosctl show addons
++-----------------------------+-----------------+-----------+----------------+---------+--------------------------------+----------------------------+
+| CLUSTER | RESOURCE TYPE | NAMESPACE | NAME | VERSION | TIME | PROFILES |
++-----------------------------+-----------------+-----------+----------------+---------+--------------------------------+----------------------------+
+| default/clusterapi-workload | apps:Deployment | eng | the-deployment | N/A | 2024-05-01 11:43:54 +0200 CEST | ClusterProfile/flux-system |
+| default/clusterapi-workload | :Service | eng | the-service | N/A | 2024-05-01 11:43:54 +0200 CEST | ClusterProfile/flux-system |
+| default/clusterapi-workload | :ConfigMap | eng | the-map | N/A | 2024-05-01 11:43:54 +0200 CEST | ClusterProfile/flux-system |
++-----------------------------+-----------------+-----------+----------------+---------+--------------------------------+----------------------------+
+
If you have directories containing Kustomize resources, you can include them in a ConfigMap (or a Secret) and have a ClusterProfile reference it.
+In this example, we are cloning the git repository https://github.com/gianlucam76/kustomize
locally, and then we create a kustomize.tar.gz
with the content of the helloWorldWithOverlays directory.
$ git clone git@github.com:gianlucam76/kustomize.git
+
+$ tar -czf kustomize.tar.gz -C kustomize/helloWorldWithOverlays .
+
+$ kubectl create configmap kustomize --from-file=kustomize.tar.gz
+
The below ClusterProfile uses the Kustomize SDK to build all the resources needed for deployment. It then deploys them in the production namespace of the managed clusters matching the Sveltos clusterSelector env=fv.
apiVersion: config.projectsveltos.io/v1alpha1
+kind: ClusterProfile
+metadata:
+ name: kustomize-with-configmap
+spec:
+ clusterSelector: env=fv
+ syncMode: Continuous
+ kustomizationRefs:
+ - namespace: default
+ name: kustomize
+ kind: ConfigMap
+ path: ./overlays/production/
+ targetNamespace: production
+
$ sveltosctl show addons
++-------------------------------------+-----------------+------------+---------------------------+---------+-------------------------------+--------------------------+
+| CLUSTER | RESOURCE TYPE | NAMESPACE | NAME | VERSION | TIME | CLUSTER PROFILES |
++-------------------------------------+-----------------+------------+---------------------------+---------+-------------------------------+--------------------------+
+| default/sveltos-management-workload | apps:Deployment | production | production-the-deployment | N/A | 2023-05-16 00:59:13 -0700 PDT | kustomize-with-configmap |
+| default/sveltos-management-workload | :Service | production | production-the-service | N/A | 2023-05-16 00:59:13 -0700 PDT | kustomize-with-configmap |
+| default/sveltos-management-workload | :ConfigMap | production | production-the-map | N/A | 2023-05-16 00:59:13 -0700 PDT | kustomize-with-configmap |
++-------------------------------------+-----------------+------------+---------------------------+---------+-------------------------------+--------------------------+
+
Profile is the CustomResourceDefinition used to instruct Sveltos which add-ons to deploy on a set of clusters.
+Profile is a namespace-scoped resource. It can only match clusters and reference resources within its own namespace.
The clusterSelector field selects a set of managed clusters where the listed add-ons and applications will be deployed. Only clusters in the same namespace can be a match.

The helmCharts field consists of a list of Helm charts to be deployed to the clusters matching clusterSelector:
+ helmCharts:
+ - repositoryURL: https://kyverno.github.io/kyverno/
+ repositoryName: kyverno
+ chartName: kyverno/kyverno
+ chartVersion: v3.0.1
+ releaseName: kyverno-latest
+ releaseNamespace: kyverno
+ helmChartAction: Install
+
policyRefs field references a list of ConfigMaps/Secrets, each containing Kubernetes resources to be deployed in the clusters matching clusterSelector.
This field is a slice of PolicyRef structs. Each PolicyRef has the following fields:
+policyRefs:
+- kind: Secret
+ name: my-secret-1
+ namespace: my-namespace-1
+ deploymentType: Local
+- kind: ConfigMap
+ name: my-configmap-1
+ namespace: my-namespace-1
+ deploymentType: Remote
+
kustomizationRefs field is a list of sources containing kustomization files. Resources will be deployed in the clusters matching the clusterSelector specified.
+This field is a slice of KustomizationRef structs. Each KustomizationRef has the following fields:
Kind: The kind of the referenced resource. The supported kinds are the Flux sources GitRepository, OCIRepository, and Bucket, plus ConfigMap and Secret.
+Namespace: The namespace of the resource being referenced. This field is automatically set to the namespace of the Profile instance. In other words, a Profile instance can only reference resources that are within its own namespace.
The syncMode field can be set to one of: OneTime, Continuous, ContinuousWithDriftDetection, or DryRun.
+Let's take a closer look at the OneTime syncMode option. Once you deploy a Profile with a OneTime configuration, Sveltos will check all of your clusters for a match with the clusterSelector. Any matching clusters will have the resources specified in the Profile deployed. However, if you make changes to the Profile later on, those changes will not be automatically deployed to already-matching clusters.
+Now, if you're looking for real-time deployment and updates, the Continuous syncMode is the way to go. With Continuous, any changes made to the Profile will be immediately reconciled into matching clusters. This means that you can add new features, update existing ones, and remove them as necessary, all without lifting a finger. Sveltos will deploy, update, or remove resources in matching clusters as needed, making your life as a Kubernetes admin a breeze.
+ContinuousWithDriftDetection instructs Sveltos to monitor the state of managed clusters and detect a configuration drift for any of the resources deployed because of that Profile. +When Sveltos detects a configuration drift, it automatically re-syncs the cluster state back to the state described in the management cluster. +To know more about configuration drift detection, refer to this section.
+Imagine you're about to make some important changes to your Profile, but you're not entirely sure what the results will be. You don't want to risk causing any unwanted side effects, right? Well, that's where the DryRun syncMode configuration comes in. By deploying your Profile with this configuration, you can launch a simulation of all the operations that would normally be executed in a live run. The best part? No actual changes will be made to the matching clusters during this dry run workflow, so you can rest easy knowing that there won't be any surprises. +To know more about dry run, refer to this section.
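As a sketch, enabling a dry run is a single field; the Profile below mirrors the Kyverno examples used elsewhere on this page:

```yaml
apiVersion: config.projectsveltos.io/v1alpha1
kind: Profile
metadata:
  name: kyverno-dryrun
  namespace: eng
spec:
  clusterSelector: env=prod
  syncMode: DryRun   # simulate the deployment; no changes are applied to matching clusters
  helmCharts:
  - repositoryURL: https://kyverno.github.io/kyverno/
    repositoryName: kyverno
    chartName: kyverno/kyverno
    chartVersion: v3.0.1
    releaseName: kyverno-latest
    releaseNamespace: kyverno
    helmChartAction: Install
```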
The stopMatchingBehavior field specifies the behavior when a cluster no longer matches a Profile. By default, all Kubernetes resources and Helm charts deployed to the cluster will be removed. However, if stopMatchingBehavior is set to LeavePolicies, any policies deployed by the Profile will remain in the cluster.
+For instance
+apiVersion: config.projectsveltos.io/v1alpha1
+kind: Profile
+metadata:
+ name: kyverno
+ namespace: eng
+spec:
+ stopMatchingBehavior: WithdrawPolicies
+ clusterSelector: env=prod
+ helmCharts:
+ - repositoryURL: https://kyverno.github.io/kyverno/
+ repositoryName: kyverno
+ chartName: kyverno/kyverno
+ chartVersion: v3.0.1
+ releaseName: kyverno-latest
+ releaseNamespace: kyverno
+ helmChartAction: Install
+
When a cluster matches the Profile, the Kyverno Helm chart will be deployed in that cluster. If the cluster's labels are subsequently modified and the cluster no longer matches the Profile, the Kyverno Helm chart will be uninstalled. However, if the stopMatchingBehavior property is set to LeavePolicies, Sveltos will retain the Kyverno Helm chart in the cluster.
+The reloader property determines whether rolling upgrades should be triggered for Deployment, StatefulSet, or DaemonSet instances managed by Sveltos and associated with this Profile when changes are made to mounted ConfigMaps or Secrets. +When set to true, Sveltos automatically initiates rolling upgrades for affected Deployment, StatefulSet, or DaemonSet instances whenever any mounted ConfigMap or Secret is modified. This ensures that the latest configuration updates are applied to the respective workloads.
+Please refer to this section for more information.
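The reloader behavior is a single field. A minimal sketch follows; the referenced ConfigMap is illustrative:

```yaml
apiVersion: config.projectsveltos.io/v1alpha1
kind: Profile
metadata:
  name: nginx
  namespace: eng
spec:
  clusterSelector: env=prod
  reloader: true   # trigger rolling upgrades when mounted ConfigMaps/Secrets change
  policyRefs:
  - kind: ConfigMap
    name: nginx        # illustrative ConfigMap containing the workload manifests
    namespace: eng
    deploymentType: Remote
```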
A Profile might match more than one cluster. When a change is made to a Profile, by default all matching clusters are updated concurrently.

The maxUpdate field specifies the maximum number of clusters that can be updated concurrently during an update operation triggered by changes to the Profile's add-ons or applications. The specified value can be an absolute number (e.g., 5) or a percentage of the desired cluster count (e.g., 10%). The default value is 100%, allowing all matching clusters to be updated simultaneously.

For instance, if set to 30%, when modifications are made to the Profile's add-ons or applications, only 30% of matching clusters will be updated concurrently. Updates to the remaining matching clusters will only commence upon successful completion of updates in the initially targeted clusters. This approach ensures a controlled and manageable update process, minimizing potential disruptions to the overall cluster environment.

Please refer to this section for more information.
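A minimal fragment, assuming the rest of the Profile spec is as in the examples above:

```yaml
spec:
  clusterSelector: env=prod
  syncMode: Continuous
  maxUpdate: 30%   # update at most 30% of matching clusters in parallel
```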
+The validateHealths property defines a set of Lua functions that Sveltos executes against the managed cluster to assess the health and status of the add-ons and applications specified in the Profile. These Lua functions act as validation checks, ensuring that the deployed add-ons and applications are functioning properly and aligned with the desired state. By executing these functions, Sveltos proactively identifies any potential issues or misconfigurations that could arise, maintaining the overall health and stability of the managed cluster.
+The ValidateHealths property accepts a slice of Lua functions, where each function encapsulates a specific validation check. These functions can access the managed cluster's state to perform comprehensive checks on the add-ons and applications. The results of the validation checks are aggregated and reported back to Sveltos, providing valuable insights into the health and status of the managed cluster's components.
+Lua's scripting capabilities offer flexibility in defining complex validation logic tailored to specific add-ons or applications.
+Please refer to this section for more information.
+Consider a scenario where a new cluster with the label env:prod is created. The following instructions guide Sveltos to:
+apiVersion: config.projectsveltos.io/v1alpha1
+kind: Profile
+metadata:
+ name: kyverno
+ namespace: eng
+spec:
+ clusterSelector: env=prod
+ helmCharts:
+ - repositoryURL: https://kyverno.github.io/kyverno/
+ repositoryName: kyverno
+ chartName: kyverno/kyverno
+ chartVersion: v3.0.1
+ releaseName: kyverno-latest
+ releaseNamespace: kyverno
+ helmChartAction: Install
+ validateHealths:
+ - name: deployment-health
+ featureID: Helm
+ group: "apps"
+ version: "v1"
+ kind: "Deployment"
+ namespace: kyverno
+ script: |
+ function evaluate()
+ hs = {}
+ hs.healthy = false
+ hs.message = "available replicas not matching requested replicas"
+ if obj.status ~= nil then
+ if obj.status.availableReplicas ~= nil then
+ if obj.status.availableReplicas == obj.spec.replicas then
+ hs.healthy = true
+ end
+ end
+ end
+ return hs
+ end
+
The templateResourceRefs property specifies a collection of resources to be gathered from the management cluster. The values extracted from these resources will be utilized to instantiate templates embedded within referenced PolicyRefs and Helm charts. +Refer to template section for more info and examples.
The dependsOn property specifies a list of other Profiles that this instance relies on. In any managed cluster that matches this Profile, the add-ons and applications defined in this instance will only be deployed after all add-ons and applications in the designated dependency Profiles have been successfully deployed.
For example, profile-a can depend on another profile-b. This implies that any Helm charts or raw YAML files associated with profile-a will not be deployed until all add-ons and applications specified in profile-b have been successfully provisioned.
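A minimal sketch, assuming a second Profile named profile-b exists in the same namespace:

```yaml
apiVersion: config.projectsveltos.io/v1alpha1
kind: Profile
metadata:
  name: profile-a
  namespace: eng
spec:
  clusterSelector: env=prod
  dependsOn:
  - profile-b   # profile-a's add-ons deploy only after profile-b's are provisioned
  helmCharts:
  - repositoryURL: https://kyverno.github.io/kyverno/
    repositoryName: kyverno
    chartName: kyverno/kyverno
    chartVersion: v3.0.1
    releaseName: kyverno-latest
    releaseNamespace: kyverno
    helmChartAction: Install
```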
The ClusterProfile spec.policyRefs is a list of Secrets/ConfigMaps. Both Secrets and ConfigMaps data fields can be a list of key-value pairs. Any key is acceptable, and the value can be multiple objects in YAML or JSON format1.
+To create a Kubernetes Secret that contains the Calico YAMLs and make it usable with Sveltos, utilise the below commands.
+$ wget https://raw.githubusercontent.com/projectcalico/calico/master/manifests/calico.yaml
+
+$ kubectl create secret generic calico --from-file=calico.yaml --type=addons.projectsveltos.io/cluster-profile
+
The commands download the calico.yaml manifest file and then create a generic Kubernetes Secret from it, setting the required type addons.projectsveltos.io/cluster-profile.
Please note: A ClusterProfile can only reference Secrets of type addons.projectsveltos.io/cluster-profile
+The YAML definition below exemplifies a ConfigMap that holds multiple resources2. When a ClusterProfile instance references the ConfigMap, a Namespace
and a Deployment
instance are automatically deployed in any managed cluster that adheres to the ClusterProfile clusterSelector.
apiVersion: v1
+kind: ConfigMap
+metadata:
+ name: nginx
+ namespace: default
+data:
+ namespace.yaml: |
+ kind: Namespace
+ apiVersion: v1
+ metadata:
+ name: nginx
+ deployment.yaml: |
+ apiVersion: apps/v1
+ kind: Deployment
+ metadata:
+ name: nginx-deployment
+ namespace: nginx
+ spec:
+ replicas: 2 # number of pods to run
+ selector:
+ matchLabels:
+ app: nginx
+ template:
+ metadata:
+ labels:
+ app: nginx
+ spec:
+ containers:
+ - name: nginx
+ image: nginx:latest # public image from Docker Hub
+ ports:
+ - containerPort: 80
+
Once the required Kubernetes resources are created/deployed, the below example represents a ClusterProfile resource that references the ConfigMap and the Secret created above.
+apiVersion: config.projectsveltos.io/v1alpha1
+kind: ClusterProfile
+metadata:
+ name: deploy-resources
+spec:
+ clusterSelector: env=fv
+ policyRefs:
+ - name: nginx
+ namespace: default
+ kind: ConfigMap
+ - name: calico
+ namespace: default
+ kind: Secret
+
Note: The namespace definition refers to the namespace where the ConfigMap and the Secret were created in the management cluster. In our example, both resources were created in the default namespace.
When a ClusterProfile references a ConfigMap or a Secret, the kind and name fields are required, while the namespace field is optional. Specifying a namespace uniquely identifies the resource using the tuple namespace, name, and kind, and that resource will be used for all matching clusters.
+If you leave the namespace field empty, Sveltos will search for the ConfigMap or the Secret with the provided name within the namespace of each matching cluster.
+apiVersion: config.projectsveltos.io/v1alpha1
+kind: ClusterProfile
+metadata:
+ name: deploy-resources
+spec:
+ clusterSelector: env=fv
+ policyRefs:
+ - name: nginx
+ kind: ConfigMap
+
Consider the provided ClusterProfile with two matching workload clusters: one in the foo namespace and another in the bar namespace. Sveltos will search for the ConfigMap nginx in the foo namespace for the cluster in the foo namespace, and for a ConfigMap nginx in the bar namespace for the cluster in the bar namespace.
+More ClusterProfile examples can be found here.
+Remember to adapt the provided resources to your specific repository structure, cluster configuration, and desired templating logic.
+A ConfigMap is not designed to hold large chunks of data. The data stored in a ConfigMap cannot exceed 1 MiB. If you need to store settings that are larger than this limit, you may want to consider mounting a volume or use a separate database or file service. ↩
+Another way to create a Kubernetes ConfigMap resource is with the imperative approach. The below command will create the same ConfigMap resource in the management cluster. +
↩

A ClusterProfile might match more than one cluster. When adding or modifying a ClusterProfile, it is helpful to update the matching clusters in a controlled fashion and to validate their health before proceeding.
To support this, Sveltos uses two ClusterProfile Spec fields: MaxUpdate and ValidateHealths.
MaxUpdate indicates the maximum number of clusters that can be updated concurrently. The value can be an absolute number (e.g., 5) or a percentage of the desired managed clusters (e.g., 10%). The default value is set to 100%.

When the field is set to 30% and the list of add-ons/applications in the ClusterProfile changes, only 30% of the matching clusters will be updated in parallel. Only when the updates in these clusters succeed will Sveltos proceed with updating the remaining clusters.
The validateHealths field in a ClusterProfile Spec allows you to specify health validation checks that Sveltos should perform before declaring an update successful. These checks are expressed using the Lua language.

For instance, when deploying Helm charts, it is possible to instruct Sveltos to check the deployments' health (number of active replicas) before declaring the Helm chart deployment successful.
+validateHealths:
+- name: deployment-health
+ featureID: Helm
+ group: "apps"
+ version: "v1"
+ kind: "Deployment"
+ namespace: kyverno
+ script: |
+ function evaluate()
+ hs = {}
+ hs.healthy = false
+ hs.message = "available replicas not matching requested replicas"
+ if obj.status ~= nil then
+ if obj.status.availableReplicas ~= nil then
+ if obj.status.availableReplicas == obj.spec.replicas then
+ hs.healthy = true
+ end
+ end
+ end
+ return hs
+ end
+
The above YAML definition instructs Sveltos to fetch all the deployments in the kyverno namespace. For each of those, the Lua script is evaluated.
The Lua function must be named evaluate. It is passed a single argument, which is an instance of the object being validated (obj). The function must return a struct containing a field healthy, a boolean indicating whether the resource is healthy. The struct can also have an optional field message, which will be reported back by Sveltos if the resource is not healthy.
A rolling update strategy allows you to update your clusters gradually, minimizing downtime and risk. By updating a few clusters at a time, you can identify and resolve any issues before rolling out the update to all of your clusters. Additionally, you can use the ValidateHealths field to ensure that your clusters are healthy before declaring the update successful.
+To use the rolling update strategy, simply set the MaxUpdate
field in the ClusterProfile Spec to the desired number of clusters to update concurrently. You can also use the ValidateHealths
field to specify any health validation checks that you want to perform.
The following ClusterProfile Spec would update a maximum of 30% of matching clusters concurrently, and would check that the number of available replicas for all deployments in the kyverno namespace matches the requested replicas before declaring the update successful.
+apiVersion: config.projectsveltos.io/v1alpha1
+kind: ClusterProfile
+metadata:
+ name: kyverno
+spec:
+ clusterSelector: env=fv
+ syncMode: Continuous
+ maxUpdate: 30%
+ helmCharts:
+ - repositoryURL: https://kyverno.github.io/kyverno/
+ repositoryName: kyverno
+ chartName: kyverno/kyverno
+ chartVersion: v3.0.1
+ releaseName: kyverno-latest
+ releaseNamespace: kyverno
+ helmChartAction: Install
+ values: |
+ admissionController:
+ replicas: 1
+ validateHealths:
+ - name: deployment-health
+ featureID: Helm
+ group: "apps"
+ version: "v1"
+ kind: "Deployment"
+ namespace: kyverno
+ script: |
+ function evaluate()
+ hs = {}
+ hs.healthy = false
+ hs.message = "available replicas not matching requested replicas"
+ if obj.status ~= nil then
+ if obj.status.availableReplicas ~= nil then
+ if obj.status.availableReplicas == obj.spec.replicas then
+ hs.healthy = true
+ end
+ end
+ end
+ return hs
+ end
+
To verify the Lua script without a cluster, you can follow these steps:
1. Navigate to the controllers/health_policies/deployment_health directory: cd controllers/health_policies/deployment_health
2. Create a new directory for your script: mkdir my_script
3. Create a file named lua_policy.lua in the directory you just created, and add your evaluate function to it.
4. Create a file named valid_resource.yaml in the same directory, and add a healthy resource to it. This is a resource that your evaluate function should evaluate to healthy.
5. Create a file named invalid_resource.yaml in the same directory, and add a non-healthy resource to it. This is a resource that your evaluate function should evaluate to not healthy.
6. Run the unit tests: make ut
+ + + + + + + + + + + + + + + + +