Allow token to come from Kubernetes Secret rather than plaintext in Helm values #14

Open · pwhack opened this issue Feb 7, 2025 · 0 comments · May be fixed by #15

pwhack commented Feb 7, 2025

Hello,

Currently (v0.2.0), the chart requires us to specify the value of config.token as a plaintext string, which is a very insecure requirement for organizations that use GitOps and CD tools to deploy charts to clusters. Our InfoSec team forbids us (and I agree with them) from ever committing tokens, passwords, secrets, private certificates, or any other sensitive value to Git. Sensitive values belong in purpose-built secure stores, not in Git.

For Kubernetes deployments, Kubernetes Secrets are our first choice as the source workloads pull sensitive values from, and I think that is the most flexible option for your customers too. Please adapt the chart to allow consuming config.token from a Kubernetes Secret. Ideally the Deployment object would mount the Secret into the pod filesystem rather than exposing it as an environment variable, since environment variable values are visible in pod descriptions and other pod metadata; see the sketch below. That said, an environment variable pulled from a Secret would still be a big step in the right direction.
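To make the file-mount idea concrete, here is a minimal sketch of what the chart's Deployment could add. The Secret name, key, and mount path are my assumptions for illustration, not anything the chart defines today:

```yaml
# Minimal sketch: mount the token Secret as a file instead of an env var.
# Secret name, key, and mount path below are assumptions, not chart values.
spec:
  template:
    spec:
      volumes:
        - name: agent-token
          secret:
            secretName: fivetran-hybrid-agent-token  # assumed Secret name
            items:
              - key: agent-token                     # assumed key within the Secret
                path: token                          # exposed as <mountPath>/token
      containers:
        - name: hd-agent
          volumeMounts:
            - name: agent-token
              mountPath: /etc/fivetran/secrets       # assumed path the agent would read
              readOnly: true
```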

For now, we used the following strategy: render the raw Kubernetes manifests from your Helm chart, modify the ConfigMap so it no longer contains the token key, and modify the Deployment to load the token from a Secret as an environment variable. This worked and didn't require any code changes inside the container.

```bash
helm template hd-agent \
  oci://us-docker.pkg.dev/prod-eng-fivetran-ldp/public-docker-us/helm/hybrid-deployment-agent \
  --namespace fivetran-hybrid-agent \
  --set config.data_volume_pvc=fivetran-agent-pvc \
  --set config.token="leave-this-value-as-is" \
  --version 0.2.0 \
  > plaintext-workaround.yaml

# Drop the plaintext token key from the rendered ConfigMap.
yq --inplace 'del(select(.kind == "ConfigMap" and .metadata.name == "hd-agent-config") | .data.token)' plaintext-workaround.yaml

# Add an env var sourced from a Kubernetes Secret to the agent container.
yq --inplace '(select(.kind == "Deployment" and .metadata.name == "hd-agent") | .spec.template.spec.containers[] | select(.name == "hd-agent") | .env) += [{"name":"token","valueFrom":{"secretKeyRef":{"name":"fivetran-hybrid-agent-token","key":"agent-token"}}}]' plaintext-workaround.yaml
```
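The Secret referenced by that secretKeyRef (created in our case by External Secrets Operator, described further down) is equivalent to a manifest like this; the token value is, of course, a placeholder:

```yaml
# Illustration only: the Secret the patched Deployment consumes.
apiVersion: v1
kind: Secret
metadata:
  name: fivetran-hybrid-agent-token
  namespace: fivetran-hybrid-agent
type: Opaque
stringData:
  agent-token: "<agent token goes here>"  # placeholder; never commit the real value
```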

The resulting ConfigMap object looks like this:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: hd-agent-config
  namespace: fivetran-hybrid-agent
  labels:
    app.kubernetes.io/name: hd-agent-config
    app.kubernetes.io/app: hd-agent
    app.kubernetes.io/part-of: hybrid-deployment
data:
  data_volume_pvc: "fivetran-agent-pvc"
```

And the Deployment object looks like this (truncated for brevity):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hd-agent
  namespace: fivetran-hybrid-agent
  ...
spec:
  ...
  template:
    ...
    spec:
      ...
      containers:
        - name: hd-agent
          ...
          env:
            - name: container_env_type
              value: KUBERNETES
            - name: profile
              value: kubernetes
            - name: namespace
              value: fivetran-hybrid-agent
            - name: release_name
              value: hd-agent
            - name: token
              valueFrom:
                secretKeyRef:
                  name: fivetran-hybrid-agent-token
                  key: agent-token
          envFrom:
            - configMapRef:
                name: hd-agent-config
```

We then deploy plaintext-workaround.yaml via Argo CD (roughly as sketched below) until the Helm chart is modified to consume a Kubernetes Secret.
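For anyone curious, the Argo CD side is unremarkable; the Application looks something like the following, where the repo URL and path are placeholders for our internal manifest repo:

```yaml
# Rough sketch of our Argo CD Application; repoURL and path are placeholders.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: fivetran-hybrid-agent
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://example.com/our-org/k8s-manifests.git  # placeholder repo
    targetRevision: main
    path: fivetran/hybrid-agent  # directory containing plaintext-workaround.yaml
  destination:
    server: https://kubernetes.default.svc
    namespace: fivetran-hybrid-agent
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
```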

For some other background context: we use Terraform to manage a fivetran_hybrid_deployment_agent resource and use its attributes to store the generated token in our cloud secrets manager, so the token is never stored outside secure locations (our Terraform backend uses encrypted storage as well). We then use External Secrets Operator to replicate the secrets manager value into a Kubernetes Secret, roughly as sketched below. This pattern is flexible for us across a variety of vendors and adds some resiliency via store-and-forward during the rare times a cloud provider's secrets manager API has an outage.
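A trimmed-down version of our ExternalSecret, with the store name and remote key changed to stand-ins for our real setup:

```yaml
# Sketch of our ExternalSecret; the SecretStore name and remoteRef key are
# stand-ins for our environment, not anything defined by the Fivetran chart.
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: fivetran-hybrid-agent-token
  namespace: fivetran-hybrid-agent
spec:
  refreshInterval: 1h
  secretStoreRef:
    name: cloud-secrets-manager        # assumed ClusterSecretStore name
    kind: ClusterSecretStore
  target:
    name: fivetran-hybrid-agent-token  # Kubernetes Secret created by ESO
  data:
    - secretKey: agent-token           # key inside the Kubernetes Secret
      remoteRef:
        key: fivetran/hybrid-agent-token  # assumed path in the secrets manager
```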

Thanks for considering this request.
