Currently (v0.2.0), the chart requires us to specify the value of `config.token` as a plaintext string, which is very insecure for organizations using GitOps and CD tools to deploy charts to clusters. Our InfoSec team forbids us (and I agree with them) from ever committing tokens, passwords, secrets, private certs, or any other sensitive value to Git; sensitive values belong in more secure storage than Git.
For Kubernetes deployments, Kubernetes Secrets are our first choice for workloads to pull sensitive values from, and I think this is the most flexible option for your customers too. Please adapt the chart to allow consuming `config.token` from a Kubernetes Secret. Ideally the Deployment object would mount the Secret into the pod filesystem rather than exposing it as an environment variable, since a sensitive environment variable value is visible in pod information (e.g. `kubectl describe pod`). But an environment variable pulled from a Secret would still be a big step in the right direction.
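To illustrate the preferred pattern, here is a sketch of a Deployment mounting the token Secret as a file; all names and the mount path are placeholder assumptions, not values from the chart:

```yaml
# Hypothetical Secret holding the agent token (created out of band, never in Git)
apiVersion: v1
kind: Secret
metadata:
  name: hd-agent-token          # placeholder name
stringData:
  token: <redacted>
---
# Deployment consuming the Secret as a read-only file instead of an env var
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hd-agent                # placeholder name
spec:
  template:
    spec:
      containers:
        - name: hd-agent
          volumeMounts:
            - name: token
              mountPath: /etc/fivetran/token   # path is an assumption
              readOnly: true
      volumes:
        - name: token
          secret:
            secretName: hd-agent-token
```

With this shape the token never appears in `kubectl describe pod` output, only the name of the Secret being mounted.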
For now, we used the following workaround: take your Helm chart, render the raw Kubernetes manifests, remove the `token` key from the ConfigMap, and modify the Deployment to load the token from a Secret as an environment variable. This worked and didn't require any code changes inside the container.
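Roughly, the modified manifests look like this; resource names, the env var name, and the Secret key are illustrative assumptions rather than the chart's actual values:

```yaml
# ConfigMap as rendered by the chart, with the token key removed
apiVersion: v1
kind: ConfigMap
metadata:
  name: hd-agent-config          # illustrative name
data: {}                         # token key deleted; other rendered keys unchanged
---
# Deployment (truncated) loading the token from a Secret instead
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hd-agent                 # illustrative name
spec:
  template:
    spec:
      containers:
        - name: hd-agent
          env:
            - name: TOKEN        # env var name is an assumption
              valueFrom:
                secretKeyRef:
                  name: hd-agent-token   # Secret created out of band
                  key: token
```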
We then deploy the resulting `plaintext-workaround.yaml` via Argo CD for now, until the Helm chart is modified to support a Kubernetes Secret natively.
For some other background context: we use Terraform to manage a `fivetran_hybrid_deployment_agent` resource, then use its resource attributes to store the generated token in our cloud secrets manager, so the token is never stored outside secure locations (our Terraform backend uses encrypted storage as well). We then use External Secrets Operator to replicate the secrets-manager value into Kubernetes. This pattern is flexible for us across a variety of vendors and adds some resiliency via store-and-forward during the rare occasions the cloud provider's secrets-manager API has an outage.
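The last hop of that flow can be sketched as an `ExternalSecret`; the SecretStore name and the secrets-manager key path are assumptions for illustration:

```yaml
# External Secrets Operator resource replicating the secrets-manager
# value into a Kubernetes Secret the Deployment can consume
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: hd-agent-token            # illustrative name
spec:
  refreshInterval: 1h
  secretStoreRef:
    name: cloud-secrets           # illustrative ClusterSecretStore
    kind: ClusterSecretStore
  target:
    name: hd-agent-token          # Kubernetes Secret to create/keep in sync
  data:
    - secretKey: token
      remoteRef:
        key: fivetran/hda-token   # secrets-manager path is an assumption
```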
Thanks for considering this request.