Is your feature request related to a problem? Please describe.
I was evaluating the product, and during testing of the deployment I ran into many issues. I went ahead and created a PR with all my findings.
Describe the solution you'd like
Details about my findings:
Helm: with Helm 3.14.2 there is no purge option. It is defined in this project, but when trying to delete with it, it just gives an error. These are the flags helm uninstall actually supports:
```
Flags:
      --cascade string       Must be "background", "orphan", or "foreground". Selects the deletion cascading strategy for the dependents. Defaults to background. (default "background")
      --description string   add a custom description
      --dry-run              simulate a uninstall
  -h, --help                 help for uninstall
      --ignore-not-found     Treat "release not found" as a successful uninstall
      --keep-history         remove all associated resources and mark the release as deleted, but retain the release history
      --no-hooks             prevent hooks from running during uninstallation
      --timeout duration     time to wait for any individual Kubernetes operation (like Jobs for hooks) (default 5m0s)
      --wait                 if set, will wait until all the resources are deleted before returning. It will wait for as long as --timeout
```
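For context, if the playbook drives Helm through Ansible, a hedged sketch of an uninstall task without any purge flag might look like this (assuming the kubernetes.core.helm module; the actual task layout, release name, and namespace in this repo will differ):

```yaml
# Sketch only: assumes kubernetes.core.helm; release/namespace names are hypothetical.
- name: Uninstall the release (Helm 3 has no purge flag; uninstall removes the release record)
  kubernetes.core.helm:
    release_name: example-release        # hypothetical
    release_namespace: example-namespace # hypothetical
    release_state: absent
```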
The Ansible playbook deploys a cluster at version 1.17.3. I checked the workloads against the docs and have a lot of findings related to incorrect usage.
Deployments: they have no field called serviceName; that field is specific to StatefulSets.
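As a minimal sketch (names and image are hypothetical, not from this repo): serviceName is the StatefulSet field that points at a headless Service, and it simply does not exist on the Deployment spec:

```yaml
# Hypothetical sketch: serviceName is only valid on a StatefulSet, never on a Deployment.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: example-db
spec:
  serviceName: example-db-headless   # valid here; a Deployment has no such field
  replicas: 1
  selector:
    matchLabels:
      app: example-db
  template:
    metadata:
      labels:
        app: example-db
    spec:
      containers:
        - name: db
          image: example/db:1.0      # hypothetical image
```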
Clickhouse: the initcounter key was at the wrong level; it should be under spec, not under metadata.labels.
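A small hedged before/after fragment of that move; the value and surrounding keys are illustrative, only the placement matters:

```yaml
# As found (illustrative): the key sits under metadata.labels
# metadata:
#   labels:
#     initcounter: "1"
#
# Corrected placement: the key belongs under spec
spec:
  initcounter: "1"
```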
Elk: in the Service you define nodePort and targetPort for the same port; you can only have one of them.
Etcd: with Ansible a single-node cluster is deployed. The nodeSelector that is defined has two issues: it cannot have an empty value (either an empty string like "" or a boolean needs to be provided), and since it is a single-node cluster the pod will never be schedulable with that label anyway, so it can be removed.
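A hedged sketch of the nodeSelector problem, with a hypothetical label key:

```yaml
# Invalid as found (illustrative): the label value is left empty
# nodeSelector:
#   node-role.example/etcd:
#
# Valid form: quote the value explicitly - or, on a single-node cluster,
# drop the nodeSelector entirely so the pod can actually be scheduled.
nodeSelector:
  node-role.example/etcd: ""   # hypothetical label key
```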
Ingress: extensions/v1beta1 is not valid, and networking.k8s.io/v1beta1 is the correct one. You can check the API versions on the cluster and it returns this one.
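For a 1.17 cluster, a minimal Ingress under the networking.k8s.io/v1beta1 group looks like this (host, service name, and port are hypothetical):

```yaml
# Sketch for Kubernetes 1.17; names and ports are illustrative.
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: example-ingress
spec:
  rules:
    - host: example.local
      http:
        paths:
          - path: /
            backend:
              serviceName: example-svc
              servicePort: 80
```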
Ports, env: there are some indentation inconsistencies. They are not functionally relevant; I was just troubleshooting and trying to understand them. I think a linter is needed, as I saw different practices used across the manifests.
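One option (my suggestion, not something the repo ships today) would be a yamllint configuration committed alongside the manifests, for example:

```yaml
# .yamllint - suggested starting point, not part of the repo today
extends: default
rules:
  indentation:
    spaces: 2
    indent-sequences: consistent
  line-length: disable
```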
Ohsu: a port can't have all four options defined: port/targetPort/containerPort/nodePort. I don't really understand the logic there.
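As a hedged sketch of where each field normally lives (names and port numbers are hypothetical): containerPort goes on the container in the pod template, while port/targetPort/nodePort go on the Service:

```yaml
# Sketch only; names and port numbers are illustrative.
# In the pod template (Deployment/StatefulSet):
#   containers:
#     - name: ohsu
#       ports:
#         - containerPort: 8080
#
# In the Service:
apiVersion: v1
kind: Service
metadata:
  name: ohsu-example
spec:
  type: NodePort
  selector:
    app: ohsu-example
  ports:
    - port: 8080        # cluster-internal Service port
      targetPort: 8080  # container port traffic is forwarded to
      nodePort: 30080   # only meaningful when type is NodePort
```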
Thanos Query: the livenessProbe and readinessProbe were at the wrong indentation, so they failed with an invalid parameter.
ready_probe: same here, the delay and period keys were under tcpSocket, which is incorrect.
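A hedged sketch of the expected layout, assuming TCP probes (port and timings are illustrative): the delay and period fields are siblings of tcpSocket, not children of it:

```yaml
# Sketch only; port and timings are illustrative.
livenessProbe:
  tcpSocket:
    port: 10902
  initialDelaySeconds: 10   # sibling of tcpSocket, not nested under it
  periodSeconds: 30
readinessProbe:
  tcpSocket:
    port: 10902
  initialDelaySeconds: 10
  periodSeconds: 30
```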
Regarding pulling images: I am not sure it is working as desired. I did a full image pull, and then when the pods were starting they got rate limited by Docker Hub for too many pulls. I created a pull secret and used it for pulling. I think it is better to use the built-in mechanism for this than doing pulls and pushing images to the nodes. I understand the need for locally built images, but I just wanted to bring up the cluster and use it.
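By the built-in mechanism I mean imagePullSecrets on the pod spec (or attached to the ServiceAccount); a minimal sketch with hypothetical names:

```yaml
# Sketch only; secret and image names are hypothetical.
# The secret itself would be created with: kubectl create secret docker-registry ...
apiVersion: v1
kind: Pod
metadata:
  name: example-pod
spec:
  imagePullSecrets:
    - name: dockerhub-pull-secret
  containers:
    - name: app
      image: example/app:1.0
```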
Describe alternatives you've considered
I have no alternatives; I am just providing feedback about my findings and solutions. They can be pulled in, or the issue/PR can be closed.