Replies: 4 comments 1 reply
-
You have to share your configuration, the logs from all the pods, etc. Without that, it looks like your ZooKeeper does not work, but not much more can be said.
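As a minimal sketch of how that information could be collected, the commands below assume a namespace `kafka`, a cluster named `my-cluster`, and the default cluster operator Deployment name `strimzi-cluster-operator`; all of these are placeholders to adjust for your own install.

```bash
# Placeholders: namespace "kafka" and cluster "my-cluster" are assumptions.
NAMESPACE=kafka
CLUSTER=my-cluster

# The Kafka custom resource, including its status conditions
kubectl get kafka "$CLUSTER" -n "$NAMESPACE" -o yaml

# Cluster operator logs (the Deployment name may differ depending on the install method)
kubectl logs -n "$NAMESPACE" deployment/strimzi-cluster-operator

# Logs from all pods belonging to the Kafka cluster (brokers, ZooKeeper, entity operator)
for pod in $(kubectl get pods -n "$NAMESPACE" -l strimzi.io/cluster="$CLUSTER" -o name); do
  echo "=== $pod ==="
  kubectl logs -n "$NAMESPACE" "$pod" --all-containers
done
```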
-
To me it looks more like the Strimzi operator is in a non-working state, since restarting it resolves the issue and allows the operator to communicate with ZooKeeper again.
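For reference, a sketch of the restart workaround being described, assuming the default Deployment name and a `kafka` namespace (adjust both to your install):

```bash
# Restart the cluster operator and wait for the new pod to come up.
kubectl rollout restart deployment/strimzi-cluster-operator -n kafka
kubectl rollout status deployment/strimzi-cluster-operator -n kafka
```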
-
I think I may be seeing the same issue occasionally in our OpenShift Knative CI, currently with Strimzi 0.40.0; see the "kafka-" logs in those runs. The Strimzi operator install and the Kafka and KafkaUser resources are created in https://github.com/openshift-knative/serverless-operator/blob/main/hack/lib/strimzi.bash
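One way a CI script like that can surface this failure early (rather than timing out later in the suite) is to gate on the Kafka resource's `Ready` condition and dump operator logs on failure. A hedged sketch, assuming a cluster named `my-cluster` in namespace `kafka`:

```bash
# Assumptions: cluster "my-cluster", namespace "kafka", 10-minute readiness budget.
if ! kubectl wait kafka/my-cluster --for=condition=Ready --timeout=600s -n kafka; then
  echo "Kafka did not become ready; dumping cluster operator logs for diagnosis"
  kubectl logs -n kafka deployment/strimzi-cluster-operator --tail=500
  exit 1
fi
```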
-
I encountered the same issue when running a scaled-down Kafka/ZooKeeper deployment within a kind cluster. I found the CoreDNS fix here: kubernetes-sigs/kind#3713 (comment), which appears to be getting fixed elsewhere too.
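The actual fix is in the linked comment; as a quick diagnostic only, one can check whether DNS resolution of the ZooKeeper service is the problem by resolving it from a throwaway pod. A sketch, assuming a cluster named `my-cluster` in namespace `kafka` and the `<cluster>-zookeeper-client` service name Strimzi creates:

```bash
# Assumptions: namespace "kafka", cluster "my-cluster".
kubectl run dns-check -n kafka --rm -it --restart=Never --image=busybox:1.36 -- \
  nslookup my-cluster-zookeeper-client.kafka.svc.cluster.local
```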
-
Bug Description
I am currently installing Kafka nightly on three stages on AKS. About 95% of the time it works as expected; about 5% of the time the install fails with the following situation:
If I restart the cluster operator in this situation, everything starts working as expected and the Kafka pods start.
Steps to reproduce
No response
Expected behavior
No response
Strimzi version
0.38.0
Kubernetes version
Kubernetes 1.26.10
Installation method
Helm chart
Infrastructure
Azure AKS
Configuration files and logs
No response
Additional context
No response