Fixes issue #398: decreasing replicas makes zookeeper unrecoverable when zookeeper is not running #399
Conversation
Codecov Report
@@            Coverage Diff             @@
##           master     #399      +/-   ##
==========================================
- Coverage   84.11%   84.04%   -0.08%
==========================================
  Files          12       12
  Lines        1643     1667      +24
==========================================
+ Hits         1382     1401      +19
- Misses        177      185       +8
+ Partials       84       81       -3
-	if instance.Spec.Replicas == instance.Status.ReadyReplicas && (!instance.Status.MetaRootCreated) {
+	// instance.Spec.Replicas is just an expected value that we set; it may not yet have taken
+	// effect in k8s. So we check instance.Status.Replicas against ReadyReplicas, which reflects
+	// the true status of the pods.
+	if instance.Status.Replicas == instance.Status.ReadyReplicas && (!instance.Status.MetaRootCreated) {
If we don't compare with spec.Replicas, the status won't be shown correctly in the case of scaling, right?
I am afraid replacing instance.Spec.Replicas with instance.Status.Replicas won't work for the initial ZK cluster deployment and scale-up scenarios.
The very next line, which logs Cluster is Ready, would not be true if we only compare against the status.
Remember, when the cluster is initially created, the pods are created one by one.
If we adopt the instance.Status.Replicas == instance.Status.ReadyReplicas condition, then we will move on to the next step as soon as the first pod is created, without waiting for the rest of the replicas.
Yes, I get it.
This condition only takes effect on scale-up, so it won't affect shrinking.
Would it be more appropriate to judge whether the cluster is ready with the condition
instance.Spec.Replicas == instance.Status.ReadyReplicas && instance.Spec.Replicas == instance.Status.Replicas?
@stop-coding one more question: how did you make the zookeeper cluster's service stop? Would it be possible to add a test for the same?
…make cluster of zookeeper Unrecoverable Signed-off-by: hongchunhua <[email protected]>
…recoverable when zookeeper not running. Signed-off-by: hongchunhua <[email protected]>
Signed-off-by: hongchunhua <[email protected]>
Signed-off-by: hongchunhua <[email protected]>
@anishakj
…ator into fix_issue_398 Signed-off-by: hongchunhua <[email protected]>
* add disableFinalizer flag and skip appending finalizers if set to true. Update charts. Signed-off-by: Aaron <[email protected]>
* set DisableFinalizer in main. Signed-off-by: Aaron <[email protected]>
* add README. Signed-off-by: Aaron <[email protected]>
* add UTs. Signed-off-by: Aaron <[email protected]>
…ator into fix_issue_398
Change log description
Fixes the bug that decreasing replicas will make zookeeper unrecoverable when zookeeper is not running.
Purpose of the change
Fixes #398
What the code does
Adds protection for updating the StatefulSet when zookeeper is not running:
If zookeeper is not running, updating the replicas status is prohibited until zookeeper resumes.
When the user decreases the replicas value, the node is first removed via reconfig.
The preStop hook keeps removing the node with reconfig before the pod exits.
How to verify it
Is zk-0 all right?