[Bug]: unable to upgrade to 3.9.0: Could not upgrade metadata.version to 21 #11039
Comments
You probably scaled the Kafka cluster down in the past with an older Strimzi version, and Kafka still has this node registered but invisible because of missing APIs. This is not a Strimzi bug but a Kafka KRaft limitation, and it should be addressed only in Kafka 4.0. You have to work around it manually by unregistering the node using the Kafka Admin API.
@scholzj Thanks, that could be it. It seems the command line tools to list and unregister nodes aren't available with 3.9.0. Is this correct?
There was no command line tool for it. But I'm not sure I ever checked in Kafka 3.9; maybe someone added it in that version. You could also try to scale up (add the node reported in the error message) and scale it down again. New Strimzi versions try to work around this Kafka limitation and should unregister the node. But if it was a controller, the scaling is tricky, as that is another unsupported thing :-/. You can also try to add it to the .status.registeredNodeIds list in the Kafka CR with kubectl edit kafka my-cluster --subresource=status to trigger the unregistration.
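For the scale-up / scale-down route, the change is just the replica count of the relevant node pool. A minimal sketch, assuming a KafkaNodePool named brokers in a kafka namespace (both names are placeholders, and note the caveat above about controllers not being safely scalable):

```sh
# Temporarily add a node back by bumping the node pool's replica count.
# The pool name "brokers" and namespace "kafka" are placeholders for this sketch.
kubectl patch kafkanodepool brokers -n kafka --type=merge \
  -p '{"spec":{"replicas":4}}'

# Once the node has registered again, scale back down; recent Strimzi versions
# should then unregister the removed node as part of the scale-down.
kubectl patch kafkanodepool brokers -n kafka --type=merge \
  -p '{"spec":{"replicas":3}}'
```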
Triaged on 23.1.2025: it seems the
Not sure I did the right thing, but it complains that the "given broker ID was not registered" (tried with id >= 3).
How did you determine the right broker id when you scaled down?
0, 1, 2 are the current combined broker/controller nodes, and at some point there were brokers 0, 1, 2 and controller-only nodes 3, 4, 5, IIRC. At least I assumed those were the ids needed for kafka-cluster.sh unregister.
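For reference, the unregister attempt looks roughly like the following. This is a minimal sketch, assuming the kafka-cluster.sh tool shipped with the Kafka 3.9 distribution is run from inside a broker pod against a reachable plain listener (the bootstrap address and the id are assumptions taken from this thread):

```sh
# Sanity check that the tool can reach the cluster.
./bin/kafka-cluster.sh cluster-id --bootstrap-server localhost:9092

# Try to unregister one of the previously removed nodes (3, 4, 5 in this thread).
# As noted in this thread, unregistration applies to brokers; for a removed
# controller it fails with "given broker ID was not registered".
./bin/kafka-cluster.sh unregister --bootstrap-server localhost:9092 --id 3
```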
So you scaled down controllers, which is not really supported by KRaft right now. The quorum is static; dynamic quorum (which would allow controllers to be scaled down) will come with Kafka 4.x. I guess that is why the unregister doesn't work: it only applies to brokers.
Also, can you describe the steps you took to go from brokers 0,1,2 (in one node pool) and controllers 3,4,5 (in another node pool) to combined brokers/controllers 0,1,2? I could try to replicate your setup.
@ppatierno If you mean "You can also try to add it to the .status.registeredNodeIds list in the Kafka CR with kubectl edit kafka my-cluster --subresource=status to trigger the unregistration." then yes, I did try that (by adding 3,4,5 to .status.registeredNodeIds, which contained 0,1,2). It had no effect and only 0,1,2 remained. It was a little while ago, but I recall doing this:
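The exact steps were not captured above. For reference, a status edit of this kind would look roughly like the following sketch; the resource name, namespace, and id list are taken from the discussion rather than the original report, and as noted it did not help in this case:

```sh
# Edit the Kafka CR status subresource directly (needs a kubectl that supports
# the --subresource flag, available from Kubernetes 1.24 onwards).
kubectl edit kafka my-cluster -n kafka --subresource=status

# Or patch it non-interactively; the full desired list is supplied because a
# merge patch replaces the whole array (ids here are from the discussion above).
kubectl patch kafka my-cluster -n kafka --subresource=status --type=merge \
  -p '{"status":{"registeredNodeIds":[0,1,2,3,4,5]}}'
```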
Bug Description
I have upgraded the operator from 0.44.0 to 0.45.0 and then edited the Kafka CR to change spec.kafka.version from 3.8.0 to 3.9.0. The pods were recreated with the new image, but the upgrade did not complete. The cluster now has this status:
I tried a manual upgrade:
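The exact commands from the original report were not captured above. For reference, a manual metadata.version bump in KRaft mode typically looks something like the following sketch; the bootstrap address and version string are assumptions, with feature level 21 corresponding to the 3.9 metadata version mentioned in the title:

```sh
# Check the currently finalized metadata.version (run from inside a broker pod,
# against a reachable plain listener).
./bin/kafka-features.sh --bootstrap-server localhost:9092 describe

# Try to finalize metadata.version at the 3.9 level (feature level 21,
# per the error in the issue title).
./bin/kafka-features.sh --bootstrap-server localhost:9092 upgrade --metadata 3.9
```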
Steps to reproduce
No response
Expected behavior
No response
Strimzi version
0.45.0
Kubernetes version
1.27.11
Installation method
Helm
Infrastructure
Bare-metal
Configuration files and logs
No response
Additional context
No response