Include unhealthy when retrieving all backends and remove "active" status #315
Description of your changes
This PR was originally intended simply to include unhealthy backends when fetching from the backendstore (see the first two commits). Previously, unhealthy backends were not iterated over during an update, so the CR status was rewritten with the unhealthy backend(s) omitted altogether. It is better for unhealthy backend(s) to remain in the status so that PC can continue attempting to reconcile the CR and wait for the backend(s) to recover (exponential back-off safeguards against spamming requests in the event of a long-term outage).
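The first change can be sketched roughly as follows. This is a minimal illustration, not the actual provider-ceph code: the `BackendStore`, `backend`, and health types here are simplified stand-ins, and the real signatures may differ. The point is that retrieval no longer filters on health, so unhealthy backends still appear in the CR status and keep being reconciled.

```go
package main

import (
	"fmt"
	"sync"
)

// HealthStatus is a hypothetical health marker; the real types may differ.
type HealthStatus int

const (
	Healthy HealthStatus = iota
	Unhealthy
)

// backend is a simplified stand-in for the real backend client.
type backend struct {
	health HealthStatus
}

// BackendStore is a minimal sketch of a store keyed by ProviderConfig name.
type BackendStore struct {
	mu       sync.RWMutex
	backends map[string]*backend
}

// GetAllBackends returns every stored backend, healthy or not, so that
// unhealthy backends are not silently dropped from the CR status.
func (s *BackendStore) GetAllBackends() map[string]*backend {
	s.mu.RLock()
	defer s.mu.RUnlock()
	out := make(map[string]*backend, len(s.backends))
	for name, b := range s.backends {
		out[name] = b // no health filter: unhealthy backends are included
	}
	return out
}

func main() {
	s := &BackendStore{backends: map[string]*backend{
		"pc-a": {health: Healthy},
		"pc-b": {health: Unhealthy},
	}}
	fmt.Println(len(s.GetAllBackends())) // prints 2: the unhealthy backend is included
}
```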
After further digging, this PR has a second goal: removing the concept of "active" backends. Previously, a backend was marked "inactive" when its ProviderConfig was deleted; if that ProviderConfig was recreated, the status reverted to "active". This is unnecessarily complicated and makes the backendstore more difficult to manage. This PR takes a much simpler approach: add or delete the backend from the backendstore when its ProviderConfig is created or deleted, respectively. This makes management of the store much easier and removes confusion between the "active" and "health" statuses (which were conflated in a couple of places in the code).
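The simplified lifecycle could look something like the sketch below, again using hypothetical names rather than the real provider-ceph API. A backend exists in the store exactly while its ProviderConfig exists; there is no separate "active" flag to toggle.

```go
package main

import (
	"fmt"
	"sync"
)

// backend is a simplified stand-in for the real backend client.
type backend struct{}

// BackendStore holds backends keyed by ProviderConfig name.
type BackendStore struct {
	mu       sync.Mutex
	backends map[string]*backend
}

// AddBackend runs when a ProviderConfig is created: the backend simply
// exists in the store, with no "active"/"inactive" state to track.
func (s *BackendStore) AddBackend(pcName string, b *backend) {
	s.mu.Lock()
	defer s.mu.Unlock()
	s.backends[pcName] = b
}

// DeleteBackend runs when a ProviderConfig is deleted: the backend is
// removed outright rather than being marked "inactive".
func (s *BackendStore) DeleteBackend(pcName string) {
	s.mu.Lock()
	defer s.mu.Unlock()
	delete(s.backends, pcName)
}

func main() {
	s := &BackendStore{backends: map[string]*backend{}}
	s.AddBackend("pc-a", &backend{})
	s.DeleteBackend("pc-a")
	fmt.Println(len(s.backends)) // prints 0: deletion removes the entry entirely
}
```

If the ProviderConfig is later recreated, the watch handler calls `AddBackend` again with a fresh entry, so no "reactivation" logic is needed.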
I have:
- Run `make reviewable` to ensure this PR is ready for review.
- Run `make ceph-chainsaw` to validate these changes against Ceph. This step is not always necessary. However, for changes related to S3 calls it is sensible to validate against an actual Ceph cluster. Localstack is used in our CI Chainsaw suite for convenience, and there can be disparity in S3 behaviours between it and Ceph. See `docs/TESTING.md` for information on how to run tests against a Ceph cluster.
- Added `backport release-x.y` labels to auto-backport this PR if necessary.

How has this code been tested
Updated unit tests; passes existing CI.