
Remove pool parameter from the groupsnapshot/groupreplicationclass parameters #5082

Open
Madhu-1 opened this issue Jan 16, 2025 · 5 comments
Labels
component/rbd Issues related to RBD dependency/ceph depends on core Ceph functionality


@Madhu-1
Collaborator

Madhu-1 commented Jan 16, 2025

Describe the feature you'd like to have

Currently, ceph-csi expects the `pool` parameter to be set in the groupsnapshot and groupreplication classes. This dependency on the pool creates a problem: the pool cannot be deleted even when there is no PVC/VolumeSnapshot left in it. ceph-csi should instead store the omap data dynamically in the required pool/namespace.
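For context, ceph-csi tracks the relationship between CSI request names and the generated RBD group names in RADOS omap key/value pairs stored in the pool named by the class. A minimal sketch of that mapping, using a plain dict as a stand-in for a RADOS omap object (the key formats and names here are illustrative, not ceph-csi's actual ones):

```python
# Illustrative sketch: ceph-csi keeps a bidirectional mapping between CSI
# request names and generated RBD group names in RADOS omap entries.
# A dict stands in for the omap object; real code would use librados omap calls.

def store_group_mapping(omap, req_name, group_name):
    """Record req-name -> group-name and the reverse lookup entry."""
    omap["csi.volume.group." + req_name] = group_name      # hypothetical key format
    omap["csi.group." + group_name + ".name"] = req_name   # reverse mapping

def lookup_group(omap, req_name):
    """Resolve a CSI request name back to its RBD group name."""
    return omap.get("csi.volume.group." + req_name)

# stand-in for an omap object living in the pool named in the class
metadata_pool_omap = {}
store_group_mapping(metadata_pool_omap, "snap-group-1", "rbd-group-abc123")
print(lookup_group(metadata_pool_omap, "snap-group-1"))  # rbd-group-abc123
```

Because this mapping lives in one specific pool, that pool must outlive every group whose metadata it holds, which is exactly the coupling this issue proposes to remove.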

What is the value to the end user? (why is it a priority?)

Users no longer need to pass the `pool` parameter in the class, and there is no need to verify that the omap data has been removed before deleting a pool.

How would the end user gain value from having this feature?

More flexibility

The same needs to be done for clusters where we have multiple filesystems for CephFS (this can be tracked as a separate request).

@nixpanic nixpanic added the component/rbd Issues related to RBD label Jan 16, 2025
@nixpanic
Member

@Madhu-1, which pool do you suggest using when grouping RBD volumes that are in different pools?

@Madhu-1
Collaborator Author

Madhu-1 commented Jan 16, 2025

IMO we should store it in both pools, since membership in a volume group is dynamic. The major problem I can see with a single source for the omap data is that we might fail to clean up the complete group when the pool hosting the omap is deleted (the same applies to volume groups), and the pool would need to be long-lived even though it hosts no images, just the metadata.

PS: I haven't thought about the design.
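The duplication idea above could look roughly like this, with dicts standing in for per-pool omap objects (purely a sketch of the proposal, not ceph-csi code; the key format is hypothetical):

```python
# Sketch: duplicate the group's omap entry into every pool that hosts a
# member image, so the mapping survives the deletion of any single pool.

def pools_of(images):
    """Collect the distinct pools hosting the grouped images."""
    return {img["pool"] for img in images}

def replicate_group_omap(per_pool_omaps, images, req_name, group_name):
    """Write the group mapping into the omap of each involved pool."""
    for pool in pools_of(images):
        omap = per_pool_omaps.setdefault(pool, {})
        omap["csi.volume.group." + req_name] = group_name  # hypothetical key

images = [{"name": "img1", "pool": "pool-a"},
          {"name": "img2", "pool": "pool-b"}]
omaps = {}
replicate_group_omap(omaps, images, "group-req-1", "rbd-group-xyz")
# every involved pool now carries the mapping
assert all("csi.volume.group.group-req-1" in omaps[p] for p in ("pool-a", "pool-b"))
```

The trade-off is that every update now fans out to N pools, so the copies can drift if a write to one pool fails part-way; a real design would need a reconciliation or ordering rule for that.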

@nixpanic
Member

This isn't trivial for the RBD-group metadata itself, as it is maintained by librbd. Duplicating that metadata to all pools that have images in the group sounds nice, though; I just don't know whether the RBD-group APIs allow such a thing at the moment.

@nixpanic nixpanic added the dependency/ceph depends on core Ceph functionality label Jan 16, 2025
@Madhu-1
Collaborator Author

Madhu-1 commented Jan 16, 2025

@nixpanic I was talking only about the omap metadata that ceph-csi stores for the mapping. I am not sure about the metadata you are referring to; is it the same or something else?

@Madhu-1
Collaborator Author

Madhu-1 commented Jan 16, 2025

As I mentioned, I haven't thought about the design, but I thought it would be good to have a discussion about it and see whether we can remove the hard dependency when ceph-csi uses multiple pools in a cluster.

Can ceph-csi also choose the pool from one of the images that are meant to be grouped, and use that pool for the metadata, rather than depending on the user to provide a pool that says where the group should be created and where ceph-csi should store the omap? At least we could make the parameter optional.
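The fallback suggested here could be sketched as: honor an explicit `pool` parameter from the class if present, otherwise derive the metadata pool from the first image being grouped (illustrative only; parameter and field names are assumptions):

```python
# Sketch: make the class's pool parameter optional by falling back to the
# pool of the first image that is to be grouped.

def pick_metadata_pool(class_params, images):
    """Prefer an explicit 'pool' parameter; otherwise use the pool of the
    first grouped image. Fail if neither source is available."""
    pool = class_params.get("pool")
    if pool:
        return pool
    if images:
        return images[0]["pool"]
    raise ValueError("no pool parameter and no images to derive a pool from")

# with an explicit parameter, the class wins
assert pick_metadata_pool({"pool": "meta"}, [{"pool": "a"}]) == "meta"
# without it, the first image's pool is used
assert pick_metadata_pool({}, [{"pool": "a"}, {"pool": "b"}]) == "a"
```

One caveat with this fallback is determinism: the derived pool would still need to be recorded somewhere stable (or recomputed the same way every time) so that later delete/lookup calls find the omap in the same pool.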
