
Enable specifying target size ratio for block, file and object pools #2979

Open: wants to merge 2 commits into base: main

Conversation

@malayparida2000 (Contributor) commented Jan 28, 2025

Ref-https://issues.redhat.com/browse/RHSTOR-5690

Configuring the target size ratio enables Ceph to adjust PGs based on the anticipated usage of the pools. Currently all the data pools (RBD/CephFS/object) have a target_size_ratio of 0.49. Having the same ratio for all data pools causes under-allocation of PGs for some pools and over-allocation for others. Based on the expected usage of the pools, the target size ratio can now be set per pool.
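For context, target_size_ratio is a hint to Ceph's PG autoscaler about the fraction of cluster capacity a pool is expected to consume, so PG counts can be weighted per pool. Below is a minimal Go sketch of the idea; the 0.49 default comes from the current code, while the other ratio values and the helper name are illustrative assumptions, not the PR's actual implementation:

package main

import "fmt"

// Hypothetical per-storage-type defaults; only 0.49 is from the current
// code, the other values are illustrative assumptions.
var defaultTargetSizeRatio = map[string]float64{
	"block":  0.49,
	"file":   0.25,
	"object": 0.25,
}

// targetSizeRatioFor returns the ratio hinting the PG autoscaler at the
// expected share of cluster capacity for a data pool of this type.
func targetSizeRatioFor(storageType string) float64 {
	if r, ok := defaultTargetSizeRatio[storageType]; ok {
		return r
	}
	return 0.49 // fall back to the old uniform default
}

func main() {
	for _, t := range []string{"block", "file", "object"} {
		fmt.Printf("%s data pool -> target_size_ratio %.2f\n", t, targetSizeRatioFor(t))
	}
}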

@malayparida2000 force-pushed the target_size_ratio branch 2 times, most recently from 7b1b066 to d1fe711 on January 29, 2025 12:56
Although currently we support specifying only the target size ratio,
this field can be used to support more fields in the future.

Signed-off-by: Malay Kumar Parida <[email protected]>
@malayparida2000 (Contributor Author):

/cc @iamniting @travisn

@openshift-ci openshift-ci bot requested review from iamniting and travisn January 29, 2025 13:16
-	//lint:ignore ST1017 required to compare it directly
-	if "data" == poolType {
-		crs.TargetSizeRatio = .49
+	if poolType == "data" {
Member:

Can you please create const variables for block, file, etc. and use them in all places?

Contributor Author:

Changed
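
The constants themselves are not shown in this thread; a minimal sketch of what they might look like, where poolTypeData and storageTypeBlock appear in the diff further below and the package name and remaining constant names are assumptions:

package storagecluster

// Sketch only: poolTypeData and storageTypeBlock are taken from the later
// diff; the other constant names are assumed for illustration.
const (
	poolTypeData     = "data"
	poolTypeMetadata = "metadata"

	storageTypeBlock  = "block"
	storageTypeFile   = "file"
	storageTypeObject = "object"
)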

var definedSize float64

switch storageType {
case "block":
Member:

Don't we need a case for the nfs and the API changes as well?

Contributor Author:

According to the design doc discussion, we don't need to expose this for the nfs block pool or the replica-1 block pool.

@malayparida2000 (Contributor Author) commented Jan 30, 2025:

https://docs.google.com/document/d/1BOag-Hm9vpmYnpW76MstJXBdDCyAlYQf0rexpdKZnCA/edit?disco=AAABUdNdABs
But just to confirm again, @travisn, do we need to:

  1. expose a field for the nfs block pool?
  2. use the same value in the nfs block pool as the default block pool?
  3. keep the nfs block pool at the default 0.49 (current behaviour)?

Contributor:

For NFS, are you referring to the .nfs CephBlockPool? That is just a metadata pool, so it doesn't need the targetSizeRatio to be set. NFS uses a data pool from CephFS.

@iamniting (Member):

/test ocs-operator-bundle-e2e-aws

Configuring the target size ratio enables Ceph to adjust PGs based on the anticipated usage of the pools. Currently all the data pools (RBD/CephFS/object) have a target_size_ratio of 0.49. Having the same ratio for all data pools causes under-allocation of PGs for some pools and over-allocation for others. Based on the expected usage of the pools, the target size ratio can now be set per pool.

Signed-off-by: Malay Kumar Parida <[email protected]>
openshift-ci bot commented Jan 30, 2025

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: malayparida2000
Once this PR has been reviewed and has the lgtm label, please ask for approval from iamniting. For more information see the Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@@ -199,7 +199,7 @@ func (o *ocsCephBlockPools) reconcileNFSCephBlockPool(r *StorageClusterReconcile
 	cephBlockPool.Spec.PoolSpec.DeviceClass = storageCluster.Status.DefaultCephDeviceClass
 	cephBlockPool.Spec.EnableCrushUpdates = true
 	cephBlockPool.Spec.PoolSpec.FailureDomain = getFailureDomain(storageCluster)
-	cephBlockPool.Spec.PoolSpec.Replicated = generateCephReplicatedSpec(storageCluster, "data")
+	cephBlockPool.Spec.PoolSpec.Replicated = generateCephReplicatedSpec(storageCluster, poolTypeData, storageTypeBlock)
@iamniting (Member) commented Jan 30, 2025:

Don't we need a separate one here for nfs, as it was earlier?

Contributor Author:

Have asked Travis what we need to do for nfs here: #2979 (comment)


switch storageType {
case storageTypeBlock:
	definedRatio = initData.Spec.ManagedResources.CephBlockPools.PoolSpec.Replicated.TargetSizeRatio
Contributor:

Actually, the PoolSpec.TargetSizeRatio was deprecated in rook a long time ago, but we have kept it for backward compatibility. Instead, let's use the parameters:

Suggested change:
-definedRatio = initData.Spec.ManagedResources.CephBlockPools.PoolSpec.Replicated.TargetSizeRatio
+definedRatio = initData.Spec.ManagedResources.CephBlockPools.PoolSpec.Parameters["target_size_ratio"]
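
As a follow-up note: Rook's PoolSpec.Parameters holds string values, so a ratio read from it would need to be parsed before being used as a float. A minimal sketch under that assumption, using a hypothetical helper that is not part of this PR:

package main

import (
	"fmt"
	"strconv"
)

// Hypothetical helper, not from the PR: read target_size_ratio from the pool
// spec's Parameters map (string-valued in Rook) and fall back to a default
// when the key is unset or not a valid positive float.
func targetSizeRatioFromParameters(params map[string]string, fallback float64) float64 {
	if s, ok := params["target_size_ratio"]; ok {
		if v, err := strconv.ParseFloat(s, 64); err == nil && v > 0 {
			return v
		}
	}
	return fallback
}

func main() {
	params := map[string]string{"target_size_ratio": "0.25"}
	fmt.Println(targetSizeRatioFromParameters(params, 0.49)) // 0.25
	fmt.Println(targetSizeRatioFromParameters(nil, 0.49))    // 0.49
}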
