Enable specifying target size ratio for block, file and object pools #2979
base: main
Conversation
Force-pushed from 7b1b066 to d1fe711
Although we currently support specifying only the target size ratio, this field can be extended to support more settings in the future. Signed-off-by: Malay Kumar Parida <[email protected]>
Force-pushed from d1fe711 to c4bac83
/cc @iamniting @travisn
-	//lint:ignore ST1017 required to compare it directly
-	if "data" == poolType {
+	if poolType == "data" {
 		crs.TargetSizeRatio = .49
Can you please create const variables for block, file, etc. and use them at all places?
Changed
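For reference, a minimal sketch of what such constants could look like. poolTypeData and storageTypeBlock match identifiers that appear in later hunks of this PR; the remaining names and values are illustrative assumptions, not the PR's final code.

```go
// Hypothetical constants for the pool and storage type strings discussed
// in this review; only poolTypeData and storageTypeBlock are confirmed by
// later hunks, the rest are guesses for illustration.
const (
	poolTypeData     = "data"
	poolTypeMetadata = "metadata" // assumed counterpart to "data"

	storageTypeBlock  = "block"
	storageTypeFile   = "file"   // assumed
	storageTypeObject = "object" // assumed
)
```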
var definedSize float64

switch storageType {
case "block":
Don't we need a case for NFS, and the API changes as well?
According to the design doc discussion, we don't need to expose this for the NFS block pool or the replica-1 block pool.
https://docs.google.com/document/d/1BOag-Hm9vpmYnpW76MstJXBdDCyAlYQf0rexpdKZnCA/edit?disco=AAABUdNdABs
But just to confirm again, @travisn, do we need to
- expose a field for the NFS block pool?
- use the same value in the NFS block pool as the default block pool?
- or should the NFS block pool stay at the default 0.49 (current behaviour)?
For NFS, are you referring to the .nfs CephBlockPool? That is just a metadata pool, so it doesn't need the targetSizeRatio to be set. NFS uses a data pool from CephFS.
/test ocs-operator-bundle-e2e-aws
Configuring the target size ratio enables Ceph to adjust PGs based on the anticipated usage of the pools. Currently all the data pools (RBD/CephFS/object) have a target_size_ratio of 0.49. Having the same ratio for all data pools causes under-allocation of PGs for some pools and over-allocation for others. According to the expected usage of the pools, the target size ratio can now be set per pool. Signed-off-by: Malay Kumar Parida <[email protected]>
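For context, the ratio ultimately lands on the CephBlockPool's pool spec. A minimal sketch using Rook's cephv1 types follows; the helper name, the replica size, and the wiring are illustrative assumptions, not code from this PR.

```go
package storagecluster

import (
	cephv1 "github.com/rook/rook/pkg/apis/ceph.rook.io/v1"
)

// buildDataPoolSpec is a hypothetical helper showing where a per-pool
// target size ratio ends up: in the pool's Parameters map, which Rook
// applies as the pool property read by Ceph's PG autoscaler.
func buildDataPoolSpec(ratio string) cephv1.PoolSpec {
	return cephv1.PoolSpec{
		Replicated: cephv1.ReplicatedSpec{Size: 3},
		Parameters: map[string]string{
			"target_size_ratio": ratio, // e.g. "0.49", the current default for every data pool
		},
	}
}
```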
Force-pushed from c4bac83 to 509637c
[APPROVALNOTIFIER] This PR is NOT APPROVED
This pull-request has been approved by: malayparida2000
The full list of commands accepted by this bot can be found here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing
@@ -199,7 +199,7 @@ func (o *ocsCephBlockPools) reconcileNFSCephBlockPool(r *StorageClusterReconcile
 	cephBlockPool.Spec.PoolSpec.DeviceClass = storageCluster.Status.DefaultCephDeviceClass
 	cephBlockPool.Spec.EnableCrushUpdates = true
 	cephBlockPool.Spec.PoolSpec.FailureDomain = getFailureDomain(storageCluster)
-	cephBlockPool.Spec.PoolSpec.Replicated = generateCephReplicatedSpec(storageCluster, "data")
+	cephBlockPool.Spec.PoolSpec.Replicated = generateCephReplicatedSpec(storageCluster, poolTypeData, storageTypeBlock)
Don't we need a separate one here for NFS, as it was earlier?
Have asked Travis what we need to do for NFS here: #2979 (comment)
switch storageType {
case storageTypeBlock:
	definedRatio = initData.Spec.ManagedResources.CephBlockPools.PoolSpec.Replicated.TargetSizeRatio
Actually, the PoolSpec.TargetSizeRatio was deprecated in rook a long time ago, but we have kept it for backward compatibility. Instead, let's use the parameters:
-definedRatio = initData.Spec.ManagedResources.CephBlockPools.PoolSpec.Replicated.TargetSizeRatio
+definedRatio = initData.Spec.ManagedResources.CephBlockPools.PoolSpec.Parameters["target_size_ratio"]
Ref: https://issues.redhat.com/browse/RHSTOR-5690
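A hedged sketch of how the suggested Parameters field might be consumed: the map stores the ratio as a string, so turning it back into a float for comparison with the 0.49 default needs an explicit parse. The helper name and the fallback logic are assumptions for illustration, not part of this PR.

```go
package storagecluster

import "strconv"

// definedTargetSizeRatio is a hypothetical helper: it reads the optional
// "target_size_ratio" entry from a pool's Parameters map and falls back to
// the current 0.49 default when the value is missing or unparsable.
func definedTargetSizeRatio(params map[string]string) float64 {
	const defaultRatio = 0.49

	v, ok := params["target_size_ratio"]
	if !ok {
		return defaultRatio
	}
	ratio, err := strconv.ParseFloat(v, 64)
	if err != nil || ratio <= 0 || ratio > 1 {
		return defaultRatio
	}
	return ratio
}
```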