Enable specifying target size ratio for block, file and object pools #2979

Open · wants to merge 2 commits into base: main
6 changes: 6 additions & 0 deletions api/v1/storagecluster_types.go
@@ -246,6 +246,9 @@ type ManageCephBlockPools struct {
// +kubebuilder:validation:MaxLength=253
// +kubebuilder:validation:Pattern=^[a-z0-9]([-a-z0-9]*[a-z0-9])?(\\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*$
VirtualizationStorageClassName string `json:"virtualizationStorageClassName,omitempty"`
// PoolSpec specifies the pool specification for the default cephBlockPool
// Currently only the target size ratio field is used; support for other fields can be added later
PoolSpec rookCephv1.PoolSpec `json:"poolSpec,omitempty"`
}

// ManageCephNonResilientPools defines how to reconcile ceph non-resilient pools
@@ -294,6 +297,9 @@ type ManageCephObjectStores struct {
// +kubebuilder:validation:MaxLength=253
// +kubebuilder:validation:Pattern=^[a-z0-9]([-a-z0-9]*[a-z0-9])?(\\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*$
StorageClassName string `json:"storageClassName,omitempty"`
// DataPoolSpec specifies the pool specification for the default cephObjectStore data pool
// Currently only the target size ratio field is used; support for other fields can be added later
DataPoolSpec rookCephv1.PoolSpec `json:"dataPoolSpec,omitempty"`
}

// ManageCephObjectStoreUsers defines how to reconcile CephObjectStoreUsers
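
For context, here is a minimal sketch of how a StorageCluster spec could set the new fields from this file; it is not taken from the PR. The import paths, the ocsv1/rookCephv1 aliases, the surrounding StorageClusterSpec/ManagedResourcesSpec type names, and the 0.8/0.3 ratios are assumptions for illustration; only the PoolSpec/DataPoolSpec fields and Replicated.TargetSizeRatio come from the diff above.

```go
package main

import (
	"fmt"

	rookCephv1 "github.com/rook/rook/pkg/apis/ceph.rook.io/v1"

	ocsv1 "github.com/red-hat-storage/ocs-operator/api/v4/v1"
)

func main() {
	// Hypothetical StorageCluster using the new pool spec fields; per the API comments,
	// only Replicated.TargetSizeRatio is expected to be honored for now.
	sc := ocsv1.StorageCluster{
		Spec: ocsv1.StorageClusterSpec{
			ManagedResources: ocsv1.ManagedResourcesSpec{
				CephBlockPools: ocsv1.ManageCephBlockPools{
					PoolSpec: rookCephv1.PoolSpec{
						Replicated: rookCephv1.ReplicatedSpec{TargetSizeRatio: 0.8},
					},
				},
				CephObjectStores: ocsv1.ManageCephObjectStores{
					DataPoolSpec: rookCephv1.PoolSpec{
						Replicated: rookCephv1.ReplicatedSpec{TargetSizeRatio: 0.3},
					},
				},
			},
		},
	}
	fmt.Println("block pool target size ratio:", sc.Spec.ManagedResources.CephBlockPools.PoolSpec.Replicated.TargetSizeRatio)
}
```

In YAML terms this should correspond to setting spec.managedResources.cephBlockPools.poolSpec.replicated.targetSizeRatio (and cephObjectStores.dataPoolSpec for the object store data pool), matching the json tags in the diff.
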
4 changes: 3 additions & 1 deletion api/v1/zz_generated.deepcopy.go

Some generated files are not rendered by default.

426 changes: 426 additions & 0 deletions config/crd/bases/ocs.openshift.io_storageclusters.yaml

Large diffs are not rendered by default.

6 changes: 3 additions & 3 deletions controllers/storagecluster/cephblockpools.go
@@ -91,7 +91,7 @@ func (o *ocsCephBlockPools) reconcileCephBlockPool(r *StorageClusterReconciler,
cephBlockPool.Spec.PoolSpec.DeviceClass = storageCluster.Status.DefaultCephDeviceClass
cephBlockPool.Spec.PoolSpec.EnableCrushUpdates = true
cephBlockPool.Spec.PoolSpec.FailureDomain = getFailureDomain(storageCluster)
- cephBlockPool.Spec.PoolSpec.Replicated = generateCephReplicatedSpec(storageCluster, "data")
+ cephBlockPool.Spec.PoolSpec.Replicated = generateCephReplicatedSpec(storageCluster, poolTypeData, storageTypeBlock)
cephBlockPool.Spec.PoolSpec.EnableRBDStats = true

// Since provider mode handles mirroring, we only need to handle for internal mode
@@ -151,7 +151,7 @@ func (o *ocsCephBlockPools) reconcileMgrCephBlockPool(r *StorageClusterReconcile
cephBlockPool.Spec.PoolSpec.DeviceClass = storageCluster.Status.DefaultCephDeviceClass
cephBlockPool.Spec.PoolSpec.EnableCrushUpdates = true
cephBlockPool.Spec.PoolSpec.FailureDomain = getFailureDomain(storageCluster)
- cephBlockPool.Spec.PoolSpec.Replicated = generateCephReplicatedSpec(storageCluster, "metadata")
+ cephBlockPool.Spec.PoolSpec.Replicated = generateCephReplicatedSpec(storageCluster, poolTypeMetaData, storageTypeBlock)
util.AddLabel(cephBlockPool, util.ForbidMirroringLabel, "true")

return controllerutil.SetControllerReference(storageCluster, cephBlockPool, r.Scheme)
@@ -199,7 +199,7 @@ func (o *ocsCephBlockPools) reconcileNFSCephBlockPool(r *StorageClusterReconcile
cephBlockPool.Spec.PoolSpec.DeviceClass = storageCluster.Status.DefaultCephDeviceClass
cephBlockPool.Spec.EnableCrushUpdates = true
cephBlockPool.Spec.PoolSpec.FailureDomain = getFailureDomain(storageCluster)
- cephBlockPool.Spec.PoolSpec.Replicated = generateCephReplicatedSpec(storageCluster, "data")
+ cephBlockPool.Spec.PoolSpec.Replicated = generateCephReplicatedSpec(storageCluster, poolTypeData, storageTypeBlock)
@iamniting (Member) commented on Jan 30, 2025:

Don't we need a separate one here for NFS, as it was earlier?

Contributor Author replied:

I have asked Travis what we need to do for NFS here:
#2979 (comment)

cephBlockPool.Spec.PoolSpec.EnableRBDStats = true
util.AddLabel(cephBlockPool, util.ForbidMirroringLabel, "true")

4 changes: 2 additions & 2 deletions controllers/storagecluster/cephblockpools_test.go
@@ -157,7 +157,7 @@ func assertCephBlockPools(t *testing.T, reconciler StorageClusterReconciler, cr
DeviceClass: cr.Status.DefaultCephDeviceClass,
EnableCrushUpdates: true,
FailureDomain: getFailureDomain(cr),
- Replicated: generateCephReplicatedSpec(cr, "data"),
+ Replicated: generateCephReplicatedSpec(cr, poolTypeData, storageTypeBlock),
EnableRBDStats: true,
},
},
@@ -204,7 +204,7 @@ func assertCephNFSBlockPool(t *testing.T, reconciler StorageClusterReconciler, c
DeviceClass: cr.Status.DefaultCephDeviceClass,
EnableCrushUpdates: true,
FailureDomain: getFailureDomain(cr),
- Replicated: generateCephReplicatedSpec(cr, "data"),
+ Replicated: generateCephReplicatedSpec(cr, poolTypeData, storageTypeBlock),
EnableRBDStats: true,
},
Name: ".nfs",
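
A hedged sketch of a unit test for the new ratio selection logic (the getTargetSizeRatio helper introduced in controllers/storagecluster/generate.go further below); it is not part of this PR. It assumes an in-package test, that testify's assert is available as in the existing tests, that the ocsv1 import path shown is correct, and that an otherwise empty StorageCluster is acceptable input to the helper:

```go
package storagecluster

import (
	"testing"

	"github.com/stretchr/testify/assert"

	ocsv1 "github.com/red-hat-storage/ocs-operator/api/v4/v1"
)

func TestGetTargetSizeRatio(t *testing.T) {
	cr := &ocsv1.StorageCluster{}

	// Nothing configured: every storage type falls back to the historical 0.49 default.
	assert.Equal(t, 0.49, getTargetSizeRatio(cr, storageTypeBlock))
	assert.Equal(t, 0.49, getTargetSizeRatio(cr, storageTypeFile))
	assert.Equal(t, 0.49, getTargetSizeRatio(cr, storageTypeObject))

	// A ratio set on the new CephBlockPools.PoolSpec is returned as-is.
	cr.Spec.ManagedResources.CephBlockPools.PoolSpec.Replicated.TargetSizeRatio = 0.75
	assert.Equal(t, 0.75, getTargetSizeRatio(cr, storageTypeBlock))

	// Unknown storage types also get the default.
	assert.Equal(t, 0.49, getTargetSizeRatio(cr, "unknown"))
}
```
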
8 changes: 8 additions & 0 deletions controllers/storagecluster/cephcluster.go
@@ -48,6 +48,14 @@ const (
diskSpeedFast diskSpeed = "fast"
)

const (
poolTypeData = "data"
poolTypeMetaData = "metadata"
storageTypeBlock = "block"
storageTypeFile = "file"
storageTypeObject = "object"
)

type knownDiskType struct {
speed diskSpeed
provisioner StorageClassProvisionerType
4 changes: 2 additions & 2 deletions controllers/storagecluster/cephfilesystem.go
@@ -31,7 +31,7 @@ func (r *StorageClusterReconciler) newCephFilesystemInstances(initStorageCluster
Spec: cephv1.FilesystemSpec{
MetadataPool: cephv1.NamedPoolSpec{
PoolSpec: cephv1.PoolSpec{
- Replicated: generateCephReplicatedSpec(initStorageCluster, "metadata"),
+ Replicated: generateCephReplicatedSpec(initStorageCluster, poolTypeMetaData, storageTypeFile),
FailureDomain: initStorageCluster.Status.FailureDomain,
}},
MetadataServer: cephv1.MetadataServerSpec{
@@ -288,7 +288,7 @@ func generateDefaultPoolSpec(sc *ocsv1.StorageCluster) cephv1.PoolSpec {
return cephv1.PoolSpec{
DeviceClass: sc.Status.DefaultCephDeviceClass,
EnableCrushUpdates: true,
- Replicated: generateCephReplicatedSpec(sc, "data"),
+ Replicated: generateCephReplicatedSpec(sc, poolTypeData, storageTypeFile),
FailureDomain: sc.Status.FailureDomain,
}
}
4 changes: 2 additions & 2 deletions controllers/storagecluster/cephobjectstores.go
@@ -172,13 +172,13 @@ func (r *StorageClusterReconciler) newCephObjectStoreInstances(initData *ocsv1.S
DeviceClass: initData.Status.DefaultCephDeviceClass,
EnableCrushUpdates: true,
FailureDomain: initData.Status.FailureDomain,
- Replicated: generateCephReplicatedSpec(initData, "data"),
+ Replicated: generateCephReplicatedSpec(initData, poolTypeData, storageTypeObject),
},
MetadataPool: cephv1.PoolSpec{
DeviceClass: initData.Status.DefaultCephDeviceClass,
EnableCrushUpdates: true,
FailureDomain: initData.Status.FailureDomain,
- Replicated: generateCephReplicatedSpec(initData, "metadata"),
+ Replicated: generateCephReplicatedSpec(initData, poolTypeMetaData, storageTypeObject),
},
Gateway: cephv1.GatewaySpec{
Port: 80,
29 changes: 25 additions & 4 deletions controllers/storagecluster/generate.go
@@ -136,19 +136,40 @@ func generateNameForCephRbdMirror(initData *ocsv1.StorageCluster) string {

// generateCephReplicatedSpec returns the ReplicatedSpec for the cephCluster
// based on the StorageCluster configuration
- func generateCephReplicatedSpec(initData *ocsv1.StorageCluster, poolType string) cephv1.ReplicatedSpec {
+ func generateCephReplicatedSpec(initData *ocsv1.StorageCluster, poolType string, storageType string) cephv1.ReplicatedSpec {
crs := cephv1.ReplicatedSpec{}

crs.Size = getCephPoolReplicatedSize(initData)
crs.ReplicasPerFailureDomain = uint(getReplicasPerFailureDomain(initData))
- //lint:ignore ST1017 required to compare it directly
- if "data" == poolType {
- crs.TargetSizeRatio = .49
+ if poolType == poolTypeData {
+ crs.TargetSizeRatio = getTargetSizeRatio(initData, storageType)
}

return crs
}

func getTargetSizeRatio(initData *ocsv1.StorageCluster, storageType string) float64 {
defaultTargetSizeRatio := 0.49
var definedRatio float64

switch storageType {
case storageTypeBlock:
definedRatio = initData.Spec.ManagedResources.CephBlockPools.PoolSpec.Replicated.TargetSizeRatio
@travisn (Contributor) commented:

Actually, the PoolSpec.TargetSizeRatio was deprecated in rook a long time ago, but we have kept it for backward compatibility. Instead, let's use the parameters:

Suggested change:
- definedRatio = initData.Spec.ManagedResources.CephBlockPools.PoolSpec.Replicated.TargetSizeRatio
+ definedRatio = initData.Spec.ManagedResources.CephBlockPools.PoolSpec.Parameters["target_size_ratio"]

Contributor Author replied:

Ack, @travisn. Even though the field was deprecated, there doesn't seem to be any comment or kubebuilder warning in rook suggesting that. I think that can be added to the rook APIs.

Contributor Author added:

@travisn Another question: if both Replicated.TargetSizeRatio and Parameters["target_size_ratio"] are specified, which one has higher priority?

case storageTypeFile:
definedRatio = initData.Spec.ManagedResources.CephFilesystems.DataPoolSpec.Replicated.TargetSizeRatio
case storageTypeObject:
definedRatio = initData.Spec.ManagedResources.CephObjectStores.DataPoolSpec.Replicated.TargetSizeRatio
default:
return defaultTargetSizeRatio // Return default for unexpected storageType
}

if definedRatio == 0.0 {
return defaultTargetSizeRatio // Apply default if not set
}

return definedRatio
}

// generateStorageQuotaName function generates a name for ClusterResourceQuota
func generateStorageQuotaName(storageClassName, quotaName string) string {
return fmt.Sprintf("%s-%s", storageClassName, quotaName)
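
To make the review thread above concrete, here is a hedged sketch of how the lookup could honor the suggested Parameters["target_size_ratio"] while still falling back to Replicated.TargetSizeRatio and the 0.49 default. This is not code from the PR: the helper name is hypothetical, and the priority order (parameters map first) is an assumption pending the author's open question to @travisn.

```go
package storagecluster

import (
	"strconv"

	cephv1 "github.com/rook/rook/pkg/apis/ceph.rook.io/v1"
)

// targetSizeRatioFromPoolSpec is a hypothetical helper that prefers the
// pool's parameters map (the non-deprecated knob in rook), then the typed
// Replicated.TargetSizeRatio field, and finally the historical default.
func targetSizeRatioFromPoolSpec(spec cephv1.PoolSpec) float64 {
	if raw, ok := spec.Parameters["target_size_ratio"]; ok {
		if ratio, err := strconv.ParseFloat(raw, 64); err == nil && ratio > 0 {
			return ratio
		}
	}
	if spec.Replicated.TargetSizeRatio > 0 {
		return spec.Replicated.TargetSizeRatio
	}
	return 0.49 // same default as getTargetSizeRatio above
}
```

getTargetSizeRatio's switch could then pass the block, file, or object pool spec into this helper, which would also encode the precedence explicitly instead of leaving it implicit.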