This repository has been archived by the owner on Mar 3, 2022. It is now read-only.
There are many cases where a cStor volume has to be expanded. For example, when its capacity is completely filled, the application pod may go into a CrashLoopBackOff state, depending on the liveness probe configured in the application. In another scenario, you may want to expand the capacity of the volume before putting more load on it, to ensure the application keeps running uninterrupted.
The following are the prerequisites and steps to expand a cStor volume.
Prerequisites
All associated cStor pool pods should be in the Running state. Verify using kubectl get pod -n <openebs_installed_namespace>
Disable any snapshot schedule running on this volume.
Ensure all CVRs associated with this cStor volume are Healthy. This can be checked using the following command.
kubectl get cvr -n openebs
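As a sketch, the Healthy check can also be scripted (this assumes the default openebs namespace and that the CVR STATUS column reads Healthy; <pv_name> is a placeholder for your PersistentVolume name):

```shell
# List the CVRs belonging to the volume; every line should show Healthy.
kubectl get cvr -n openebs | grep <pv_name>

# Scripted variant: print a warning if any matching CVR is not Healthy.
kubectl get cvr -n openebs | grep <pv_name> | grep -v Healthy \
  && echo "WARNING: found a CVR that is not Healthy"
```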
Verify that the number of user-created snapshots present in all associated pools is the same.
This can be done by exec'ing into each associated pool pod and listing the snapshots of the volume's dataset.
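A sketch of the comparison, assuming the pool pods expose the zfs CLI in a container named cstor-pool and that snapshot names contain the PV name (both assumptions; <pool_pod> and <pv_name> are placeholders):

```shell
# Exec into each associated pool pod in turn...
kubectl exec -it <pool_pod> -n openebs -c cstor-pool -- bash

# ...then, inside the pod, count the volume's snapshots.
# The count must be identical across all associated pools.
zfs list -t snapshot -o name | grep -c <pv_name>
```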
Overview
Once the above prerequisites are met, follow the steps below to expand the cStor volume.
Step 1: Update the size of the corresponding volume in all the associated pool pods.
First exec into the associated pool pod (for example, kubectl exec -it <pool_pod_name> -n openebs -- bash).
Then run zfs list and use the corresponding dataset name in the following command to get the current size of the volume.
zfs get volsize <datasetname>
Now update the volume size of the dataset using the following command.
zfs set volsize=<expanded_size> <datasetname>
Verify the size is reflected properly by using the following command.
zfs get volsize <datasetname>
Note: Repeat this step on all associated pool pods.
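Putting the per-pool resize together, a sketch of the commands run inside each associated pool pod (<datasetname> and <expanded_size> are placeholders; for an expansion, <expanded_size> must be larger than the current volsize):

```shell
# Inside each associated pool pod:
zfs get volsize <datasetname>                  # note the current size
zfs set volsize=<expanded_size> <datasetname>  # e.g. volsize=20G
zfs get volsize <datasetname>                  # confirm the new size
```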
Step 2: Update the LUN size in istgt.conf using the following procedure.
First exec into the cstor-istgt container, which runs inside the cStor target pod. The corresponding cStor target pod can be found using kubectl get pod -n openebs | grep <pv_name>
Then change to the configuration directory: cd /usr/local/etc/istgt/
Edit the istgt.conf file using the vi editor. If vi is not installed, install it with apt-get install vim -y. Then update the LUN0 Storage field under the [LogicalUnit1] section: verify the TargetName under that section, set the new size in the field below, and save the file.
LUN0 Storage <expanded_size> 32k
Sample output:
[LogicalUnit1]
TargetName pvc-803985e7-879e-11e9-836c-42010a8000b1
TargetAlias nicknamefor-pvc-803985e7-879e-11e9-836c-42010a8000b1
Mapping PortalGroup1 InitiatorGroup1
AuthMethod None
AuthGroup None
UseDigest Auto
ReadOnly No
ReplicationFactor 3
ConsistencyFactor 2
UnitType Disk
UnitOnline Yes
BlockLength 512
QueueDepth 32
Luworkers 6
UnitInquiry "OpenEBS" "iscsi" "0" "80648584-879e-11e9-836c-42010a8000b1"
PhysRecordLength 4096
LUN0 Storage 12G 32k # update the new size here and save it
LUN0 Option Unmap Disable
LUN0 Option WZero Disable
LUN0 Option ATS Disable
LUN0 Option XCOPY Disable
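Instead of hand-editing with vi, a sed one-liner can rewrite the size field. This is a sketch: it assumes the line keeps the LUN0 Storage <size> 32k shape shown above, and 20G is an example size. It is demonstrated on a stand-in copy here; inside the cstor-istgt container the file is /usr/local/etc/istgt/istgt.conf.

```shell
# Stand-in for the real config file, for demonstration only.
conf=istgt.conf
printf 'LUN0 Storage 12G 32k\n' > "$conf"

# Replace the size token on the LUN0 Storage line in place.
sed -i 's/^\(LUN0 Storage \)[^ ]*\( 32k\)$/\120G\2/' "$conf"

cat "$conf"
```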
Then kill the currently running istgt process, as follows.
Find the istgt process using ps -auxwww | grep istgt. The following is a sample output.
Here the istgt process PID is 7, so kill it with kill <istgt_pid>, in this case kill 7. This restarts the corresponding cStor target pod, and your session inside the target pod ends.
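The kill step can also be scripted; the [i] in the grep pattern keeps the grep process itself out of the match. A sketch, intended to run inside the cstor-istgt container:

```shell
# Find the istgt PID (first matching process) and terminate it.
istgt_pid=$(ps -auxwww | grep '[i]stgt' | awk '{print $2}' | head -n1)
if [ -n "$istgt_pid" ]; then
  kill "$istgt_pid"   # the cStor target pod restarts after this
else
  echo "istgt is not running here (run this inside the cstor-istgt container)"
fi
```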
Step 3: Go to the node where the application is running and check the current size of the volume using lsblk. Then rescan the iSCSI session using the following command:
sudo iscsiadm -m node -R
Step 4: Verify that the new size is reflected by running the lsblk command on the node where the application pod is running.
Step 5: Resize the filesystem in the same node using the following command.
sudo resize2fs /dev/<device>
For example, if your OpenEBS volume appears as /dev/sdc, use sudo resize2fs /dev/sdc.
Now you are almost done with the expansion.
Step 6: If the application is in CrashLoopBackOff state, try restarting the application pod, then verify the new size inside it. You can exec into the application pod and run df -h to check the size of the mount point.
Step 7: If everything is successful, edit/patch the respective cStorVolume and PersistentVolume (PV) resources with the updated size.
To get cstorvolume: kubectl get cstorvolume -n openebs
To get persistentvolume: kubectl get pv
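A sketch of the patch commands (the field paths are assumptions based on the cStorVolume v1alpha1 and core PV schemas; 20G/20Gi are example sizes and <pv_name> is a placeholder):

```shell
# Update the capacity recorded on the cStorVolume resource.
kubectl patch cstorvolume <pv_name> -n openebs --type merge \
  -p '{"spec":{"capacity":"20G"}}'

# Update the capacity recorded on the PersistentVolume.
kubectl patch pv <pv_name> --type merge \
  -p '{"spec":{"capacity":{"storage":"20Gi"}}}'
```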