Fewer slices than expected in the training data #2991
Unanswered
ramchandracheke asked this question in Q&A
Replies: 1 comment, 3 replies
-
from monai.transforms import (
    Compose, LoadImaged, AddChanneld, ScaleIntensityRanged,
    SpatialPadd, RandCropByPosNegLabeld, ToTensord,
)

train_transforms = Compose(
    [
        LoadImaged(keys=["image", "label"]),
        AddChanneld(keys=["image", "label"]),
        ScaleIntensityRanged(
            keys=["image"], a_min=-200, a_max=300,
            b_min=0.0, b_max=1.0, clip=True,
        ),
        # Pad any volume smaller than the crop size (e.g. fewer than 64 slices)
        # up to 128x128x64 before random cropping.
        SpatialPadd(keys=["image", "label"], spatial_size=(128, 128, 64), method="symmetric"),
        RandCropByPosNegLabeld(
            keys=["image", "label"],
            label_key="label",
            spatial_size=(128, 128, 64),
            pos=1,
            neg=1,
            num_samples=4,
            image_key="image",
            image_threshold=0,
        ),
        ToTensord(keys=["image", "label"]),
    ]
)
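The key step for volumes with fewer than 64 slices is the `SpatialPadd` transform, which symmetrically pads each spatial axis up to the target size before cropping. A minimal NumPy sketch of that padding behavior (the helper name `pad_to_size` is ours, for illustration; MONAI's actual transform also handles metadata and channel dimensions):

```python
import numpy as np

def pad_to_size(volume, target):
    # Symmetrically pad each axis up to the target size, splitting the
    # extra voxels as evenly as possible between the two sides.
    pads = []
    for dim, tgt in zip(volume.shape, target):
        extra = max(tgt - dim, 0)
        before = extra // 2
        pads.append((before, extra - before))
    return np.pad(volume, pads, mode="constant")

# A volume with only 40 slices is padded to 64 before cropping.
vol = np.zeros((128, 128, 40), dtype=np.float32)
padded = pad_to_size(vol, (128, 128, 64))
print(padded.shape)  # (128, 128, 64)
```

Axes already at or above the target size are left untouched, so the same transform is safe for samples that have 64 or more slices.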
-
Hi,
I am training a model with input size 128x128x64. However, a few samples in the training data have fewer than 64 slices. Can you please suggest how to handle this problem?
Thanks,
Ram