Is it currently possible to use the MONAI transforms RandSpatialCrop or RandCropByPosNegLabel to train a diffusion model from MONAI GenerativeModels? I understand that one can train with these transforms on the training set (so we don't have to downsample the images, which degrades image features), but how would one then apply the diffusion model during validation/inference? And which validation metric should be tracked (does MSE still make sense if I use the RandCrop transforms on the validation set, given that the crop taken from the same image will differ between epochs)?
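A minimal sketch of the kind of patch-based training pipeline being asked about, using the dictionary variants of the transforms; the file paths, 3D single-channel NIfTI input, and 64×64×64 patch size are assumptions for illustration, not details from this thread:

```python
from monai.data import DataLoader, Dataset
from monai.transforms import (
    Compose,
    EnsureChannelFirstd,
    LoadImaged,
    RandSpatialCropd,
    ScaleIntensityd,
)

# Hypothetical file list; replace with your own volumes.
train_files = [{"image": "subject_001.nii.gz"}, {"image": "subject_002.nii.gz"}]

train_transforms = Compose(
    [
        LoadImaged(keys=["image"]),
        EnsureChannelFirstd(keys=["image"]),
        ScaleIntensityd(keys=["image"], minv=0.0, maxv=1.0),
        # One random 64^3 patch per volume per epoch; the diffusion model only ever sees patches.
        RandSpatialCropd(keys=["image"], roi_size=(64, 64, 64), random_size=False),
    ]
)

train_ds = Dataset(data=train_files, transform=train_transforms)
train_loader = DataLoader(train_ds, batch_size=4, shuffle=True, num_workers=4)
```

A validation loader can be built the same way, with the caveat raised in the question: the random crop changes every epoch, so any patch-wise metric is computed on a different sample each time unless the crop is made deterministic (e.g. a fixed center crop) for validation.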
Replies: 1 comment

Hi, yes, this is possible, but samples of a whole volume won't be coherent, as they'll be lots of patches stitched together. People do it if they're interested in partial noising and reconstruction, e.g. for anomaly detection, see here.
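A rough sketch of the partial-noising-and-reconstruction idea mentioned in the reply (assumptions, not code from this thread): it uses MONAI GenerativeModels' DiffusionModelUNet and DDPMScheduler, a random tensor standing in for a cropped validation patch, and an illustrative noise level; the network hyper-parameters and checkpoint path are placeholders.

```python
import torch
from generative.networks.nets import DiffusionModelUNet
from generative.networks.schedulers import DDPMScheduler

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Placeholder network config; in practice this is the model you trained on patches.
model = DiffusionModelUNet(
    spatial_dims=3, in_channels=1, out_channels=1,
    num_channels=(64, 128, 256), attention_levels=(False, False, True),
    num_res_blocks=1, num_head_channels=64,
).to(device)
# model.load_state_dict(torch.load("trained_weights.pt"))  # hypothetical checkpoint

scheduler = DDPMScheduler(num_train_timesteps=1000)

noise_level = 400  # how far to noise the input: a hyper-parameter, not a recommendation
patch = torch.randn(1, 1, 64, 64, 64, device=device)  # stand-in for a cropped validation patch

# Partially noise the patch up to `noise_level`...
noise = torch.randn_like(patch)
timesteps = torch.full((patch.shape[0],), noise_level, device=device, dtype=torch.long)
noisy = scheduler.add_noise(original_samples=patch, noise=noise, timesteps=timesteps)

# ...then denoise step by step back to t=0 to obtain a reconstruction.
model.eval()
with torch.no_grad():
    current = noisy
    for t in range(noise_level, -1, -1):
        t_batch = torch.full((patch.shape[0],), t, device=device, dtype=torch.long)
        model_output = model(current, timesteps=t_batch)
        # step() returns (previous sample, predicted x0); keep the previous sample.
        current, _ = scheduler.step(model_output, t, current)

# Voxel-wise reconstruction error as a simple per-patch validation/anomaly signal.
recon_error = torch.abs(current - patch)
```

Tracking this reconstruction error (or plain MSE between reconstruction and input) on fixed validation patches is one way to get a comparable number across epochs, since it does not require generating a coherent whole volume.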