I have made several segmentation models using nnUNet and am incredibly pleased with how they work.
So far I have only used label-based training, not region-based training (see https://github.com/MIC-DKFZ/nnUNet/blob/master/documentation/region_based_training.md). I am slightly unsure about the aim of region-based training: will region-based training and inference support delineations that partly overlap, or only delineations that are subsets of each other? My current understanding is that it targets subsets, although I initially believed it was aimed at partly overlapping structures.
In my case, I have two regions, A and B, which can overlap but are not necessarily subsets of each other. I therefore assume the network in nnUNet will need at least three label levels (e.g. 1 = (A and not B), 2 = (B and not A), and 3 = (A and B)) for training. During data preparation I can convert the A and B delineations into these three levels myself, but will region-based training support this process in any way? Finally, I would like inference to produce delineations matching A and B, which may overlap in certain regions. Am I right that inference will "only" output the three levels (i.e. nnUNet assigns a single label per voxel), so that I need to recombine them in local post-processing, or can region-based training support this in some way?
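To make the label encoding I have in mind concrete, here is a minimal NumPy sketch (my own illustration, not nnUNet code) of mapping two overlapping binary masks A and B into three mutually exclusive label levels for training, and then recovering the possibly overlapping A and B delineations from a single-label-per-voxel prediction in post-processing:

```python
import numpy as np

# Hypothetical overlapping binary masks A and B (same shape as the image).
A = np.array([[1, 1, 0],
              [0, 1, 0],
              [0, 0, 0]], dtype=bool)
B = np.array([[0, 1, 1],
              [0, 1, 1],
              [0, 0, 0]], dtype=bool)

# Encode into mutually exclusive label levels for label-based training:
# 0 = background, 1 = (A and not B), 2 = (B and not A), 3 = (A and B).
labels = np.zeros(A.shape, dtype=np.uint8)
labels[A & ~B] = 1
labels[~A & B] = 2
labels[A & B] = 3

# Post-processing after inference: recover the (possibly overlapping)
# delineations A and B from the single-label segmentation map.
A_rec = np.isin(labels, [1, 3])
B_rec = np.isin(labels, [2, 3])

# The round trip is lossless: the recovered masks equal the originals.
assert np.array_equal(A_rec, A) and np.array_equal(B_rec, B)
```

The same mapping works in 3D, since the boolean indexing is shape-agnostic.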
Best,
Carsten