How can/should one apply normalization on the whole volume or at least bigger patches when using sliding window inference? #2887
Replies: 2 comments
-
Hi @neuronflow, thanks for your interest here.
-
You might have to code some test to ensure that background-only images are not normalized the way other images would be. If you normalize individual patches, the value range of a patch isn't necessarily similar to that of its neighbours when it is fed to the network for inference. I'd expect this to affect the results, since the network sees very different intensities for the same feature across neighbouring patches. Normalizing the input as a whole and then sampling patches from this normalized copy would give better and more predictable results, assuming you've trained your network on individually normalized patches.
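Roughly, a minimal sketch of the whole-volume approach, assuming a MONAI-style pipeline (the model, file path and ROI size below are placeholders, not anything specific to your setup): normalize the full image once, then let `sliding_window_inference` extract the patches from that already-normalized array.

```python
import torch
from monai.inferers import sliding_window_inference
from monai.transforms import (
    Compose,
    EnsureChannelFirstd,
    EnsureTyped,
    LoadImaged,
    NormalizeIntensityd,
)

# Placeholder network; swap in your trained model.
net = torch.nn.Identity()
net.eval()

# Normalize the *whole* volume once, before any patch is extracted.
pre_transforms = Compose([
    LoadImaged(keys="image"),
    EnsureChannelFirstd(keys="image"),
    # nonzero=True computes mean/std from non-zero voxels only, so large
    # background regions don't dominate the statistics.
    NormalizeIntensityd(keys="image", nonzero=True, channel_wise=True),
    EnsureTyped(keys="image"),
])

sample = pre_transforms({"image": "subject_001.nii.gz"})  # hypothetical path
volume = sample["image"].unsqueeze(0)  # add batch dim: (1, C, D, H, W)

with torch.no_grad():
    # Patches are cut from the already-normalized volume, so every patch
    # the network sees is on the same intensity scale as its neighbours.
    prediction = sliding_window_inference(
        inputs=volume,
        roi_size=(96, 96, 96),
        sw_batch_size=4,
        predictor=net,
        overlap=0.25,
    )
```

Whether this matches the normalization you used at training time still matters, as noted above, so it's worth validating both variants on a few cases.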
-
For inference I usually build an inference data loader. When applying sliding window inference to the whole volume, I sometimes run into the problem that the sampled patches contain only background. The result is that normalization (currently implemented through the inference transforms of said data loader) "amplifies" the background noise signal and the network predictions become tainted.
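To illustrate what I mean with toy numbers (plain NumPy, not my actual pipeline): a background-only patch contains nothing but faint noise, yet per-patch z-score normalization stretches that noise to the same scale as real foreground signal.

```python
import numpy as np

rng = np.random.default_rng(0)

# Background-only patch: faint acquisition noise around zero.
background_patch = rng.normal(loc=0.0, scale=1e-3, size=(64, 64, 64))

# Per-patch z-score normalization, as an inference transform would apply it.
normalized = (background_patch - background_patch.mean()) / background_patch.std()

print(background_patch.std())  # ~0.001: essentially an empty patch
print(normalized.std())        # 1.0: the noise now spans the full intensity range
```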
Obviously I could first generate a normalized version of my datasets and run inference on those, but is there a more elegant way to achieve this that avoids creating a normalized copy of the whole dataset?
Thanks for your assistance!
PS: This is somewhat related to: #2866