Reproduction of BraTS results #36
Thanks for reaching out. Genesis Chest CT was pre-trained on sub-volumes of 64x64x32, and we recommend keeping the same input shape in target tasks as well (we observed degraded performance when using different shapes, e.g., 64x64x64 or 128x128x64). Also, for MRI images, normalization was done at the image level.
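The two points above (64x64x32 sub-volume inputs and image-level normalization) can be sketched as follows; `normalize_image` and `extract_subvolumes` are hypothetical helper names, not functions from the Models Genesis codebase, and random cropping is just one reasonable way to obtain sub-volumes:

```python
import numpy as np

def normalize_image(volume):
    """Min-max normalize at the image level, i.e. over the whole volume."""
    vmin, vmax = volume.min(), volume.max()
    return (volume - vmin) / (vmax - vmin + 1e-8)

def extract_subvolumes(volume, shape=(64, 64, 32), n=4, seed=0):
    """Randomly crop n sub-volumes matching the pre-training input shape."""
    rng = np.random.default_rng(seed)
    crops = []
    for _ in range(n):
        x = rng.integers(0, volume.shape[0] - shape[0] + 1)
        y = rng.integers(0, volume.shape[1] - shape[1] + 1)
        z = rng.integers(0, volume.shape[2] - shape[2] + 1)
        crops.append(volume[x:x+shape[0], y:y+shape[1], z:z+shape[2]])
    return np.stack(crops)

# Normalize the whole image first, then crop sub-volumes from it.
vol = normalize_image(np.random.rand(160, 192, 96).astype(np.float32) * 1000)
patches = extract_subvolumes(vol)  # shape (4, 64, 64, 32)
```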
We're cleaning up the target task scripts; they will be released as Jupyter notebooks shortly. Hope it helps. Zongwei
Hi @MrGiovanni! Thanks for your help.
I would say sigmoid and softmax make no major difference if you use them correctly. Usually, for binary or multi-label classification/segmentation, I would use the sigmoid function in the last layer. You can find more explanation about sigmoid and softmax online, e.g., https://medium.com/arteos-ai/the-differences-between-sigmoid-and-softmax-activation-function-12adee8cf322#:~:text=Softmax%20is%20used%20for%20multi,in%20the%20Logistic%20Regression%20model.&text=This%20is%20similar%20to%20the,together%20all%20of%20the%20values. You might also want to take a look at our recently released target task example. Thank you,
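A minimal NumPy sketch of the distinction: sigmoid treats each output as an independent probability (suitable for binary or multi-label problems), while softmax makes the outputs compete and sum to 1 (suitable for mutually exclusive classes):

```python
import numpy as np

logits = np.array([2.0, -1.0, 0.5])

# Sigmoid: each value is an independent probability in (0, 1);
# the values need not sum to 1.
sigmoid = 1.0 / (1.0 + np.exp(-logits))

# Softmax: a distribution over mutually exclusive classes;
# the values always sum to 1.
softmax = np.exp(logits) / np.exp(logits).sum()
```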
Hi !
First, thank you for making your incredible work available. Your results are outstanding and the code is pretty easy to understand.
In order to use your work in another application I am investigating, I first wanted to replicate your results on the BraTS dataset.
I have downloaded and preprocessed the data: resizing the images to 64x64x64 and normalizing between 0 and 1.
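For reference, the preprocessing described above might look like the sketch below; `resize_volume` and `normalize` are hypothetical names, and nearest-neighbour index mapping is used only to keep the example dependency-free (an interpolating resize such as `scipy.ndimage.zoom` would be the more common choice):

```python
import numpy as np

def resize_volume(volume, out_shape=(64, 64, 64)):
    """Nearest-neighbour resize by mapping output indices to input indices."""
    idx = [np.clip((np.arange(o) * s / o).astype(int), 0, s - 1)
           for o, s in zip(out_shape, volume.shape)]
    return volume[np.ix_(*idx)]

def normalize(volume):
    """Scale intensities to the [0, 1] range."""
    return (volume - volume.min()) / (volume.max() - volume.min() + 1e-8)

# A BraTS-like volume is 240x240x155; resize then normalize.
vol = np.random.rand(240, 240, 155).astype(np.float32) * 1000
pre = normalize(resize_volume(vol))  # shape (64, 64, 64), values in [0, 1]
```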
However, when finetuning Genesis CT I don't manage to get anything close to your results.
Is there something I am missing? Should the whole image be fed to the network, or should it be split into patches? Also, in the code example given in the keras folder, num_classes is set to 2 for segmentation, which I am not sure I understand, as the final convolution will then produce a (2, 64, 64, 64)-sized mask.
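On the (2, 64, 64, 64) output: with num_classes = 2 the two channels are typically complementary background/foreground probability maps, so a single binary mask is recovered by an argmax over the channel axis. A small NumPy sketch of this interpretation (the logits here are random stand-ins for a network's final-convolution output, channels-first):

```python
import numpy as np

# Stand-in logits from a final convolution with num_classes = 2,
# channels-first layout: (num_classes, D, H, W).
logits = np.random.randn(2, 64, 64, 64)

# Channel-wise softmax: at every voxel the two channels sum to 1,
# acting as background and foreground probabilities.
e = np.exp(logits - logits.max(axis=0, keepdims=True))
probs = e / e.sum(axis=0, keepdims=True)

# Argmax over the channel axis collapses the two maps into one binary mask.
mask = probs.argmax(axis=0)  # shape (64, 64, 64), values in {0, 1}
```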
Thank you for your help,
Regards,
Camille