From 36ca312a83bd2ff24f8c42d6f2bdc438045541f8 Mon Sep 17 00:00:00 2001
From: Marta Ranzini
Date: Wed, 20 May 2020 11:16:16 +0100
Subject: [PATCH] action log on basic unet - needs update with ignite

---
 action_log.txt | 30 ++++++++++++++++++++++++++++++
 1 file changed, 30 insertions(+)
 create mode 100644 action_log.txt

diff --git a/action_log.txt b/action_log.txt
new file mode 100644
index 0000000..d0ab662
--- /dev/null
+++ b/action_log.txt
@@ -0,0 +1,30 @@
+1. Created a simple notebook to debug the data reading in MONAI
+/mnt/data/mranzini/Desktop/GIFT-Surg/FBS_Monai/basic_unet_monai/notebooks/data_reader_debug.ipynb
+This extracts the filenames from the splitting lists used for the Retraining_with_expanded dataset and then tests the application of resizing + random rotation and random flipping as data augmentation
+
+2. Prepared a basic unet_training.py script
+/mnt/data/mranzini/Desktop/GIFT-Surg/FBS_Monai/basic_unet_monai/python_code/unet_training.py
+This is based on the 3D UNet segmentation examples available in MONAI. Changes had to be made to extract 2D patches from 3D images (e.g. creation of a Squeeze transform, modification of the sliding window inference). Originally the training loop was written in plain PyTorch, as it did not seem possible to perform whole-volume validation with ignite. I was actually wrong, it is possible, so eventually it would be good to refactor the code with ignite --> see segmentation_3D_ignite/unet_evaluation_dict.py
+
+3. Defined a training config file with basic inputs
+/mnt/data/mranzini/Desktop/GIFT-Surg/FBS_Monai/basic_unet_monai/config/basic_unet_config.yml
+
+4. Ran training (using the monai environment)
+NOTE: I uninstalled monai (pip uninstall monai) because I want to link the source code directly, so I can keep it up to date instead of waiting for new pip releases.
+This is achieved by adding:
+sys.path.append("/mnt/data/mranzini/Desktop/GIFT-Surg/FBS_Monai/MONAI")
+To run the training:
+python /mnt/data/mranzini/Desktop/GIFT-Surg/FBS_Monai/basic_unet_monai/python_code/unet_training.py --config /mnt/data/mranzini/Desktop/GIFT-Surg/FBS_Monai/basic_unet_monai/config/basic_unet_config.yml
+
+5. Prepared inference scripts:
+5a. Generated the lists for inference using:
+/mnt/data/mranzini/Desktop/GIFT-Surg/Retraining_with_expanded_dataset/bash/01b_list_inference_files.sh
+5b. Prepared the inference code:
+python /mnt/data/mranzini/Desktop/GIFT-Surg/FBS_Monai/basic_unet_monai/python_code/unet_testing.py
+It largely follows the validation code, but I had to apply the resize within the inference loop instead of as an initial transform, otherwise it is not possible to map the images back to their original size
+5c. Added the same post-processing as Guotai's network (i.e. binary closing and extraction of the largest connected component)
+5d. Prepared the config file for inference:
+/mnt/data/mranzini/Desktop/GIFT-Surg/FBS_Monai/basic_unet_monai/config/basic_unet_config_inference.yml
+
+6. Ran inference:
+python /mnt/data/mranzini/Desktop/GIFT-Surg/FBS_Monai/basic_unet_monai/python_code/unet_testing.py --config /mnt/data/mranzini/Desktop/GIFT-Surg/FBS_Monai/basic_unet_monai/config/basic_unet_inference_config.yml
\ No newline at end of file
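
For reference, a minimal sketch of the resizing + random rotation + random flipping augmentation tested in step 1, written with MONAI dictionary transforms. The transform names are real MONAI ones, but the keys, spatial size and probabilities are illustrative assumptions rather than values taken from the notebook:

from monai.transforms import (
    Compose, LoadImaged, EnsureChannelFirstd, Resized, RandRotated, RandFlipd, ToTensord
)

train_transforms = Compose([
    LoadImaged(keys=["image", "label"]),
    EnsureChannelFirstd(keys=["image", "label"]),
    # resize image and label to a common working resolution (placeholder size)
    Resized(keys=["image", "label"], spatial_size=(96, 96, 96), mode=("area", "nearest")),
    # random rotation of roughly +/- 15 degrees (range_x is in radians)
    RandRotated(keys=["image", "label"], range_x=0.26, prob=0.5, mode=("bilinear", "nearest")),
    # random flip along the first spatial axis
    RandFlipd(keys=["image", "label"], spatial_axis=0, prob=0.5),
    ToTensord(keys=["image", "label"]),
])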
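
One possible way to wire up the two ideas mentioned in step 2, i.e. squeezing out the singleton dimension so a 2D network can train on slices extracted from 3D volumes, and running whole-volume validation through sliding window inference. This is a sketch under those assumptions, not the actual unet_training.py code; shapes, channels and roi_size are placeholders:

import torch
from monai.inferers import sliding_window_inference
from monai.networks.nets import UNet
from monai.transforms import SqueezeDimd

# training side: after sampling a (C, H, W, 1) patch from a 3D volume,
# drop the trailing singleton axis so a 2D network can consume it
squeeze = SqueezeDimd(keys=["image", "label"], dim=3)

net = UNet(spatial_dims=2, in_channels=1, out_channels=2,
           channels=(16, 32, 64, 128), strides=(2, 2, 2))

# validation side: run the 2D network slice by slice over the whole 3D volume
# by giving sliding_window_inference a roi_size with a singleton third dimension
val_volume = torch.rand(1, 1, 128, 128, 64)  # (B, C, H, W, D) dummy data
with torch.no_grad():
    pred = sliding_window_inference(
        inputs=val_volume,
        roi_size=(128, 128, 1),
        sw_batch_size=4,
        predictor=lambda x: net(x.squeeze(-1)).unsqueeze(-1),  # adapt the 2D net to 3D windows
    )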
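
A similar sketch for the resize-inside-the-loop idea from step 5b: keep the original spatial size around, resize the input only when feeding the network, and resize the prediction back afterwards. Shapes and the working resolution are assumptions; the real unet_testing.py may do this differently:

import torch
from monai.transforms import Resize

# image loaded without any resizing, so its original spatial shape is still known
original_image = torch.rand(1, 140, 140, 90)        # (C, H, W, D) dummy data
original_size = tuple(original_image.shape[1:])

# resize to the network's working resolution only inside the inference loop
resize_in = Resize(spatial_size=(96, 96, 96), mode="area")
net_input = resize_in(original_image).unsqueeze(0)  # add the batch dimension

# ... run the network here; dummy prediction of shape (B, num_classes, 96, 96, 96) ...
prediction = torch.rand(1, 2, 96, 96, 96)

# map the discrete prediction back to the original image size
resize_back = Resize(spatial_size=original_size, mode="nearest")
prediction_original_space = resize_back(prediction.argmax(dim=1).float())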
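
Finally, a small sketch of the kind of post-processing described in step 5c (binary closing followed by keeping the largest connected component), written here with scipy; it is meant to mirror what Guotai's pipeline does, not to reproduce his code:

import numpy as np
from scipy import ndimage

def postprocess(binary_seg: np.ndarray) -> np.ndarray:
    # binary closing to smooth the mask and fill small holes
    closed = ndimage.binary_closing(binary_seg.astype(bool))
    # keep only the largest connected foreground component
    labelled, n_components = ndimage.label(closed)
    if n_components == 0:
        return closed.astype(np.uint8)
    sizes = np.bincount(labelled.ravel())
    sizes[0] = 0  # ignore the background label
    return (labelled == sizes.argmax()).astype(np.uint8)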