
Self-supervised learning for acute ischemic stroke final infarct lesion segmentation in Non-Contrast CT

This repository contains all the code developed to obtain the Master of Science degree from the Erasmus Mundus Joint Master's Programme in Medical Imaging and Applications (MAIA), in a project conducted jointly with icometrix.

The full thesis manuscript can be found here.

Main contributors:

  • Joaquin Oscar Seia
  • Ezequiel de la Rosa
  • Diana Sima
  • David Robben

General description

Copy thesis abstract

Repository structure

This repository is organized into the dataset cleaning scripts (data/), the preprocessing pipeline and its configuration files (preprocessing/), and the adapted nnUNet code, including the SSL pretraining scripts (nnUNet/).

Using the code

In this repository we provide all the code used to train the models. However, a great part of this thesis involved the use of a private dataset (icoAIS) and of two datasets that require authorization to be downloaded and used (CENTER-TBI, APIS). As a consequence, the results are not fully reproducible: neither the data nor the pretrained weights are shared.

To use the code provided here on one of the publicly available datasets, or on data of your own, follow these steps:

  • Set up the environments
  • Download and reorganize the image files
  • Preprocess the data
  • Generate the nnUNet datasets
  • Train nnUNet's encoder with SSL
  • Train nnUNet in supervised fashion

Set up the environments

(The usage of anaconda environments manager is assumed)

Two environments will be necessary: one for preprocessing the data, and another for running the SSL pretraining and the supervised training of nnUNet. This split is needed because of incompatible package versions between nnUNet and some of the models/code used during preprocessing.

First, update conda and change the solver to libmamba for faster solving of requirements:

conda update -n base conda &&
conda install -n base conda-libmamba-solver &&
conda config --set solver libmamba &&
source ~/anaconda3/bin/activate

Create stroke environment and install requirements:

conda create -n stroke python==3.9 anaconda &&
conda activate stroke &&
conda install pytorch==1.13.1 torchvision==0.14.1 torchaudio==0.13.1 pytorch-cuda=11.6 -c pytorch -c nvidia &&
pip install -r requirements.txt

Create nnunet_gpu environment and install requirements:

conda create -n nnunet_gpu python==3.9 anaconda &&
conda activate nnunet_gpu &&
cd nnUNet &&
pip install -e .

Download and reorganize the image files

Currently, the only dataset that can be freely downloaded is AISD.

Once you have access to the data, the file names and formats will be standardized and the corresponding dataset csv generated by running:

export PYTHONPATH="${PYTHONPATH}:[<PATH_TO_THIS_PROJECT>]"
python data/clean_datasets.py -sdp '<SOURCE_DATA_PATH>' -bdp '<BIDS_DATA_PATH>'
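The actual renaming logic lives in data/clean_datasets.py. As a rough illustration only (the function and the BIDS-style naming pattern below are hypothetical, not the repository's actual API), standardizing raw file names and collecting csv rows might look like:

```python
import re
from pathlib import Path


def bids_name(subject_id: str, modality: str = "ct") -> str:
    """Build a BIDS-style file name for one subject (hypothetical scheme)."""
    # Keep only alphanumeric characters in the subject label
    label = re.sub(r"[^A-Za-z0-9]", "", subject_id)
    return f"sub-{label}_{modality}.nii.gz"


def standardize(source_dir: Path, bids_dir: Path) -> list[str]:
    """Map every NIfTI volume under source_dir to a standardized name."""
    rows = []
    for src in sorted(source_dir.glob("*.nii.gz")):
        dst = bids_dir / bids_name(src.name.split(".")[0])
        # src.rename(dst)  # uncomment to actually move the files
        rows.append(f"{src.name},{dst.name}")  # one csv row per case
    return rows
```

The returned rows would then be written to the dataset csv that the later preprocessing stages consume.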

The code to process the APIS and CENTER-TBI data is also provided, in case you get access to them.

Preprocess the data

Once the data is organized in the standard format, the preprocessing can be run.

For each dataset, a configuration file needs to be defined; examples are provided for the publicly available datasets in preprocessing/cfg_files.
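As a hedged sketch of what such a file contains (the field names below are illustrative only; check the examples in preprocessing/cfg_files for the real schema), a dataset configuration might look like:

```yaml
# Hypothetical preprocessing configuration; field names are illustrative,
# see preprocessing/cfg_files for the actual schema.
dataset_name: aisd
bids_data_path: /data/aisd_bids
output_path: /data/aisd_preprocessed
steps:
  - skull_stripping
  - registration        # performed with elastix
  - intensity_clipping
```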

Some stages of the preprocessing require additional tools, most notably elastix for registration.

Once everything is downloaded and elastix is working, the preprocessing can be run with:

python 'preprocessing/preprocess_dataset.py' -ppcfg 'preprocessing/cfg_files/preprocessing_cfg_aisd.yml'

Generate the nnUNet datasets

This can be done by following the nnUNet dataset Jupyter notebook. Both the SSL and the supervised datasets should be generated.
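nnUNet expects each dataset folder to ship a dataset.json describing modalities, labels, and the training cases. As a sketch of that step (the exact fields depend on the nnUNet version you use; the names and paths below follow the MSD-style layout of nnUNet v1 and are assumptions, so check the nnUNet documentation), generating one could look like:

```python
import json
from pathlib import Path


def make_dataset_json(out_dir: Path, cases: list[str]) -> dict:
    """Write a minimal MSD-style dataset.json (nnUNet v1 layout; verify the
    exact fields against the nnUNet docs for your version)."""
    meta = {
        "name": "StrokeNCCT",
        "description": "AIS final infarct segmentation in NCCT",
        "tensorImageSize": "3D",
        "modality": {"0": "CT"},
        "labels": {"0": "background", "1": "infarct"},
        "numTraining": len(cases),
        "training": [
            {"image": f"./imagesTr/{c}.nii.gz", "label": f"./labelsTr/{c}.nii.gz"}
            for c in cases
        ],
        "test": [],
    }
    (out_dir / "dataset.json").write_text(json.dumps(meta, indent=2))
    return meta
```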

Train nnUNet's encoder with SSL

First, adapt the configuration file according to the SSL experiment you want to run.

Then the SSL training can be launched by running:

bash nnUnet/ssl/run_ssl.sh

Train nnUNet in supervised fashion

This can be done by adapting the example script accordingly and running:

bash nnUnet/train_nnunet_example.sh

Once the model is trained, inference can be run with nnUNet's terminal command (check the nnUNet documentation).
