This is the official code for 'One-shot Weakly-Supervised Segmentation in Medical Images'.
Deep neural networks typically require a large number of accurate annotations to achieve outstanding performance in medical image segmentation. One-shot learning and weakly-supervised learning are promising directions for reducing labeling effort: the former learns a new class from only one annotated image, while the latter uses coarse labels instead of precise ones. In this work, we present a framework for 3D medical image segmentation under a combined one-shot and weakly-supervised setting. First, a propagation-reconstruction network is proposed to propagate scribbles from one annotated volume to unlabeled 3D images, based on the assumption that anatomical patterns are similar across human bodies. A multi-level similarity denoising module then refines the scribbles using embeddings from the anatomical level down to the pixel level. After expanding the scribbles into pseudo masks, we observe that misclassified voxels occur mainly in border regions and propose extracting self-support prototypes to refine these regions specifically. Based on these weakly-supervised segmentation results, we further train a segmentation model for the new class with a noisy-label training strategy. Experiments on one abdominal and one head-and-neck CT dataset show that the proposed method obtains significant improvement over state-of-the-art methods and performs robustly even under severe class imbalance and low contrast.
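As a rough illustration of the similarity-based denoising idea described above (a hypothetical NumPy sketch, not the repo's implementation; the function name, shapes, and threshold are assumptions): a propagated scribble voxel is kept only if its embedding is sufficiently similar to a prototype built from all scribble voxels.

```python
import numpy as np

def denoise_scribbles(embeddings, scribble_mask, threshold=0.5):
    """Hypothetical sketch: keep scribble voxels whose embedding has high
    cosine similarity to the scribble prototype (mean embedding).

    embeddings:    float array of shape (D, H, W, C)
    scribble_mask: bool array of shape (D, H, W)
    """
    feats = embeddings[scribble_mask]                  # (n, C) scribble features
    proto = feats.mean(axis=0)                         # prototype embedding
    proto = proto / (np.linalg.norm(proto) + 1e-8)
    norms = np.linalg.norm(feats, axis=1, keepdims=True) + 1e-8
    sims = (feats / norms) @ proto                     # cosine similarity per voxel
    refined = np.zeros_like(scribble_mask)
    refined[scribble_mask] = sims >= threshold         # drop dissimilar voxels
    return refined
```

The same prototype idea can be applied at several embedding levels (anatomical to pixel), keeping only voxels that pass at every level.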
Requirements: PyTorch >= 1.4, SimpleITK >= 1.2, scipy >= 1.3.1, nibabel >= 2.5.0, GeodisTK, and some other common packages.
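For convenience, the version constraints above can be captured in a `requirements.txt` (a sketch; the repo may not ship this file, and "common packages" such as numpy are left to pip's dependency resolution):

```
torch>=1.4
SimpleITK>=1.2
scipy>=1.3.1
nibabel>=2.5.0
GeodisTK
```

Install with `pip install -r requirements.txt`.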
Download the processed StructSeg Task 1 dataset (organ-at-risk segmentation from head & neck CT scans) from BaiDu Yun or Google Drive into data/ and unzip it. For TCIA-Pancreas, please cite the original paper (DeepOrgan: Multi-level Deep Convolutional Networks for Automated Pancreas Segmentation).
Prepare your data in data/Your_Data_Name/. The expected layout is:
```
data/Your_Data_Name/
├── train
│   ├── 1
│   │   ├── rimage.nii.gz
│   │   └── rlabel.nii.gz
│   ├── 2
│   ├── ...
├── valid
│   ├── n
│   │   ├── rimage.nii.gz
│   │   └── rlabel.nii.gz
│   ├── ...
└── test
    ├── N
    │   ├── rimage.nii.gz
    │   └── rlabel.nii.gz
    ├── ...
```
You can also customize the names of your images and labels; just record their paths in the corresponding txt files in config/data/Your_Data_Name/. See the files in config/data/TCIA/ for an example.
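A small helper like the following could generate those path-list files (a sketch only; the function name is hypothetical, and the one-path-per-line format is an assumption — check config/data/TCIA/ for the exact convention the repo expects):

```python
import os

def write_path_lists(data_root, config_dir, splits=("train", "valid", "test")):
    """Write one txt file per split listing the image paths found under
    data_root/<split>/<case>/. Assumes images are named rimage.nii.gz and
    that the config txt format is one path per line (verify against
    config/data/TCIA/ before use)."""
    os.makedirs(config_dir, exist_ok=True)
    for split in splits:
        split_dir = os.path.join(data_root, split)
        lines = []
        for case in sorted(os.listdir(split_dir)):
            case_dir = os.path.join(split_dir, case)
            if os.path.isdir(case_dir):
                lines.append(os.path.join(case_dir, "rimage.nii.gz"))
        with open(os.path.join(config_dir, split + ".txt"), "w") as f:
            f.write("\n".join(lines) + "\n")
```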
The pretrained models for PRNet and UNet are available here. Place them in weights/.
- To train the PRNet, run `bash train_prnet.sh`.
- To generate coarse segmentations, run `bash test_coarseg_seg.sh`. This produces a file named `coarseg.nii.gz` in each scan folder.
- To train the final segmentation model with the noisy-label strategy, run `bash train_plc.sh`.
- To generate the final fine segmentations, run `bash test_fine_seg.sh`. This produces a file named `fineseg.nii.gz` in each scan folder.