This folder contains code for applying PIIP on semantic segmentation, developed on top of MMSegmentation v0.27.0.
The released model weights are provided in the parent folder.
Please refer to the installation instructions of the object detection codebase. Then link the `ops` and `pretrained` directories to this folder:

```shell
ln -s ../mmdetection/ops .
ln -s ../mmdetection/pretrained .
```
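Before launching training, it can save time to confirm the links resolve. A small convenience check (not part of the repo, just a sketch using the standard library):

```python
from pathlib import Path

def check_links(root: str, names=("ops", "pretrained")) -> dict:
    """Return whether each expected directory/symlink exists and resolves under root."""
    root_path = Path(root)
    # Path.exists() follows symlinks, so a dangling link reports False.
    return {name: (root_path / name).exists() for name in names}

print(check_links("."))
```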
Note: the core model code is under `mmseg/models/backbones/`.
To train PIIP-H6B UperNet on ADE20K on a single node with 8 GPUs for 80k iterations, run:
```shell
sh tools/dist_train.sh configs/piip/2branch/upernet_internvit_h6b_256_512_80k_ade20k_bs16_lr4e-5.py 8
# or manage jobs with slurm
GPUS=8 sh tools/slurm_train.sh <partition> <job-name> configs/piip/2branch/upernet_internvit_h6b_256_512_80k_ade20k_bs16_lr4e-5.py
```
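For reference, MMSegmentation's 80k schedules typically decay the learning rate with a polynomial (`poly`) policy. A minimal sketch of that rule, assuming power 1.0 and a minimum LR of 0 (check the actual config for the values used here):

```python
def poly_lr(base_lr: float, cur_iter: int, max_iter: int,
            power: float = 1.0, min_lr: float = 0.0) -> float:
    """Polynomial LR decay: base_lr scaled by (1 - t/T)^power, floored at min_lr."""
    coeff = (1 - cur_iter / max_iter) ** power
    return (base_lr - min_lr) * coeff + min_lr

# With the 4e-5 base LR from the config name on an 80k-iteration schedule:
print(poly_lr(4e-5, 0, 80000))      # start of training
print(poly_lr(4e-5, 40000, 80000))  # halfway through
```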
To evaluate PIIP-H6B UperNet on ADE20K on a single node with a single GPU:

```shell
# w/ deepspeed
python tools/test.py configs/piip/2branch/upernet_internvit_h6b_256_512_80k_ade20k_bs16_lr4e-5.py work_dirs/upernet_internvit_h6b_256_512_80k_ade20k_bs16_lr4e-5/iter_80000/global_step80000 --eval mIoU
# w/ a deepspeed-saved checkpoint, with `deepspeed=False` set in the configuration file
python tools/test.py configs/piip/2branch/upernet_internvit_h6b_256_512_80k_ade20k_bs16_lr4e-5.py work_dirs/upernet_internvit_h6b_256_512_80k_ade20k_bs16_lr4e-5/iter_80000/global_step80000/mp_rank_00_model_states.pt --eval mIoU
# w/o deepspeed
python tools/test.py configs/piip/2branch/upernet_internvit_h6b_256_512_80k_ade20k_bs16_lr4e-5.py work_dirs/upernet_internvit_h6b_256_512_80k_ade20k_bs16_lr4e-5/upernet_internvit_h6b_256_512_80k_ade20k_bs16_lr4e-5.pth --eval mIoU
```

To evaluate on a single node with 8 GPUs:

```shell
# w/ deepspeed
sh tools/dist_test.sh configs/piip/2branch/upernet_internvit_h6b_256_512_80k_ade20k_bs16_lr4e-5.py work_dirs/upernet_internvit_h6b_256_512_80k_ade20k_bs16_lr4e-5/iter_80000/global_step80000 8 --eval mIoU
# w/ a deepspeed-saved checkpoint, with `deepspeed=False` set in the configuration file
sh tools/dist_test.sh configs/piip/2branch/upernet_internvit_h6b_256_512_80k_ade20k_bs16_lr4e-5.py work_dirs/upernet_internvit_h6b_256_512_80k_ade20k_bs16_lr4e-5/iter_80000/global_step80000/mp_rank_00_model_states.pt 8 --eval mIoU
# w/o deepspeed
sh tools/dist_test.sh configs/piip/2branch/upernet_internvit_h6b_256_512_80k_ade20k_bs16_lr4e-5.py work_dirs/upernet_internvit_h6b_256_512_80k_ade20k_bs16_lr4e-5/upernet_internvit_h6b_256_512_80k_ade20k_bs16_lr4e-5.pth 8 --eval mIoU
```
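The `--eval mIoU` flag reports mean intersection-over-union on the validation set. As a reminder of what that metric computes, here is a minimal, self-contained sketch (an illustration, not the MMSegmentation implementation):

```python
def mean_iou(pred, gt, num_classes, ignore_index=255):
    """Per-class IoU = TP / (TP + FP + FN), averaged over classes seen in pred or gt.

    Pixels whose ground-truth label equals ignore_index are excluded.
    """
    ious = []
    for c in range(num_classes):
        tp = sum(1 for p, g in zip(pred, gt) if g != ignore_index and p == c and g == c)
        fp = sum(1 for p, g in zip(pred, gt) if g != ignore_index and p == c and g != c)
        fn = sum(1 for p, g in zip(pred, gt) if g != ignore_index and p != c and g == c)
        if tp + fp + fn > 0:
            ious.append(tp / (tp + fp + fn))
    return sum(ious) / len(ious) if ious else 0.0

# Six pixels, two classes: each class has 2 TP, 1 FP, 1 FN, so IoU = 0.5 per class.
print(mean_iou([0, 0, 1, 1, 0, 1], [0, 0, 1, 1, 1, 0], num_classes=2))  # 0.5
```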
First download the pretrained checkpoints from here.
To visualize segmentation results with Gradio (recommended; faster, since the model is loaded only once):

```shell
python visualize_seg_gradio.py --config_file configs/piip/2branch/upernet_internvit_h6b_512_512_80k_ade20k_bs16_lr4e-5.py --checkpoint_file work_dirs/upernet_internvit_h6b_512_512_80k_ade20k_bs16_lr4e-5/upernet_internvit_h6b_512_512_80k_ade20k_bs16_lr4e-5.pth
```

To visualize from the command line:

```shell
python visualize_seg.py --config_file configs/piip/2branch/upernet_internvit_h6b_512_512_80k_ade20k_bs16_lr4e-5.py --checkpoint_file work_dirs/upernet_internvit_h6b_512_512_80k_ade20k_bs16_lr4e-5/upernet_internvit_h6b_512_512_80k_ade20k_bs16_lr4e-5.pth --img_path demo/demo.png --out_path visualization.jpg
```
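Conceptually, rendering a segmentation result boils down to mapping each predicted class id to a palette color and alpha-blending that color map over the input image. A toy, dependency-free sketch of that idea (nested lists stand in for real image arrays; the actual scripts may differ):

```python
def colorize(mask, palette):
    """Map each class id in a 2-D mask to its RGB palette color."""
    return [[palette[c] for c in row] for row in mask]

def blend(image, color_mask, alpha=0.5):
    """Alpha-blend a color mask over an image, per pixel and channel."""
    return [[tuple(round((1 - alpha) * ic + alpha * mc)
                   for ic, mc in zip(img_px, mask_px))
             for img_px, mask_px in zip(img_row, mask_row)]
            for img_row, mask_row in zip(image, color_mask)]

# One gray pixel, predicted as class 1 (red in this toy palette).
palette = {0: (0, 0, 0), 1: (200, 0, 0)}
print(blend([[(100, 100, 100)]], colorize([[1]], palette)))  # [[(150, 50, 50)]]
```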
We provide a simple script to calculate the number of FLOPs. Change `config_list` in `../classification/get_flops.py` and run:

```shell
# use the classification environment
cd ../classification/
python get_flops.py
```

The FLOPs and parameter counts are then recorded in `flops.txt`.
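As a rough sanity check on the numbers in `flops.txt`, the dominant terms of a ViT branch can be estimated analytically. The sketch below counts each multiply-accumulate as 2 FLOPs and ignores norms, biases, and the patch/head layers, so it is an approximation, not the script's exact accounting:

```python
def vit_block_flops(tokens: int, dim: int, mlp_ratio: float = 4.0) -> int:
    """Approximate FLOPs of one transformer encoder block."""
    attn_proj = 4 * 2 * tokens * dim * dim              # q, k, v, output projections
    attn_matmul = 2 * 2 * tokens * tokens * dim         # QK^T and attention @ V
    mlp = 2 * 2 * tokens * dim * int(dim * mlp_ratio)   # two MLP linear layers
    return attn_proj + attn_matmul + mlp

# Illustrative (hypothetical) branch settings: a high-resolution input on a
# narrow model vs. a low-resolution input on a wide model.
print(vit_block_flops(tokens=1024, dim=768))
print(vit_block_flops(tokens=256, dim=1280))
```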
If you find this work helpful for your research, please consider giving this repo a star ⭐ and citing our paper:
```bibtex
@article{piip,
  title={Parameter-Inverted Image Pyramid Networks},
  author={Zhu, Xizhou and Yang, Xue and Wang, Zhaokai and Li, Hao and Dou, Wenhan and Ge, Junqi and Lu, Lewei and Qiao, Yu and Dai, Jifeng},
  journal={arXiv preprint arXiv:2406.04330},
  year={2024}
}
```