This is the official implementation of the paper Dual-Refinement: Joint Label and Feature Refinement for Unsupervised Domain Adaptive Person Re-Identification, which has been published in IEEE Transactions on Image Processing (TIP 2021). [paper][arXiv]
- 4 TITAN Xp / GTX 1080ti GPUs
- Python 3.7
- PyTorch 1.1.0
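Before installing, you may want to confirm that your environment matches the versions above. The following is a minimal sketch (a hypothetical helper, not part of the official scripts) that prints the Python/PyTorch versions and the number of visible GPUs:

```python
# check_env.py -- hypothetical helper, not included in this repository
import sys
import torch

# The experiments in this repo use Python 3.7, PyTorch 1.1.0, and 4 GPUs for training.
print(f"Python : {sys.version.split()[0]}")
print(f"PyTorch: {torch.__version__}")
print(f"CUDA available: {torch.cuda.is_available()}")
print(f"Visible GPUs  : {torch.cuda.device_count()}")
```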
git clone https://github.com/SikaStar/Dual-Refinement.git
cd Dual-Refinement/
pip install -r requirements.txt
cd examples && mkdir data
You can download Market1501, DukeMTMC-ReID, and MSMT17 from GoogleDrive or BaiduYun (password: t8rm) and unzip them so that the directory structure looks like
Dual-Refinement/examples/data
├── dukemtmc
│   └── DukeMTMC-reID
├── market1501
│   └── Market-1501-v15.09.15
└── msmt17
    └── MSMT17_V1
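As a quick sanity check before training, a small sketch like the one below (hypothetical, not part of this repository) can confirm that the expected dataset folders are in place; the paths mirror the layout above:

```python
# check_datasets.py -- hypothetical helper for verifying the layout above
import os

DATA_ROOT = "examples/data"
EXPECTED = {
    "dukemtmc": "DukeMTMC-reID",
    "market1501": "Market-1501-v15.09.15",
    "msmt17": "MSMT17_V1",
}

for dataset, raw_dir in EXPECTED.items():
    path = os.path.join(DATA_ROOT, dataset, raw_dir)
    status = "OK" if os.path.isdir(path) else "MISSING"
    print(f"{path}: {status}")
```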
We use 4 GPUs for training.
sh scripts/pretrain.sh dukemtmc market1501 1
sh scripts/pretrain.sh market1501 dukemtmc 1
You can directly download the trained models from GoogleDrive or BaiduYun (password: jryq) and move them to a directory structure like
Dual-Refinement/logs
├── dukemtmcTOmarket1501
└── market1501TOdukemtmc
Train the baseline model, i.e., without the off-line label refinement and the on-line spread-out regularization.
sh scripts/train_baseline.sh dukemtmc market1501 1
sh scripts/train_baseline.sh market1501 dukemtmc 1
sh scripts/train_dual_refinement_duke2market.sh
sh scripts/train_dual_refinement_market2duke.sh
We use a single GPU for testing.
sh scripts/test.sh <TARGET> <MODEL_PATH>
For example, when training on DukeMTMC-ReID and directly testing on Market1501 (the lower bound on DukeMTMC-ReID→Market1501):
sh scripts/test.sh market1501 ./logs/dukemtmcTOmarket1501/source-pretrain-1/model_best.pth.tar
You can download the trained models from GoogleDrive or BaiduYun (password: jryq) and move them to a directory structure like
Dual-Refinement/logs
├── dukemtmcTOmarket1501
└── market1501TOdukemtmc
You can reproduce the results of Table 2 in the paper:
For example, when evaluating the method Baseline with both LR and IM-SP on DukeMTMC-ReID→Market1501, you can run a command like:
sh scripts/test.sh market1501 ./logs/dukemtmcTOmarket1501/Baseline_LR_IM-SP/model_best.pth.tar
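If you want to peek inside a downloaded checkpoint before running the test script, a minimal sketch like the following (hypothetical; the exact keys stored in model_best.pth.tar depend on how the training code saves it) lists its top-level contents:

```python
# inspect_checkpoint.py -- hypothetical helper; key names depend on how the
# checkpoint was saved and may differ from what is printed here.
import torch

ckpt_path = "./logs/dukemtmcTOmarket1501/Baseline_LR_IM-SP/model_best.pth.tar"
checkpoint = torch.load(ckpt_path, map_location="cpu")

# Print the top-level entries (e.g., the model state dict and bookkeeping fields).
for key, value in checkpoint.items():
    if isinstance(value, dict):
        print(f"{key}: dict with {len(value)} entries")
    else:
        print(f"{key}: {value}")
```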
Our code is based on open-reid and MMT.
If you find our work useful for your research, please kindly cite our paper.
@article{dai2021dual,
  title={Dual-Refinement: Joint Label and Feature Refinement for Unsupervised Domain Adaptive Person Re-Identification},
  author={Dai, Yongxing and Liu, Jun and Bai, Yan and Tong, Zekun and Duan, Ling-Yu},
  journal={IEEE Transactions on Image Processing},
  year={2021}
}
If you have any questions, please leave an issue or contact me: [email protected]