This repo is the official code for *Purified and Unified Steganographic Network* (CVPR 2024).
- Python 3.8.13, PyTorch 1.11.0
- Run the following commands in your terminal:
  - `conda env create -f env.yml`
  - `conda activate pyt_env`
-
Change the code in
config.py
line4: mode = 'train'
`line14: train_data_dir=''
`line15: test_data_dir=''
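For reference, the training-mode entries in `config.py` might look roughly like the sketch below; the paths are placeholders that you must point at your local datasets, not the actual values shipped with the repo.

```python
# config.py (training) -- illustrative sketch, not the actual file
mode = 'train'                              # line 4: switch between 'train' and 'test'

# Dataset locations (placeholders -- fill in your local paths)
train_data_dir = '/path/to/DIV2K/train'     # line 14: training images
test_data_dir = '/path/to/DIV2K/valid'      # line 15: images used for evaluation
```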
- Run `python pusnet.py`.
- Trained models will be saved in the `model_zoo` folder.
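If you want to inspect a saved model outside of `pusnet.py`, a standard PyTorch loading sketch like the one below is enough; the checkpoint filename is hypothetical, so check `model_zoo` for the name the training script actually writes.

```python
import torch

# Hypothetical filename -- look in model_zoo for the real one.
ckpt_path = 'model_zoo/pusnet_checkpoint.pth'

# map_location='cpu' lets you inspect GPU-trained weights on a CPU-only machine.
state = torch.load(ckpt_path, map_location='cpu')

# Checkpoints are usually either a raw state_dict or a dict that wraps one.
state_dict = state.get('state_dict', state) if isinstance(state, dict) else state
print(f'{len(state_dict)} entries loaded')
```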
- Change the code in `config.py` (sketched below):
  - line 4: `mode = 'test'`
  - lines 36-41: `test_pusnet_path = ''`
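As with training, the test-mode entries in `config.py` might look roughly like this; the checkpoint path is a placeholder for whichever model from `model_zoo` (or a downloaded trained model) you want to evaluate.

```python
# config.py (testing) -- illustrative sketch, not the actual file
mode = 'test'                                     # line 4: switch to test mode

# lines 36-41: checkpoint path(s) to evaluate.
# Placeholder value -- point it at a file in model_zoo or a provided trained model.
test_pusnet_path = 'model_zoo/pusnet_checkpoint.pth'
```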
- Run `python pusnet.py`.
- Testing results will be saved in the `results` folder.
- Here, we provide trained models.
- We train PUSNet on the DIV2K training dataset and test it on three test datasets, including the DIV2K test set and 1,000 images randomly selected from the ImageNet test set.
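If you need to build a comparable ImageNet subset yourself, a simple sampling script such as the one below works; the source and destination directories are placeholders, and the exact 1,000 images used in the paper are not specified here.

```python
import random
import shutil
from pathlib import Path

# Placeholder directories -- adjust to your local layout.
src_dir = Path('/path/to/imagenet/test')
dst_dir = Path('/path/to/test_data/imagenet_1k')
dst_dir.mkdir(parents=True, exist_ok=True)

random.seed(0)  # fixed seed so the subset is reproducible
images = sorted(p for p in src_dir.rglob('*') if p.suffix.lower() in {'.jpg', '.jpeg', '.png'})
for p in random.sample(images, 1000):
    shutil.copy2(p, dst_dir / p.name)
```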
- The `batch_size` in `config.py` should be at least 2 × the number of GPUs and divisible by the number of GPUs.
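A quick sanity check for this rule, using `torch.cuda.device_count()` to detect the available GPUs (the `batch_size` value here is just an example):

```python
import torch

batch_size = 8                                  # example value; use the one set in config.py
num_gpus = max(torch.cuda.device_count(), 1)    # treat a CPU-only run as a single device

# Each GPU must get at least 2 samples, and the batch must split evenly across GPUs.
assert batch_size >= 2 * num_gpus, 'batch_size must be at least 2 * number of GPUs'
assert batch_size % num_gpus == 0, 'batch_size must be divisible by the number of GPUs'
```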
If you find our paper or code useful for your research, please cite:
@inproceedings{li2024purified,
  title={Purified and Unified Steganographic Network},
  author={Li, Guobiao and Li, Sheng and Luo, Zicong and Qian, Zhenxing and Zhang, Xinpeng},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={27569--27578},
  year={2024}
}