# Stereo Matching by Self-supervision of Multiscopic Vision

This is the official implementation of SMVmatching -- "Stereo Matching by Self-supervision of Multiscopic Vision". For technical details, please refer to:

**Stereo Matching by Self-supervision of Multiscopic Vision**
Weihao Yuan, Yazhan Zhang, Bingkun Wu, Siyu Zhu, Ping Tan, Michael Yu Wang, Qifeng Chen
IROS 2021
[Paper] [Project Page]

*(Figure: camera framework)*

## Bibtex

If you find this code useful, please consider citing:

@inproceedings{yuan2021stereo,
  title={Stereo Matching by Self-supervision of Multiscopic Vision},
  author={Yuan, Weihao and Zhang, Yazhan and Wu, Bingkun and Zhu, Siyu and Tan, Ping and Wang, Michael Yu and Chen, Qifeng},
  booktitle={Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)},
  pages={},
  year={2021},
  organization={IEEE}
}

## Contents

  1. Environment Setup
  2. Dataset
  3. Training

## Environment setup

Dependencies:

- Python 2.7
- PyTorch (0.4.0+)
- torchvision (0.2.0+)
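
To quickly confirm that your installation meets these requirements, a minimal check (not part of the released code) is:

```python
# Minimal environment sanity check: prints the installed PyTorch and
# torchvision versions so they can be compared against the 0.4.0+ /
# 0.2.0+ requirements listed above.
import torch
import torchvision

print("PyTorch:", torch.__version__)
print("torchvision:", torchvision.__version__)
```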

## Datasets

The synthetic datasets are available here. They should be organized as follows for training:

dataset/
    TRAIN/
        SyntheData/
            000001/
                view0.png
                view1.png
                view2.png
    TEST/
    EVAL/
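
As an illustration of how one training sample can be read from this layout, here is a minimal sketch; it is not part of the released code, and only the directory and file names are taken from the structure above (the function name and its arguments are assumptions):

```python
import os
from PIL import Image

def load_triple(root, split="TRAIN", subset="SyntheData", sample="000001"):
    """Load the three views of one multiscopic sample (illustrative only).

    `root` is the `dataset/` directory organized as shown above; the
    function name and arguments are hypothetical, not repository API.
    """
    sample_dir = os.path.join(root, split, subset, sample)
    views = []
    for i in range(3):
        path = os.path.join(sample_dir, "view{}.png".format(i))
        views.append(Image.open(path).convert("RGB"))
    return views  # [view0, view1, view2]

# Example usage:
# view0, view1, view2 = load_triple("dataset")
```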

## Training

. run.sh

## Pretrained models

Results on KITTI 2015:

| Model  | Dataset         | D1-all | D1-noc | D1-occ |
|--------|-----------------|--------|--------|--------|
| Model1 | Train set       | 4.23%  | 4.09%  | -      |
| Model2 | Train set       | 3.82%  | 3.69%  | 10.87% |
| Model2 | Online Test set | 4.43%  | 4.20%  | -      |

Model1: model without probability uncertainty

Model2: model with probability uncertainty


## Acknowledgements

Thanks to Jia-Ren Chang et al. for open-sourcing their excellent work PSMNet.

## License

Licensed under the MIT License.