This is the official implementation of SMVmatching -- "Stereo Matching by Self-supervision of Multiscopic Vision". For technical details, please refer to:
Stereo Matching by Self-supervision of Multiscopic Vision
Weihao Yuan, Yazhan Zhang, Bingkun Wu, Siyu Zhu, Ping Tan, Michael Yu Wang, Qifeng Chen
IROS 2021
[Paper] [Project Page]
If you find this code useful, please consider citing:
@inproceedings{yuan2021stereo,
title={Stereo Matching by Self-supervision of Multiscopic Vision},
author={Yuan, Weihao and Zhang, Yazhan and Wu, Bingkun and Zhu, Siyu and Tan, Ping and Wang, Michael Yu and Chen, Qifeng},
booktitle={Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)},
pages={},
year={2021},
organization={IEEE}
}
Dependencies:
- Python 2.7
- PyTorch (0.4.0+)
- torchvision (0.2.0+)
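A quick way to confirm the environment matches these requirements is to print the installed versions (a minimal sketch; only the version numbers above come from this repository):

```python
# Print installed versions to check against the requirements above.
from __future__ import print_function  # keeps the script valid under Python 2.7

import sys
import torch
import torchvision

print("Python:", sys.version.split()[0])         # expected 2.7.x
print("PyTorch:", torch.__version__)             # expected >= 0.4.0
print("torchvision:", torchvision.__version__)   # expected >= 0.2.0
```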
The synthetic dataset is available here. It should be organized as follows for training:
    dataset/
        TRAIN/
            SyntheData/
                000001/
                    view0.png
                    view1.png
                    view2.png
        TEST/
        EVAL/
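For illustration, below is a minimal PyTorch sketch of reading the three views of each sample from this layout (the `MultiscopicDataset` class and its arguments are hypothetical, not the loader used in this repository):

```python
# Illustrative loader for the dataset/TRAIN/SyntheData/<id>/view{0,1,2}.png layout.
# This is only a sketch; the repository's own data pipeline may differ.
import glob
import os

from PIL import Image
from torch.utils.data import Dataset


class MultiscopicDataset(Dataset):
    """Loads the three views of each multiscopic training sample."""

    def __init__(self, root="dataset/TRAIN/SyntheData", transform=None):
        self.sample_dirs = sorted(glob.glob(os.path.join(root, "*")))
        self.transform = transform

    def __len__(self):
        return len(self.sample_dirs)

    def __getitem__(self, idx):
        sample_dir = self.sample_dirs[idx]
        views = [Image.open(os.path.join(sample_dir, "view%d.png" % i)).convert("RGB")
                 for i in range(3)]
        if self.transform is not None:
            views = [self.transform(v) for v in views]
        return views
```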
To train the model, run:

    . run.sh
Results on KITTI 2015:
Model | Dataset | D1-all | D1-noc | D1-occ |
---|---|---|---|---|
Model1 | Train set | 4.23% | 4.09% | - |
Model2 | Train set | 3.82% | 3.69% | 10.87% |
Model2 | Online Test set | 4.43% | 4.20% | - |
- Model1: model without probability uncertainty
- Model2: model with probability uncertainty
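For reference, the D1 metric on KITTI 2015 counts a pixel as an outlier when its disparity error is both greater than 3 px and greater than 5% of the ground-truth disparity. A minimal NumPy sketch of the metric (the `d1_error` function is illustrative, not part of this repository):

```python
# Illustrative D1 computation: a pixel is an outlier when its disparity error is
# > 3 px and > 5% of the ground-truth disparity; D1 is the percentage of outliers.
import numpy as np


def d1_error(disp_est, disp_gt, mask=None):
    """Return the D1 outlier rate (in percent) over valid ground-truth pixels.

    disp_est, disp_gt: H x W arrays of estimated / ground-truth disparities.
    mask: optional boolean H x W array, e.g. non-occluded pixels for D1-noc.
    """
    valid = disp_gt > 0
    if mask is not None:
        valid = valid & mask
    err = np.abs(disp_est[valid] - disp_gt[valid])
    outliers = (err > 3.0) & (err > 0.05 * disp_gt[valid])
    return 100.0 * outliers.mean()
```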
Thanks to Jia-Ren Chang et al. for open-sourcing their excellent work PSMNet.
Licensed under the MIT License.