# VINS-Fusion-gpu
This repository is a version of VINS-Fusion with GPU acceleration. It can run on an Nvidia TX2 in real time.
## 1. Prerequisites
The essential software environment is the same as for VINS-Fusion. In addition, it requires the CUDA version of OpenCV (only tested with OpenCV 3.4.1).
## 2. Usage
### 2.1 Change the OpenCV path in the CMakeLists
In /vins_estimator/CMakeLists.txt, change Line 20 to your path.
In /loop_fusion/CMakeLists.txt, change Line 19 to your path.
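The README does not show what those lines contain; as a sketch only, one common way to point CMake at a specific CUDA-enabled OpenCV build looks like the snippet below (the path is a placeholder, not the repository's actual default):
```cmake
# Placeholder path: adjust to wherever your CUDA-enabled OpenCV is installed.
set(OpenCV_DIR "/usr/local/opencv-3.4.1-cuda/share/OpenCV")
find_package(OpenCV REQUIRED)
```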
### 2.2 Change the acceleration parameters as you need
In the config file, there are two parameters for GPU acceleration:
- `use_gpu`: 0 for off, 1 for on
- `use_gpu_acc_flow`: 0 for off, 1 for on

If your GPU resources are limited or you want to use the GPU for other computation, you can set `use_gpu: 1` and `use_gpu_acc_flow: 0`.
If your other applications do not require much GPU resources, I recommend setting `use_gpu: 1` and `use_gpu_acc_flow: 1`.
According to my tests on the TX2, with both parameters set to 1 the GPU usage is about 20%.
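For example, these flags appear as plain top-level entries in the config YAML (a minimal sketch; keep the rest of your config unchanged):
```yaml
use_gpu: 1           # GPU-accelerated goodFeaturesToTrack
use_gpu_acc_flow: 1  # GPU-accelerated optical flow (calcOpticalFlowPyrLK)
```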
# About
This is a fork of [VINS-Fusion-gpu](https://github.com/pjrambo/VINS-Fusion-gpu) adapted for OpenCV 4 (with some fixes).
VINS-Fusion-gpu is a version of [VINS-Fusion](https://github.com/HKUST-Aerial-Robotics/VINS-Fusion) with GPU acceleration.

# VINS-Fusion
## An optimization-based multi-sensor state estimator

<img src="https://github.com/HKUST-Aerial-Robotics/VINS-Fusion/blob/master/support_files/image/vins_logo.png" width = 55% height = 55% div align=left />
<img src="https://github.com/HKUST-Aerial-Robotics/VINS-Fusion/blob/master/support_files/image/kitti.png" width = 34% height = 34% div align=center />
VINS-Fusion is an optimization-based multi-sensor state estimator, which achieves accurate self-localization for autonomous applications (drones, cars, and AR/VR). VINS-Fusion is an extension of [VINS-Mono](https://github.com/HKUST-Aerial-Robotics/VINS-Mono), which supports multiple visual-inertial sensor types (mono camera + IMU, stereo cameras + IMU, even stereo cameras only). We also show a toy example of fusing VINS with GPS.

**Features:**
- support for multiple sensor suites (stereo cameras / mono camera + IMU / stereo cameras + IMU)
- online spatial calibration (transformation between camera and IMU)
- online temporal calibration (time offset between camera and IMU)
- visual loop closure

<img src="https://github.com/HKUST-Aerial-Robotics/VINS-Fusion/blob/master/support_files/image/kitti_rank.png" width = 80% height = 80% />

We are the **top** open-sourced stereo algorithm on the [KITTI Odometry Benchmark](http://www.cvlibs.net/datasets/kitti/eval_odometry.php) (12 Jan 2019).

**Authors:** [Tong Qin](http://www.qintonguav.com), Shaozu Cao, Jie Pan, [Peiliang Li](https://peiliangli.github.io/), and [Shaojie Shen](http://www.ece.ust.hk/ece.php/profile/facultydetail/eeshaojie) from the [Aerial Robotics Group](http://uav.ust.hk/), [HKUST](https://www.ust.hk/)

**Videos:**

<a href="https://www.youtube.com/embed/1qye82aW7nI" target="_blank"><img src="http://img.youtube.com/vi/1qye82aW7nI/0.jpg" alt="VINS" width="320" height="240" border="10" /></a>

**Related Papers:** (the papers are not exactly the same as the code)
* **A General Optimization-based Framework for Local Odometry Estimation with Multiple Sensors**, Tong Qin, Jie Pan, Shaozu Cao, Shaojie Shen, arXiv [pdf](https://arxiv.org/abs/1901.03638)
* **A General Optimization-based Framework for Global Pose Estimation with Multiple Sensors**, Tong Qin, Shaozu Cao, Jie Pan, Shaojie Shen, arXiv [pdf](https://arxiv.org/abs/1901.03642)
* **Online Temporal Calibration for Monocular Visual-Inertial Systems**, Tong Qin, Shaojie Shen, IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2018), **best student paper award** [pdf](https://ieeexplore.ieee.org/abstract/document/8593603)
* **VINS-Mono: A Robust and Versatile Monocular Visual-Inertial State Estimator**, Tong Qin, Peiliang Li, Shaojie Shen, IEEE Transactions on Robotics [pdf](https://ieeexplore.ieee.org/document/8421746/?arnumber=8421746&source=authoralert)

*If you use VINS-Fusion for your academic research, please cite our related papers. [bib](https://github.com/HKUST-Aerial-Robotics/VINS-Fusion/blob/master/support_files/paper_bib.txt)*

# OpenCV bridge
This fork requires ROS and an OpenCV bridge (cv_bridge) built for OpenCV 4.
```
cd ~/catkin_ws/src
git clone https://github.com/ros-perception/vision_opencv
```

In CMakeLists.txt, change the Python version (if necessary):
```
nano vision_opencv/cv_bridge/CMakeLists.txt
```
```
find_package(Boost REQUIRED python37) -> find_package(Boost REQUIRED python3)
```

In module.hpp, add the define NUMPY_IMPORT_ARRAY_RETVAL:
```
nano vision_opencv/cv_bridge/src/module.hpp
```
```cpp
#include <numpy/ndarrayobject.h>
#define NUMPY_IMPORT_ARRAY_RETVAL NULL
```
Build the OpenCV bridge:
```
cd ~/catkin_ws
catkin_make
```

## 1. Prerequisites
### 1.1 **Ubuntu** and **ROS**
Ubuntu 64-bit 16.04 or 18.04.
ROS Kinetic or Melodic. [ROS Installation](http://wiki.ros.org/ROS/Installation)

### 1.2 **Ceres Solver**
Follow [Ceres Installation](http://ceres-solver.org/installation.html).

# Build
The build is the same as for VINS-Fusion-gpu and VINS-Fusion.
```
sudo apt-get install ros-melodic-tf ros-melodic-image-transport
```

## 2. Build VINS-Fusion
Clone the repository and catkin_make:
```
cd ~/catkin_ws/src
git clone https://github.com/IOdissey/VINS-Fusion-GPU.git
cd ../
catkin_make
source ~/catkin_ws/devel/setup.bash
```
(Clone https://github.com/HKUST-Aerial-Robotics/VINS-Fusion.git instead if you want the original, non-GPU code. If the build fails at this step, try another computer with a clean system or reinstall Ubuntu and ROS.)
## 3. EuRoC Example
Download the [EuRoC MAV Dataset](http://projects.asl.ethz.ch/datasets/doku.php?id=kmavvisualinertialdatasets) to YOUR_DATASET_FOLDER. Take MH_01 for example: you can run VINS-Fusion with three sensor types (monocular camera + IMU, stereo cameras + IMU, and stereo cameras).
Open four terminals to run vins odometry, visual loop closure (optional), rviz, and the bag playback, respectively.
The green path is VIO odometry; the red path is odometry under visual loop closure.

### 3.1 Monocular camera + IMU
```
roslaunch vins vins_rviz.launch
rosrun vins vins_node ~/catkin_ws/src/VINS-Fusion/config/euroc/euroc_mono_imu_config.yaml
(optional) rosrun loop_fusion loop_fusion_node ~/catkin_ws/src/VINS-Fusion/config/euroc/euroc_mono_imu_config.yaml
rosbag play YOUR_DATASET_FOLDER/MH_01_easy.bag
```

### 3.2 Stereo cameras + IMU
```
roslaunch vins vins_rviz.launch
rosrun vins vins_node ~/catkin_ws/src/VINS-Fusion/config/euroc/euroc_stereo_imu_config.yaml
(optional) rosrun loop_fusion loop_fusion_node ~/catkin_ws/src/VINS-Fusion/config/euroc/euroc_stereo_imu_config.yaml
rosbag play YOUR_DATASET_FOLDER/MH_01_easy.bag
```

### 3.3 Stereo cameras
```
roslaunch vins vins_rviz.launch
rosrun vins vins_node ~/catkin_ws/src/VINS-Fusion/config/euroc/euroc_stereo_config.yaml
(optional) rosrun loop_fusion loop_fusion_node ~/catkin_ws/src/VINS-Fusion/config/euroc/euroc_stereo_config.yaml
rosbag play YOUR_DATASET_FOLDER/MH_01_easy.bag
```

<img src="https://github.com/HKUST-Aerial-Robotics/VINS-Fusion/blob/master/support_files/image/euroc.gif" width = 430 height = 240 />
## 4. KITTI Example
### 4.1 KITTI Odometry (Stereo)
Download the [KITTI Odometry dataset](http://www.cvlibs.net/datasets/kitti/eval_odometry.php) to YOUR_DATASET_FOLDER. Take sequence 00 for example.
Open two terminals to run vins and rviz, respectively.
(We evaluated odometry on the KITTI benchmark without the loop closure function.)
```
roslaunch vins vins_rviz.launch
(optional) rosrun loop_fusion loop_fusion_node ~/catkin_ws/src/VINS-Fusion/config/kitti_odom/kitti_config00-02.yaml
rosrun vins kitti_odom_test ~/catkin_ws/src/VINS-Fusion/config/kitti_odom/kitti_config00-02.yaml YOUR_DATASET_FOLDER/sequences/00/
```
### 4.2 KITTI GPS Fusion (Stereo + GPS)
Download the [KITTI raw dataset](http://www.cvlibs.net/datasets/kitti/raw_data.php) to YOUR_DATASET_FOLDER. Take [2011_10_03_drive_0027_synced](https://s3.eu-central-1.amazonaws.com/avg-kitti/raw_data/2011_10_03_drive_0027/2011_10_03_drive_0027_sync.zip) for example.
Open three terminals to run vins, global fusion, and rviz, respectively.
The green path is VIO odometry; the blue path is odometry under GPS global fusion.
```
roslaunch vins vins_rviz.launch
rosrun vins kitti_gps_test ~/catkin_ws/src/VINS-Fusion/config/kitti_raw/kitti_10_03_config.yaml YOUR_DATASET_FOLDER/2011_10_03_drive_0027_sync/
rosrun global_fusion global_fusion_node
```

<img src="https://github.com/HKUST-Aerial-Robotics/VINS-Fusion/blob/master/support_files/image/kitti.gif" width = 430 height = 240 />
## 5. VINS-Fusion on car demonstration
Download the [car bag](https://drive.google.com/open?id=10t9H1u8pMGDOI6Q2w2uezEq5Ib-Z8tLz) to YOUR_DATASET_FOLDER.
Open four terminals to run vins odometry, visual loop closure (optional), rviz, and the bag playback, respectively.
The green path is VIO odometry; the red path is odometry under visual loop closure.
```
roslaunch vins vins_rviz.launch
rosrun vins vins_node ~/catkin_ws/src/VINS-Fusion/config/vi_car/vi_car.yaml
(optional) rosrun loop_fusion loop_fusion_node ~/catkin_ws/src/VINS-Fusion/config/vi_car/vi_car.yaml
rosbag play YOUR_DATASET_FOLDER/car.bag
```

<img src="https://github.com/HKUST-Aerial-Robotics/VINS-Fusion/blob/master/support_files/image/car_gif.gif" width = 430 height = 240 />
## 6. Run with your devices
VIO is not only a software algorithm; it heavily relies on hardware quality. For beginners, we recommend running VIO with professional equipment that has global-shutter cameras and hardware synchronization.

### 6.1 Configuration file
Write a config file for your device. You can take the EuRoC and KITTI config files as examples.
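As a starting point, the device config is a YAML file read by the estimator. The fragment below is only a sketch modeled on the EuRoC examples; verify every key name against the files shipped in the config folder, and note that the two GPU flags come from this fork:
```yaml
%YAML:1.0
imu: 1                         # use an IMU (1) or not (0)
num_of_cam: 2                  # 1 = mono, 2 = stereo
imu_topic: "/imu0"
image0_topic: "/cam0/image_raw"
image1_topic: "/cam1/image_raw"
cam0_calib: "cam0.yaml"        # per-camera intrinsic files (see 6.2)
cam1_calib: "cam1.yaml"
estimate_extrinsic: 1          # online spatial calibration
estimate_td: 1                 # online temporal calibration
td: 0.0                        # initial camera-IMU time offset
use_gpu: 1                     # GPU flags added by this fork
use_gpu_acc_flow: 1
```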
|
||
### 6.2 Camera calibration | ||
VINS-Fusion support several camera models (pinhole, mei, equidistant). You can use [camera model](https://github.com/hengli/camodocal) to calibrate your cameras. We put some example data under /camera_models/calibrationdata to tell you how to calibrate. | ||
``` | ||
cd ~/catkin_ws/src/VINS-Fusion/camera_models/camera_calib_example/ | ||
rosrun camera_models Calibrations -w 12 -h 8 -s 80 -i calibrationdata --camera-model pinhole | ||
``` | ||
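For reference, the calibration result consumed by the estimator is a small YAML file per camera. The layout below is a sketch of the usual pinhole format with placeholder numbers; use the values produced by your own calibration:
```yaml
%YAML:1.0
model_type: PINHOLE
camera_name: camera
image_width: 752
image_height: 480
distortion_parameters:
   k1: -0.28
   k2: 0.07
   p1: 0.0
   p2: 0.0
projection_parameters:
   fx: 458.7
   fy: 457.3
   cx: 367.2
   cy: 248.4
```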
|
||
|
||
## 7. Acknowledgements | ||
We use [ceres solver](http://ceres-solver.org/) for non-linear optimization and [DBoW2](https://github.com/dorian3d/DBoW2) for loop detection, a generic [camera model](https://github.com/hengli/camodocal) and [GeographicLib](https://geographiclib.sourceforge.io/). | ||
|
||
## 8. License | ||
The source code is released under [GPLv3](http://www.gnu.org/licenses/) license. | ||
|
||
We are still working on improving the code reliability. For any technical issues, please contact Tong Qin <qintonguavATgmail.com>. | ||
|
||
For commercial inquiries, please contact Shaojie Shen <eeshaojieATust.hk>. | ||
lk_n = 3 | ||
lk_size = 21 | ||
``` |
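For orientation only: these two values correspond to the usual knobs of OpenCV's CUDA sparse pyramidal Lucas-Kanade tracker. The snippet below is a minimal sketch, not the repository's code, assuming `lk_size` maps to the tracking window size and `lk_n` to the maximum pyramid level; check the feature tracker source in this repository for the actual wiring.
```cpp
// Sketch: how lk_size / lk_n typically map onto the OpenCV CUDA LK tracker.
#include <opencv2/core.hpp>
#include <opencv2/cudaoptflow.hpp>

cv::Ptr<cv::cuda::SparsePyrLKOpticalFlow> makeTracker(int lk_size = 21, int lk_n = 3)
{
    // lk_size -> search window size, lk_n -> maximum pyramid level.
    return cv::cuda::SparsePyrLKOpticalFlow::create(cv::Size(lk_size, lk_size), lk_n);
}
```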