diff --git a/README.md b/README.md
index 221b245..10a8989 100644
--- a/README.md
+++ b/README.md
@@ -1,172 +1,84 @@
-# VINS-Fusion-gpu
-This repository is a version of VINS-Fusion with GPU acceleration. It can run on Nvidia TX2 in real-time.
-## 1. Prerequisites
-The essential software environment is same as VINS-Fusion. Besides, it requires OpenCV cuda version.(Only test it on OpenCV 3.4.1).
-## 2. Usage
-### 2.1 Change the opencv path in the CMakeLists
-In /vins_estimator/CMakeLists.txt, change Line 20 to your path.
-In /loop_fusion/CmakeLists.txt, change Line 19 to your path.
-### 2.2 Change the acceleration parameters as you need.
-In the config file, there are two parameters for gpu acceleration.
-use_gpu: 0 for off, 1 for on
-use_gpu_acc_flow: 0 for off, 1 for on
-If your GPU resources is limitted or you want to use GPU for other computaion. You can set
-use_gpu: 1
-use_gpu_acc_flow: 0
-If your other application do not require much GPU resources, I recommanded you to set
-use_gpu: 1
-use_gpu_acc_flow: 1
-According to my test, on TX2 if you set this two parameters to 1 at the same time, the GPU usage is about 20%.
+# About
+This is a fork of [VINS-Fusion-gpu](https://github.com/pjrambo/VINS-Fusion-gpu) ported to OpenCV 4 (with some fixes).
+VINS-Fusion-gpu is a version of [VINS-Fusion](https://github.com/HKUST-Aerial-Robotics/VINS-Fusion) with GPU acceleration.
-# VINS-Fusion
-## An optimization-based multi-sensor state estimator
-
-
+# OpenCV bridge
+This fork requires ROS and the cv_bridge package built against OpenCV 4.
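+To check which OpenCV version your environment actually sees (a quick sanity check, assuming the cv2 Python module is installed):
+```
+python3 -c "import cv2; print(cv2.__version__)"
+```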
-VINS-Fusion is an optimization-based multi-sensor state estimator, which achieves accurate self-localization for autonomous applications (drones, cars, and AR/VR). VINS-Fusion is an extension of [VINS-Mono](https://github.com/HKUST-Aerial-Robotics/VINS-Mono), which supports multiple visual-inertial sensor types (mono camera + IMU, stereo cameras + IMU, even stereo cameras only). We also show a toy example of fusing VINS with GPS.
-**Features:**
-- multiple sensors support (stereo cameras / mono camera+IMU / stereo cameras+IMU)
-- online spatial calibration (transformation between camera and IMU)
-- online temporal calibration (time offset between camera and IMU)
-- visual loop closure
-
-
-
-We are the **top** open-sourced stereo algorithm on [KITTI Odometry Benchmark](http://www.cvlibs.net/datasets/kitti/eval_odometry.php) (12.Jan.2019).
-
-**Authors:** [Tong Qin](http://www.qintonguav.com), Shaozu Cao, Jie Pan, [Peiliang Li](https://peiliangli.github.io/), and [Shaojie Shen](http://www.ece.ust.hk/ece.php/profile/facultydetail/eeshaojie) from the [Aerial Robotics Group](http://uav.ust.hk/), [HKUST](https://www.ust.hk/)
-
-**Videos:**
-
-
-
-
-**Related Papers:** (papers are not exactly same with code)
-* **A General Optimization-based Framework for Local Odometry Estimation with Multiple Sensors**, Tong Qin, Jie Pan, Shaozu Cao, Shaojie Shen, aiXiv [pdf](https://arxiv.org/abs/1901.03638)
-
-* **A General Optimization-based Framework for Global Pose Estimation with Multiple Sensors**, Tong Qin, Shaozu Cao, Jie Pan, Shaojie Shen, aiXiv [pdf](https://arxiv.org/abs/1901.03642)
-
-* **Online Temporal Calibration for Monocular Visual-Inertial Systems**, Tong Qin, Shaojie Shen, IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS, 2018), **best student paper award** [pdf](https://ieeexplore.ieee.org/abstract/document/8593603)
-
-* **VINS-Mono: A Robust and Versatile Monocular Visual-Inertial State Estimator**, Tong Qin, Peiliang Li, Shaojie Shen, IEEE Transactions on Robotics [pdf](https://ieeexplore.ieee.org/document/8421746/?arnumber=8421746&source=authoralert)
+```
+cd ~/catkin_ws/src
+git clone https://github.com/ros-perception/vision_opencv
+```
+In CMakeLists.txt, change the Boost Python component to match your Python version (if necessary):
+```
+nano vision_opencv/cv_bridge/CMakeLists.txt
+```
+```
+find_package(Boost REQUIRED python37) -> find_package(Boost REQUIRED python3)
+```
-*If you use VINS-Fusion for your academic research, please cite our related papers. [bib](https://github.com/HKUST-Aerial-Robotics/VINS-Fusion/blob/master/support_files/paper_bib.txt)*
+In module.hpp, add the NUMPY_IMPORT_ARRAY_RETVAL define (after the existing includes):
+```
+nano vision_opencv/cv_bridge/src/module.hpp
+```
+```cpp
+// after the existing #include lines
+#define NUMPY_IMPORT_ARRAY_RETVAL NULL
+```
-## 1. Prerequisites
-### 1.1 **Ubuntu** and **ROS**
-Ubuntu 64-bit 16.04 or 18.04.
-ROS Kinetic or Melodic. [ROS Installation](http://wiki.ros.org/ROS/Installation)
+Build the OpenCV bridge:
+```
+cd ~/catkin_ws
+catkin_make
+```
-### 1.2. **Ceres Solver**
-Follow [Ceres Installation](http://ceres-solver.org/installation.html).
+# Build
+The build is the same as for VINS-Fusion-gpu and VINS-Fusion.
+```
+sudo apt-get install ros-melodic-tf ros-melodic-image-transport
+```
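+On other ROS 1 distributions, substitute the distro name (e.g. via $ROS_DISTRO):
+```
+sudo apt-get install ros-$ROS_DISTRO-tf ros-$ROS_DISTRO-image-transport
+```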
-## 2. Build VINS-Fusion
Clone the repository and catkin_make:
```
cd ~/catkin_ws/src
- git clone https://github.com/HKUST-Aerial-Robotics/VINS-Fusion.git
+ git clone https://github.com/IOdissey/VINS-Fusion-GPU.git
cd ../
catkin_make
source ~/catkin_ws/devel/setup.bash
```
-(if you fail in this step, try to find another computer with clean system or reinstall Ubuntu and ROS)
-## 3. EuRoC Example
-Download [EuRoC MAV Dataset](http://projects.asl.ethz.ch/datasets/doku.php?id=kmavvisualinertialdatasets) to YOUR_DATASET_FOLDER. Take MH_01 for example, you can run VINS-Fusion with three sensor types (monocular camera + IMU, stereo cameras + IMU and stereo cameras).
-Open four terminals, run vins odometry, visual loop closure(optional), rviz and play the bag file respectively.
-Green path is VIO odometry; red path is odometry under visual loop closure.
-### 3.1 Monocualr camera + IMU
+# Usage
+In the config file, there are two parameters for GPU acceleration.
+`use_gpu` enables the GPU path for goodFeaturesToTrack (0 - off, 1 - on):
```
- roslaunch vins vins_rviz.launch
- rosrun vins vins_node ~/catkin_ws/src/VINS-Fusion/config/euroc/euroc_mono_imu_config.yaml
- (optional) rosrun loop_fusion loop_fusion_node ~/catkin_ws/src/VINS-Fusion/config/euroc/euroc_mono_imu_config.yaml
- rosbag play YOUR_DATASET_FOLDER/MH_01_easy.bag
+use_gpu: 1
```
-### 3.2 Stereo cameras + IMU
-
+`use_gpu_acc_flow` enables the GPU path for OpticalFlowPyrLK (0 - off, 1 - on):
```
- roslaunch vins vins_rviz.launch
- rosrun vins vins_node ~/catkin_ws/src/VINS-Fusion/config/euroc/euroc_stereo_imu_config.yaml
- (optional) rosrun loop_fusion loop_fusion_node ~/catkin_ws/src/VINS-Fusion/config/euroc/euroc_stereo_imu_config.yaml
- rosbag play YOUR_DATASET_FOLDER/MH_01_easy.bag
+use_gpu_acc_flow: 1
```
-### 3.3 Stereo cameras
-
+If your GPU resources are limited or you want to reserve the GPU for other computation, you can set:
```
- roslaunch vins vins_rviz.launch
- rosrun vins vins_node ~/catkin_ws/src/VINS-Fusion/config/euroc/euroc_stereo_config.yaml
- (optional) rosrun loop_fusion loop_fusion_node ~/catkin_ws/src/VINS-Fusion/config/euroc/euroc_stereo_config.yaml
- rosbag play YOUR_DATASET_FOLDER/MH_01_easy.bag
+use_gpu: 1
+use_gpu_acc_flow: 0
```
-
-
-
-## 4. KITTI Example
-### 4.1 KITTI Odometry (Stereo)
-Download [KITTI Odometry dataset](http://www.cvlibs.net/datasets/kitti/eval_odometry.php) to YOUR_DATASET_FOLDER. Take sequences 00 for example,
-Open two terminals, run vins and rviz respectively.
-(We evaluated odometry on KITTI benchmark without loop closure funtion)
-```
- roslaunch vins vins_rviz.launch
- (optional) rosrun loop_fusion loop_fusion_node ~/catkin_ws/src/VINS-Fusion/config/kitti_odom/kitti_config00-02.yaml
- rosrun vins kitti_odom_test ~/catkin_ws/src/VINS-Fusion/config/kitti_odom/kitti_config00-02.yaml YOUR_DATASET_FOLDER/sequences/00/
+If your other applications do not require much GPU resource, the recommended setting is (the upstream README reports roughly 20% GPU usage on a TX2 with both enabled):
```
-### 4.2 KITTI GPS Fusion (Stereo + GPS)
-Download [KITTI raw dataset](http://www.cvlibs.net/datasets/kitti/raw_data.php) to YOUR_DATASET_FOLDER. Take [2011_10_03_drive_0027_synced](https://s3.eu-central-1.amazonaws.com/avg-kitti/raw_data/2011_10_03_drive_0027/2011_10_03_drive_0027_sync.zip) for example.
-Open three terminals, run vins, global fusion and rviz respectively.
-Green path is VIO odometry; blue path is odometry under GPS global fusion.
+use_gpu: 1
+use_gpu_acc_flow: 1
```
- roslaunch vins vins_rviz.launch
- rosrun vins kitti_gps_test ~/catkin_ws/src/VINS-Fusion/config/kitti_raw/kitti_10_03_config.yaml YOUR_DATASET_FOLDER/2011_10_03_drive_0027_sync/
- rosrun global_fusion global_fusion_node
-```
-
-
-## 5. VINS-Fusion on car demonstration
-Download [car bag](https://drive.google.com/open?id=10t9H1u8pMGDOI6Q2w2uezEq5Ib-Z8tLz) to YOUR_DATASET_FOLDER.
-Open four terminals, run vins odometry, visual loop closure(optional), rviz and play the bag file respectively.
-Green path is VIO odometry; red path is odometry under visual loop closure.
-```
- roslaunch vins vins_rviz.launch
- rosrun vins vins_node ~/catkin_ws/src/VINS-Fusion/config/vi_car/vi_car.yaml
- (optional) rosrun loop_fusion loop_fusion_node ~/catkin_ws/src/VINS-Fusion/config/vi_car/vi_car.yaml
- rosbag play YOUR_DATASET_FOLDER/car.bag
+Two additional parameters configure OpticalFlowPyrLK (the number of pyramid levels and the window size); the values shown are the defaults:
```
-
-
-
-
-## 6. Run with your devices
-VIO is not only a software algorithm, it heavily relies on hardware quality. For beginners, we recommend you to run VIO with professional equipment, which contains global shutter cameras and hardware synchronization.
-
-### 6.1 Configuration file
-Write a config file for your device. You can take config files of EuRoC and KITTI as the example.
-
-### 6.2 Camera calibration
-VINS-Fusion support several camera models (pinhole, mei, equidistant). You can use [camera model](https://github.com/hengli/camodocal) to calibrate your cameras. We put some example data under /camera_models/calibrationdata to tell you how to calibrate.
-```
-cd ~/catkin_ws/src/VINS-Fusion/camera_models/camera_calib_example/
-rosrun camera_models Calibrations -w 12 -h 8 -s 80 -i calibrationdata --camera-model pinhole
-```
-
-
-## 7. Acknowledgements
-We use [ceres solver](http://ceres-solver.org/) for non-linear optimization and [DBoW2](https://github.com/dorian3d/DBoW2) for loop detection, a generic [camera model](https://github.com/hengli/camodocal) and [GeographicLib](https://geographiclib.sourceforge.io/).
-
-## 8. License
-The source code is released under [GPLv3](http://www.gnu.org/licenses/) license.
-
-We are still working on improving the code reliability. For any technical issues, please contact Tong Qin .
-
-For commercial inquiries, please contact Shaojie Shen .
+lk_n: 3
+lk_size: 21
+```
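+For reference, these values feed the CUDA sparse pyramidal LK tracker; a minimal sketch of the call used in feature_tracker.cpp (not a complete program):
+```cpp
+// LK_SIZE and LK_N are read from lk_size / lk_n in the config file
+cv::Ptr<cv::cuda::SparsePyrLKOpticalFlow> lk =
+    cv::cuda::SparsePyrLKOpticalFlow::create(
+        cv::Size(LK_SIZE, LK_SIZE), // search window per pyramid level
+        LK_N,                       // number of pyramid levels
+        30);                        // iterations
+lk->calc(prev_gpu_img, cur_gpu_img, prev_gpu_pts, cur_gpu_pts, gpu_status);
+```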
\ No newline at end of file
diff --git a/camera_models/src/calib/CameraCalibration.cc b/camera_models/src/calib/CameraCalibration.cc
index 10543e9..682fe07 100644
--- a/camera_models/src/calib/CameraCalibration.cc
+++ b/camera_models/src/calib/CameraCalibration.cc
@@ -232,7 +232,7 @@ CameraCalibration::drawResults(std::vector<cv::Mat>& images) const
cv::Mat& image = images.at(i);
if (image.channels() == 1)
{
- cv::cvtColor(image, image, CV_GRAY2RGB);
+ cv::cvtColor(image, image, cv::COLOR_GRAY2RGB);
}
        std::vector<cv::Point2f> estImagePoints;
@@ -250,12 +250,12 @@ CameraCalibration::drawResults(std::vector<cv::Mat>& images) const
cv::circle(image,
cv::Point(cvRound(pObs.x * drawMultiplier),
cvRound(pObs.y * drawMultiplier)),
- 5, green, 2, CV_AA, drawShiftBits);
+ 5, green, 2, cv::LINE_AA, drawShiftBits);
cv::circle(image,
cv::Point(cvRound(pEst.x * drawMultiplier),
cvRound(pEst.y * drawMultiplier)),
- 5, red, 2, CV_AA, drawShiftBits);
+ 5, red, 2, cv::LINE_AA, drawShiftBits);
float error = cv::norm(pObs - pEst);
@@ -272,7 +272,7 @@ CameraCalibration::drawResults(std::vector<cv::Mat>& images) const
cv::putText(image, oss.str(), cv::Point(10, image.rows - 10),
cv::FONT_HERSHEY_COMPLEX, 0.5, cv::Scalar(255, 255, 255),
- 1, CV_AA);
+ 1, cv::LINE_AA);
}
}
diff --git a/camera_models/src/chessboard/Chessboard.cc b/camera_models/src/chessboard/Chessboard.cc
index 4c0a761..03542e9 100644
--- a/camera_models/src/chessboard/Chessboard.cc
+++ b/camera_models/src/chessboard/Chessboard.cc
@@ -17,13 +17,13 @@ Chessboard::Chessboard(cv::Size boardSize, cv::Mat& image)
{
if (image.channels() == 1)
{
- cv::cvtColor(image, mSketch, CV_GRAY2BGR);
+ cv::cvtColor(image, mSketch, cv::COLOR_GRAY2BGR);
image.copyTo(mImage);
}
else
{
image.copyTo(mSketch);
- cv::cvtColor(image, mImage, CV_BGR2GRAY);
+ cv::cvtColor(image, mImage, cv::COLOR_BGR2GRAY);
}
}
@@ -31,10 +31,10 @@ void
Chessboard::findCorners(bool useOpenCV)
{
mCornersFound = findChessboardCorners(mImage, mBoardSize, mCorners,
- CV_CALIB_CB_ADAPTIVE_THRESH +
- CV_CALIB_CB_NORMALIZE_IMAGE +
- CV_CALIB_CB_FILTER_QUADS +
- CV_CALIB_CB_FAST_CHECK,
+ cv::CALIB_CB_ADAPTIVE_THRESH +
+ cv::CALIB_CB_NORMALIZE_IMAGE +
+ cv::CALIB_CB_FILTER_QUADS +
+ cv::CALIB_CB_FAST_CHECK,
useOpenCV);
if (mCornersFound)
@@ -141,24 +141,24 @@ Chessboard::findChessboardCornersImproved(const cv::Mat& image,
// Image histogram normalization and
// BGR to Grayscale image conversion (if applicable)
// MARTIN: Set to "false"
- if (image.channels() != 1 || (flags & CV_CALIB_CB_NORMALIZE_IMAGE))
+ if (image.channels() != 1 || (flags & cv::CALIB_CB_NORMALIZE_IMAGE))
{
cv::Mat norm_img(image.rows, image.cols, CV_8UC1);
if (image.channels() != 1)
{
- cv::cvtColor(image, norm_img, CV_BGR2GRAY);
+ cv::cvtColor(image, norm_img, cv::COLOR_BGR2GRAY);
img = norm_img;
}
- if (flags & CV_CALIB_CB_NORMALIZE_IMAGE)
+ if (flags & cv::CALIB_CB_NORMALIZE_IMAGE)
{
cv::equalizeHist(image, norm_img);
img = norm_img;
}
}
- if (flags & CV_CALIB_CB_FAST_CHECK)
+ if (flags & cv::CALIB_CB_FAST_CHECK)
{
if (!checkChessboard(img, patternSize))
{
@@ -189,13 +189,13 @@ Chessboard::findChessboardCornersImproved(const cv::Mat& image,
cv::Mat thresh_img;
// convert the input grayscale image to binary (black-n-white)
- if (flags & CV_CALIB_CB_ADAPTIVE_THRESH)
+ if (flags & cv::CALIB_CB_ADAPTIVE_THRESH)
{
int blockSize = lround(prevSqrSize == 0 ?
std::min(img.cols,img.rows)*(k%2 == 0 ? 0.2 : 0.1): prevSqrSize*2)|1;
// convert to binary
- cv::adaptiveThreshold(img, thresh_img, 255, CV_ADAPTIVE_THRESH_MEAN_C, CV_THRESH_BINARY, blockSize, (k/2)*5);
+ cv::adaptiveThreshold(img, thresh_img, 255, cv::ADAPTIVE_THRESH_MEAN_C, cv::THRESH_BINARY, blockSize, (k/2)*5);
}
else
{
@@ -204,7 +204,7 @@ Chessboard::findChessboardCornersImproved(const cv::Mat& image,
int thresh_level = lround(mean - 10);
thresh_level = std::max(thresh_level, 10);
- cv::threshold(img, thresh_img, thresh_level, 255, CV_THRESH_BINARY);
+ cv::threshold(img, thresh_img, thresh_level, 255, cv::THRESH_BINARY);
}
// MARTIN's Code
@@ -212,8 +212,8 @@ Chessboard::findChessboardCornersImproved(const cv::Mat& image,
// homogeneous dilation is performed, which is crucial for small,
// distorted checkers. Use the CROSS kernel first, since its action
// on the image is more subtle
- cv::Mat kernel1 = cv::getStructuringElement(CV_SHAPE_CROSS, cv::Size(3,3), cv::Point(1,1));
- cv::Mat kernel2 = cv::getStructuringElement(CV_SHAPE_RECT, cv::Size(3,3), cv::Point(1,1));
+ cv::Mat kernel1 = cv::getStructuringElement(cv::MORPH_CROSS, cv::Size(3,3), cv::Point(1,1));
+ cv::Mat kernel2 = cv::getStructuringElement(cv::MORPH_RECT, cv::Size(3,3), cv::Point(1,1));
if (dilations >= 1)
cv::dilate(thresh_img, thresh_img, kernel1);
@@ -317,7 +317,7 @@ Chessboard::findChessboardCornersImproved(const cv::Mat& image,
}
cv::cornerSubPix(image, corners, cv::Size(11, 11), cv::Size(-1,-1),
- cv::TermCriteria(CV_TERMCRIT_EPS + CV_TERMCRIT_ITER, 30, 0.1));
+ cv::TermCriteria(cv::TermCriteria::Type::EPS + cv::TermCriteria::Type::MAX_ITER, 30, 0.1));
return true;
}
@@ -1172,7 +1172,7 @@ Chessboard::generateQuads(std::vector<ChessboardQuadPtr>& quads,
    std::vector<cv::Vec4i> hierarchy;
// Initialize contour retrieving routine
- cv::findContours(image, contours, hierarchy, CV_RETR_CCOMP, CV_CHAIN_APPROX_SIMPLE);
+ cv::findContours(image, contours, hierarchy, cv::RETR_CCOMP, cv::CHAIN_APPROX_SIMPLE);
    std::vector< std::vector<cv::Point> > quadContours;
@@ -1238,7 +1238,7 @@ Chessboard::generateQuads(std::vector<ChessboardQuadPtr>& quads,
dp = pt[1] - pt[2];
double d4 = sqrt(dp.dot(dp));
- if (!(flags & CV_CALIB_CB_FILTER_QUADS) ||
+ if (!(flags & cv::CALIB_CB_FILTER_QUADS) ||
(d3*4 > d4 && d4*4 > d3 && d3*d4 < area*1.5 && area > minSize &&
d1 >= 0.15 * p && d2 >= 0.15 * p))
{
@@ -1583,20 +1583,20 @@ Chessboard::checkChessboard(const cv::Mat& image, cv::Size patternSize) const
bool result = false;
for (float threshLevel = blackLevel; threshLevel < whiteLevel && !result; threshLevel += 20.0f)
{
- cv::threshold(white, thresh, threshLevel + blackWhiteGap, 255, CV_THRESH_BINARY);
+ cv::threshold(white, thresh, threshLevel + blackWhiteGap, 255, cv::THRESH_BINARY);
        std::vector< std::vector<cv::Point> > contours;
        std::vector<cv::Vec4i> hierarchy;
// Initialize contour retrieving routine
- cv::findContours(thresh, contours, hierarchy, CV_RETR_CCOMP, CV_CHAIN_APPROX_SIMPLE);
+ cv::findContours(thresh, contours, hierarchy, cv::RETR_CCOMP, cv::CHAIN_APPROX_SIMPLE);
        std::vector<std::pair<float, int> > quads;
getQuadrangleHypotheses(contours, quads, 1);
- cv::threshold(black, thresh, threshLevel, 255, CV_THRESH_BINARY_INV);
+ cv::threshold(black, thresh, threshLevel, 255, cv::THRESH_BINARY_INV);
- cv::findContours(thresh, contours, hierarchy, CV_RETR_CCOMP, CV_CHAIN_APPROX_SIMPLE);
+ cv::findContours(thresh, contours, hierarchy, cv::RETR_CCOMP, cv::CHAIN_APPROX_SIMPLE);
getQuadrangleHypotheses(contours, quads, 0);
const size_t min_quads_count = patternSize.width * patternSize.height / 2;
diff --git a/camera_models/src/intrinsic_calib.cc b/camera_models/src/intrinsic_calib.cc
index 7590a64..2742d3d 100644
--- a/camera_models/src/intrinsic_calib.cc
+++ b/camera_models/src/intrinsic_calib.cc
@@ -274,7 +274,7 @@ main( int argc, char** argv )
0.5,
cv::Scalar( 255, 255, 255 ),
1,
- CV_AA );
+ cv::LINE_AA );
cv::imshow( "Image", cbImages.at( i ) );
cv::waitKey( 0 );
}
diff --git a/loop_fusion/CMakeLists.txt b/loop_fusion/CMakeLists.txt
index a4137fe..4abf913 100644
--- a/loop_fusion/CMakeLists.txt
+++ b/loop_fusion/CMakeLists.txt
@@ -15,8 +15,8 @@ find_package(catkin REQUIRED COMPONENTS
roslib
)
-#find_package(OpenCV)
-include(/home/dji/opencv/build/OpenCVConfig.cmake)
+find_package(OpenCV REQUIRED)
+#include(/home/dji/opencv/build/OpenCVConfig.cmake)
find_package(Ceres REQUIRED)
diff --git a/loop_fusion/src/ThirdParty/DVision/BRIEF.cpp b/loop_fusion/src/ThirdParty/DVision/BRIEF.cpp
index 6af73c4..49940d1 100644
--- a/loop_fusion/src/ThirdParty/DVision/BRIEF.cpp
+++ b/loop_fusion/src/ThirdParty/DVision/BRIEF.cpp
@@ -50,7 +50,7 @@ void BRIEF::compute(const cv::Mat &image,
cv::Mat aux;
if(image.depth() == 3)
{
- cv::cvtColor(image, aux, CV_RGB2GRAY);
+ cv::cvtColor(image, aux, cv::COLOR_RGB2GRAY);
}
else
{
diff --git a/loop_fusion/src/keyframe.cpp b/loop_fusion/src/keyframe.cpp
index 862b795..568a84e 100644
--- a/loop_fusion/src/keyframe.cpp
+++ b/loop_fusion/src/keyframe.cpp
@@ -433,7 +433,7 @@ bool KeyFrame::findConnection(KeyFrame* old_kf)
cv::Mat old_img = old_kf->image;
cv::hconcat(image, gap_image, gap_image);
cv::hconcat(gap_image, old_img, gray_img);
- cvtColor(gray_img, loop_match_img, CV_GRAY2RGB);
+ cvtColor(gray_img, loop_match_img, cv::COLOR_GRAY2RGB);
for(int i = 0; i< (int)matched_2d_cur.size(); i++)
{
cv::Point2f cur_pt = matched_2d_cur[i];
@@ -452,9 +452,9 @@ bool KeyFrame::findConnection(KeyFrame* old_kf)
cv::line(loop_match_img, matched_2d_cur[i], old_pt, cv::Scalar(0, 255, 0), 2, 8, 0);
}
cv::Mat notation(50, COL + gap + COL, CV_8UC3, cv::Scalar(255, 255, 255));
- putText(notation, "current frame: " + to_string(index) + " sequence: " + to_string(sequence), cv::Point2f(20, 30), CV_FONT_HERSHEY_SIMPLEX, 1, cv::Scalar(255), 3);
+ putText(notation, "current frame: " + to_string(index) + " sequence: " + to_string(sequence), cv::Point2f(20, 30), cv::FONT_HERSHEY_SIMPLEX, 1, cv::Scalar(255), 3);
- putText(notation, "previous frame: " + to_string(old_kf->index) + " sequence: " + to_string(old_kf->sequence), cv::Point2f(20 + COL + gap, 30), CV_FONT_HERSHEY_SIMPLEX, 1, cv::Scalar(255), 3);
+ putText(notation, "previous frame: " + to_string(old_kf->index) + " sequence: " + to_string(old_kf->sequence), cv::Point2f(20 + COL + gap, 30), cv::FONT_HERSHEY_SIMPLEX, 1, cv::Scalar(255), 3);
cv::vconcat(notation, loop_match_img, loop_match_img);
/*
diff --git a/loop_fusion/src/pose_graph.cpp b/loop_fusion/src/pose_graph.cpp
index b5436fd..63be1de 100644
--- a/loop_fusion/src/pose_graph.cpp
+++ b/loop_fusion/src/pose_graph.cpp
@@ -340,7 +340,7 @@ int PoseGraph::detectLoop(KeyFrame* keyframe, int frame_index)
{
int feature_num = keyframe->keypoints.size();
cv::resize(keyframe->image, compressed_image, cv::Size(376, 240));
- putText(compressed_image, "feature_num:" + to_string(feature_num), cv::Point2f(10, 10), CV_FONT_HERSHEY_SIMPLEX, 0.4, cv::Scalar(255));
+ putText(compressed_image, "feature_num:" + to_string(feature_num), cv::Point2f(10, 10), cv::FONT_HERSHEY_SIMPLEX, 0.4, cv::Scalar(255));
image_pool[frame_index] = compressed_image;
}
TicToc tmp_t;
@@ -361,7 +361,7 @@ int PoseGraph::detectLoop(KeyFrame* keyframe, int frame_index)
{
loop_result = compressed_image.clone();
if (ret.size() > 0)
- putText(loop_result, "neighbour score:" + to_string(ret[0].Score), cv::Point2f(10, 50), CV_FONT_HERSHEY_SIMPLEX, 0.5, cv::Scalar(255));
+ putText(loop_result, "neighbour score:" + to_string(ret[0].Score), cv::Point2f(10, 50), cv::FONT_HERSHEY_SIMPLEX, 0.5, cv::Scalar(255));
}
// visual loop result
if (DEBUG_IMAGE)
@@ -371,7 +371,7 @@ int PoseGraph::detectLoop(KeyFrame* keyframe, int frame_index)
int tmp_index = ret[i].Id;
auto it = image_pool.find(tmp_index);
cv::Mat tmp_image = (it->second).clone();
- putText(tmp_image, "index: " + to_string(tmp_index) + "loop score:" + to_string(ret[i].Score), cv::Point2f(10, 50), CV_FONT_HERSHEY_SIMPLEX, 0.5, cv::Scalar(255));
+ putText(tmp_image, "index: " + to_string(tmp_index) + "loop score:" + to_string(ret[i].Score), cv::Point2f(10, 50), cv::FONT_HERSHEY_SIMPLEX, 0.5, cv::Scalar(255));
cv::hconcat(loop_result, tmp_image, loop_result);
}
}
@@ -388,7 +388,7 @@ int PoseGraph::detectLoop(KeyFrame* keyframe, int frame_index)
{
auto it = image_pool.find(tmp_index);
cv::Mat tmp_image = (it->second).clone();
- putText(tmp_image, "loop score:" + to_string(ret[i].Score), cv::Point2f(10, 50), CV_FONT_HERSHEY_SIMPLEX, 0.4, cv::Scalar(255));
+ putText(tmp_image, "loop score:" + to_string(ret[i].Score), cv::Point2f(10, 50), cv::FONT_HERSHEY_SIMPLEX, 0.4, cv::Scalar(255));
cv::hconcat(loop_result, tmp_image, loop_result);
}
}
@@ -424,7 +424,7 @@ void PoseGraph::addKeyFrameIntoVoc(KeyFrame* keyframe)
{
int feature_num = keyframe->keypoints.size();
cv::resize(keyframe->image, compressed_image, cv::Size(376, 240));
- putText(compressed_image, "feature_num:" + to_string(feature_num), cv::Point2f(10, 10), CV_FONT_HERSHEY_SIMPLEX, 0.4, cv::Scalar(255));
+ putText(compressed_image, "feature_num:" + to_string(feature_num), cv::Point2f(10, 10), cv::FONT_HERSHEY_SIMPLEX, 0.4, cv::Scalar(255));
image_pool[keyframe->index] = compressed_image;
}
diff --git a/vins_estimator/CMakeLists.txt b/vins_estimator/CMakeLists.txt
index 165a4c2..4d23a95 100644
--- a/vins_estimator/CMakeLists.txt
+++ b/vins_estimator/CMakeLists.txt
@@ -16,8 +16,8 @@ find_package(catkin REQUIRED COMPONENTS
camera_models
image_transport)
-#find_package(OpenCV REQUIRED)
-include(/home/dji/opencv/build/OpenCVConfig.cmake)
+find_package(OpenCV REQUIRED)
+#include(/home/dji/opencv/build/OpenCVConfig.cmake)
#message(WARNING "OpenCV_VERSION: ${OpenCV_VERSION}")
find_package(Ceres REQUIRED)
diff --git a/vins_estimator/src/KITTIGPSTest.cpp b/vins_estimator/src/KITTIGPSTest.cpp
index 8c75576..da82216 100644
--- a/vins_estimator/src/KITTIGPSTest.cpp
+++ b/vins_estimator/src/KITTIGPSTest.cpp
@@ -119,8 +119,8 @@ int main(int argc, char** argv)
printf("%s\n", leftImagePath.c_str() );
printf("%s\n", rightImagePath.c_str() );
- imLeft = cv::imread(leftImagePath, CV_LOAD_IMAGE_GRAYSCALE );
- imRight = cv::imread(rightImagePath, CV_LOAD_IMAGE_GRAYSCALE );
+ imLeft = cv::imread(leftImagePath, cv::IMREAD_GRAYSCALE );
+ imRight = cv::imread(rightImagePath, cv::IMREAD_GRAYSCALE );
double imgTime = imageTimeList[i] - baseTime;
diff --git a/vins_estimator/src/KITTIOdomTest.cpp b/vins_estimator/src/KITTIOdomTest.cpp
index df4c338..073a2f3 100644
--- a/vins_estimator/src/KITTIOdomTest.cpp
+++ b/vins_estimator/src/KITTIOdomTest.cpp
@@ -92,12 +92,12 @@ int main(int argc, char** argv)
//printf("%s\n", leftImagePath.c_str() );
//printf("%s\n", rightImagePath.c_str() );
- imLeft = cv::imread(leftImagePath, CV_LOAD_IMAGE_GRAYSCALE );
+ imLeft = cv::imread(leftImagePath, cv::IMREAD_GRAYSCALE );
sensor_msgs::ImagePtr imLeftMsg = cv_bridge::CvImage(std_msgs::Header(), "mono8", imLeft).toImageMsg();
imLeftMsg->header.stamp = ros::Time(imageTimeList[i]);
pubLeftImage.publish(imLeftMsg);
- imRight = cv::imread(rightImagePath, CV_LOAD_IMAGE_GRAYSCALE );
+ imRight = cv::imread(rightImagePath, cv::IMREAD_GRAYSCALE );
sensor_msgs::ImagePtr imRightMsg = cv_bridge::CvImage(std_msgs::Header(), "mono8", imRight).toImageMsg();
imRightMsg->header.stamp = ros::Time(imageTimeList[i]);
pubRightImage.publish(imRightMsg);
diff --git a/vins_estimator/src/estimator/estimator.cpp b/vins_estimator/src/estimator/estimator.cpp
index 0dcaa38..cdf925a 100644
--- a/vins_estimator/src/estimator/estimator.cpp
+++ b/vins_estimator/src/estimator/estimator.cpp
@@ -55,10 +55,14 @@ void Estimator::inputImage(double t, const cv::Mat &_img, const cv::Mat &_img1)
inputImageCnt++;
    map<int, vector<pair<int, Eigen::Matrix<double, 7, 1>>>> featureFrame;
// TicToc featureTrackerTime;
+
if(_img1.empty())
featureFrame = featureTracker.trackImage(t, _img);
else
featureFrame = featureTracker.trackImage(t, _img, _img1);
+
+ // printf("featureTracker time: %fms\n", featureTrackerTime.toc());
+
// if(begin_time_count--<=0)
// {
// sum_t_feature += featureTrackerTime.toc();
diff --git a/vins_estimator/src/estimator/parameters.cpp b/vins_estimator/src/estimator/parameters.cpp
index 46b09f1..fd49b42 100644
--- a/vins_estimator/src/estimator/parameters.cpp
+++ b/vins_estimator/src/estimator/parameters.cpp
@@ -50,6 +50,8 @@ int MIN_DIST;
double F_THRESHOLD;
int SHOW_TRACK;
int FLOW_BACK;
+int LK_SIZE;
+int LK_N;
template <typename T>
@@ -91,6 +93,14 @@ void readParameters(std::string config_file)
F_THRESHOLD = fsSettings["F_threshold"];
SHOW_TRACK = fsSettings["show_track"];
FLOW_BACK = fsSettings["flow_back"];
+ if (fsSettings["lk_size"].empty())
+ LK_SIZE = 21;
+ else
+ LK_SIZE = fsSettings["lk_size"];
+ if (fsSettings["lk_n"].empty())
+ LK_N = 3;
+ else
+ LK_N = fsSettings["lk_n"];
MULTIPLE_THREAD = fsSettings["multiple_thread"];
diff --git a/vins_estimator/src/estimator/parameters.h b/vins_estimator/src/estimator/parameters.h
index e19c544..537ab86 100644
--- a/vins_estimator/src/estimator/parameters.h
+++ b/vins_estimator/src/estimator/parameters.h
@@ -68,6 +68,8 @@ extern int MIN_DIST;
extern double F_THRESHOLD;
extern int SHOW_TRACK;
extern int FLOW_BACK;
+extern int LK_SIZE;
+extern int LK_N;
void readParameters(std::string config_file);
diff --git a/vins_estimator/src/factor/projection_factor.h b/vins_estimator/src/factor/projection_factor.h
index 952cecc..c4cb134 100644
--- a/vins_estimator/src/factor/projection_factor.h
+++ b/vins_estimator/src/factor/projection_factor.h
@@ -14,7 +14,7 @@
#include
#include "../utility/utility.h"
#include "../utility/tic_toc.h"
-#include "../estimator/parameters.h"
+// #include "../estimator/parameters.h"
class ProjectionFactor : public ceres::SizedCostFunction<2, 7, 7, 7, 1>
{
diff --git a/vins_estimator/src/featureTracker/feature_tracker.cpp b/vins_estimator/src/featureTracker/feature_tracker.cpp
index 51a7402..0158906 100644
--- a/vins_estimator/src/featureTracker/feature_tracker.cpp
+++ b/vins_estimator/src/featureTracker/feature_tracker.cpp
@@ -19,12 +19,19 @@ bool FeatureTracker::inBorder(const cv::Point2f &pt)
return BORDER_SIZE <= img_x && img_x < col - BORDER_SIZE && BORDER_SIZE <= img_y && img_y < row - BORDER_SIZE;
}
-double distance(cv::Point2f pt1, cv::Point2f pt2)
+// double distance(cv::Point2f pt1, cv::Point2f pt2)
+// {
+// //printf("pt1: %f %f pt2: %f %f\n", pt1.x, pt1.y, pt2.x, pt2.y);
+// double dx = pt1.x - pt2.x;
+// double dy = pt1.y - pt2.y;
+// return sqrt(dx * dx + dy * dy);
+// }
+
+double distance_2(const cv::Point2f& pt1, const cv::Point2f& pt2)
{
- //printf("pt1: %f %f pt2: %f %f\n", pt1.x, pt1.y, pt2.x, pt2.y);
double dx = pt1.x - pt2.x;
double dy = pt1.y - pt2.y;
- return sqrt(dx * dx + dy * dy);
+ return dx * dx + dy * dy;
}
void reduceVector(vector<cv::Point2f> &v, vector<uchar> status)
@@ -104,36 +111,37 @@ double FeatureTracker::distance(cv::Point2f &pt1, cv::Point2f &pt2)
map<int, vector<pair<int, Eigen::Matrix<double, 7, 1>>>> FeatureTracker::trackImage(double _cur_time, const cv::Mat &_img, const cv::Mat &_img1)
{
- TicToc t_r;
+ // TicToc t_r;
+
cur_time = _cur_time;
cur_img = _img;
row = cur_img.rows;
col = cur_img.cols;
cv::Mat rightImg = _img1;
- /*
+ cur_pts.clear();
+
+ cv::cuda::GpuMat cur_gpu_img;
+ cv::cuda::GpuMat right_gpu_Img;
+ if (USE_GPU_ACC_FLOW)
{
- cv::Ptr clahe = cv::createCLAHE(3.0, cv::Size(8, 8));
- clahe->apply(cur_img, cur_img);
- if(!rightImg.empty())
- clahe->apply(rightImg, rightImg);
+ cur_gpu_img = cv::cuda::GpuMat(cur_img);
+ right_gpu_Img = cv::cuda::GpuMat(rightImg);
+ if (d_pyrLK_sparse.empty())
+ d_pyrLK_sparse = cv::cuda::SparsePyrLKOpticalFlow::create(cv::Size(LK_SIZE, LK_SIZE), LK_N, 30, false);
+ if (d_pyrLK_sparse_prediction.empty())
+ d_pyrLK_sparse_prediction = cv::cuda::SparsePyrLKOpticalFlow::create(cv::Size(LK_SIZE, LK_SIZE), 1, 30, true);
}
- */
- cur_pts.clear();
if (prev_pts.size() > 0)
{
        vector<uchar> status;
if(!USE_GPU_ACC_FLOW)
{
- TicToc t_o;
-
            vector<float> err;
if(hasPrediction)
{
cur_pts = predict_pts;
- cv::calcOpticalFlowPyrLK(prev_img, cur_img, prev_pts, cur_pts, status, err, cv::Size(21, 21), 1,
- cv::TermCriteria(cv::TermCriteria::COUNT+cv::TermCriteria::EPS, 30, 0.01), cv::OPTFLOW_USE_INITIAL_FLOW);
-
+ cv::calcOpticalFlowPyrLK(prev_img, cur_img, prev_pts, cur_pts, status, err, cv::Size(LK_SIZE, LK_SIZE), 1, cv::TermCriteria(cv::TermCriteria::COUNT+cv::TermCriteria::EPS, 30, 0.01), cv::OPTFLOW_USE_INITIAL_FLOW);
int succ_num = 0;
for (size_t i = 0; i < status.size(); i++)
{
@@ -141,24 +149,21 @@ map<int, vector<pair<int, Eigen::Matrix<double, 7, 1>>>> FeatureTracker::trackIm
succ_num++;
}
if (succ_num < 10)
- cv::calcOpticalFlowPyrLK(prev_img, cur_img, prev_pts, cur_pts, status, err, cv::Size(21, 21), 3);
+ cv::calcOpticalFlowPyrLK(prev_img, cur_img, prev_pts, cur_pts, status, err, cv::Size(LK_SIZE, LK_SIZE), LK_N);
}
else
- cv::calcOpticalFlowPyrLK(prev_img, cur_img, prev_pts, cur_pts, status, err, cv::Size(21, 21), 3);
+ cv::calcOpticalFlowPyrLK(prev_img, cur_img, prev_pts, cur_pts, status, err, cv::Size(LK_SIZE, LK_SIZE), LK_N);
// reverse check
if(FLOW_BACK)
{
            vector<uchar> reverse_status;
            vector<cv::Point2f> reverse_pts = prev_pts;
- cv::calcOpticalFlowPyrLK(cur_img, prev_img, cur_pts, reverse_pts, reverse_status, err, cv::Size(21, 21), 1,
- cv::TermCriteria(cv::TermCriteria::COUNT+cv::TermCriteria::EPS, 30, 0.01), cv::OPTFLOW_USE_INITIAL_FLOW);
- //cv::calcOpticalFlowPyrLK(cur_img, prev_img, cur_pts, reverse_pts, reverse_status, err, cv::Size(21, 21), 3);
+ cv::calcOpticalFlowPyrLK(cur_img, prev_img, cur_pts, reverse_pts, reverse_status, err, cv::Size(LK_SIZE, LK_SIZE), 1, cv::TermCriteria(cv::TermCriteria::COUNT+cv::TermCriteria::EPS, 30, 0.01), cv::OPTFLOW_USE_INITIAL_FLOW);
+ //cv::calcOpticalFlowPyrLK(cur_img, prev_img, cur_pts, reverse_pts, reverse_status, err, cv::Size(LK_SIZE, LK_SIZE), 3);
for(size_t i = 0; i < status.size(); i++)
{
- if(status[i] && reverse_status[i] && distance(prev_pts[i], reverse_pts[i]) <= 0.5)
- {
+ if(status[i] && reverse_status[i] && distance_2(prev_pts[i], reverse_pts[i]) <= 0.25)
status[i] = 1;
- }
else
status[i] = 0;
}
@@ -167,18 +172,17 @@ map<int, vector<pair<int, Eigen::Matrix<double, 7, 1>>>> FeatureTracker::trackIm
}
else
{
- TicToc t_og;
- cv::cuda::GpuMat prev_gpu_img(prev_img);
- cv::cuda::GpuMat cur_gpu_img(cur_img);
+ // TicToc t_og;
+ // cv::cuda::GpuMat cur_gpu_img(cur_img);
+ // cv::cuda::GpuMat prev_gpu_img(prev_img);
cv::cuda::GpuMat prev_gpu_pts(prev_pts);
cv::cuda::GpuMat cur_gpu_pts(cur_pts);
cv::cuda::GpuMat gpu_status;
if(hasPrediction)
{
cur_gpu_pts = cv::cuda::GpuMat(predict_pts);
-            cv::Ptr<cv::cuda::SparsePyrLKOpticalFlow> d_pyrLK_sparse = cv::cuda::SparsePyrLKOpticalFlow::create(
-                cv::Size(21, 21), 1, 30, true);
-            d_pyrLK_sparse->calc(prev_gpu_img, cur_gpu_img, prev_gpu_pts, cur_gpu_pts, gpu_status);
+            // cv::Ptr<cv::cuda::SparsePyrLKOpticalFlow> d_pyrLK_sparse_prediction = cv::cuda::SparsePyrLKOpticalFlow::create(cv::Size(LK_SIZE, LK_SIZE), 1, 30, true);
+ d_pyrLK_sparse_prediction->calc(prev_gpu_img, cur_gpu_img, prev_gpu_pts, cur_gpu_pts, gpu_status);
            vector<cv::Point2f> tmp_cur_pts(cur_gpu_pts.cols);
cur_gpu_pts.download(tmp_cur_pts);
@@ -196,8 +200,7 @@ map<int, vector<pair<int, Eigen::Matrix<double, 7, 1>>>> FeatureTracker::trackIm
}
if (succ_num < 10)
{
-                cv::Ptr<cv::cuda::SparsePyrLKOpticalFlow> d_pyrLK_sparse = cv::cuda::SparsePyrLKOpticalFlow::create(
-                    cv::Size(21, 21), 3, 30, false);
+                // cv::Ptr<cv::cuda::SparsePyrLKOpticalFlow> d_pyrLK_sparse = cv::cuda::SparsePyrLKOpticalFlow::create(cv::Size(LK_SIZE, LK_SIZE), LK_N, 30, false);
d_pyrLK_sparse->calc(prev_gpu_img, cur_gpu_img, prev_gpu_pts, cur_gpu_pts, gpu_status);
                vector<cv::Point2f> tmp1_cur_pts(cur_gpu_pts.cols);
@@ -211,8 +214,7 @@ map<int, vector<pair<int, Eigen::Matrix<double, 7, 1>>>> FeatureTracker::trackIm
}
else
{
-                cv::Ptr<cv::cuda::SparsePyrLKOpticalFlow> d_pyrLK_sparse = cv::cuda::SparsePyrLKOpticalFlow::create(
-                    cv::Size(21, 21), 3, 30, false);
+                // cv::Ptr<cv::cuda::SparsePyrLKOpticalFlow> d_pyrLK_sparse = cv::cuda::SparsePyrLKOpticalFlow::create(cv::Size(LK_SIZE, LK_SIZE), LK_N, 30, false);
d_pyrLK_sparse->calc(prev_gpu_img, cur_gpu_img, prev_gpu_pts, cur_gpu_pts, gpu_status);
                vector<cv::Point2f> tmp1_cur_pts(cur_gpu_pts.cols);
@@ -227,9 +229,8 @@ map<int, vector<pair<int, Eigen::Matrix<double, 7, 1>>>> FeatureTracker::trackIm
{
cv::cuda::GpuMat reverse_gpu_status;
cv::cuda::GpuMat reverse_gpu_pts = prev_gpu_pts;
-            cv::Ptr<cv::cuda::SparsePyrLKOpticalFlow> d_pyrLK_sparse = cv::cuda::SparsePyrLKOpticalFlow::create(
-                cv::Size(21, 21), 1, 30, true);
-            d_pyrLK_sparse->calc(cur_gpu_img, prev_gpu_img, cur_gpu_pts, reverse_gpu_pts, reverse_gpu_status);
+            // cv::Ptr<cv::cuda::SparsePyrLKOpticalFlow> d_pyrLK_sparse_prediction = cv::cuda::SparsePyrLKOpticalFlow::create(cv::Size(LK_SIZE, LK_SIZE), 1, 30, true);
+ d_pyrLK_sparse_prediction->calc(cur_gpu_img, prev_gpu_img, cur_gpu_pts, reverse_gpu_pts, reverse_gpu_status);
            vector<cv::Point2f> reverse_pts(reverse_gpu_pts.cols);
reverse_gpu_pts.download(reverse_pts);
@@ -239,7 +240,7 @@ map<int, vector<pair<int, Eigen::Matrix<double, 7, 1>>>> FeatureTracker::trackIm
for(size_t i = 0; i < status.size(); i++)
{
- if(status[i] && reverse_status[i] && distance(prev_pts[i], reverse_pts[i]) <= 0.5)
+ if(status[i] && reverse_status[i] && distance_2(prev_pts[i], reverse_pts[i]) <= 0.25)
{
status[i] = 1;
}
@@ -257,9 +258,6 @@ map<int, vector<pair<int, Eigen::Matrix<double, 7, 1>>>> FeatureTracker::trackIm
reduceVector(cur_pts, status);
reduceVector(ids, status);
reduceVector(track_cnt, status);
- // ROS_DEBUG("temporal optical flow costs: %fms", t_o.toc());
-
- //printf("track cnt %d\n", (int)ids.size());
}
for (auto &n : track_cnt)
@@ -267,14 +265,8 @@ map<int, vector<pair<int, Eigen::Matrix<double, 7, 1>>>> FeatureTracker::trackIm
if (1)
{
- //rejectWithF();
- ROS_DEBUG("set mask begins");
- TicToc t_m;
setMask();
- // ROS_DEBUG("set mask costs %fms", t_m.toc());
- // printf("set mask costs %fms\n", t_m.toc());
- ROS_DEBUG("detect feature begins");
-
+
        int n_max_cnt = MAX_CNT - static_cast<int>(cur_pts.size());
if(!USE_GPU)
{
@@ -287,7 +279,7 @@ map<int, vector<pair<int, Eigen::Matrix<double, 7, 1>>>> FeatureTracker::trackIm
cout << "mask type wrong " << endl;
cv::goodFeaturesToTrack(cur_img, n_pts, MAX_CNT - cur_pts.size(), 0.01, MIN_DIST, mask);
// printf("good feature to track costs: %fms\n", t_t.toc());
-                    std::cout << "n_pts size: "<< n_pts.size()<<std::endl;
@@ … @@ map<int, vector<pair<int, Eigen::Matrix<double, 7, 1>>>> FeatureTracker::trackIm
n_pts.clear();
}
- ROS_DEBUG("add feature begins");
- TicToc t_a;
+ // ROS_DEBUG("add feature begins");
+ // TicToc t_a;
addPoints();
// ROS_DEBUG("selectFeature costs: %fms", t_a.toc());
// printf("selectFeature costs: %fms\n", t_a.toc());
@@ -352,17 +344,19 @@ map<int, vector<pair<int, Eigen::Matrix<double, 7, 1>>>> FeatureTracker::trackIm
        vector<uchar> status, statusRightLeft;
if(!USE_GPU_ACC_FLOW)
{
- TicToc t_check;
+ // TicToc t_check;
            vector<float> err;
// cur left ---- cur right
- cv::calcOpticalFlowPyrLK(cur_img, rightImg, cur_pts, cur_right_pts, status, err, cv::Size(21, 21), 3);
+ cv::calcOpticalFlowPyrLK(cur_img, rightImg, cur_pts, cur_right_pts, status, err, cv::Size(LK_SIZE, LK_SIZE), LK_N);
// reverse check cur right ---- cur left
if(FLOW_BACK)
{
- cv::calcOpticalFlowPyrLK(rightImg, cur_img, cur_right_pts, reverseLeftPts, statusRightLeft, err, cv::Size(21, 21), 3);
+ // cv::calcOpticalFlowPyrLK(rightImg, cur_img, cur_right_pts, reverseLeftPts, statusRightLeft, err, cv::Size(LK_SIZE, LK_SIZE), LK_N);
+ reverseLeftPts = cur_pts;
+ cv::calcOpticalFlowPyrLK(rightImg, cur_img, cur_right_pts, reverseLeftPts, statusRightLeft, err, cv::Size(LK_SIZE, LK_SIZE), 1, cv::TermCriteria(cv::TermCriteria::COUNT+cv::TermCriteria::EPS, 30, 0.01), cv::OPTFLOW_USE_INITIAL_FLOW);
for(size_t i = 0; i < status.size(); i++)
{
- if(status[i] && statusRightLeft[i] && inBorder(cur_right_pts[i]) && distance(cur_pts[i], reverseLeftPts[i]) <= 0.5)
+ if(status[i] && statusRightLeft[i] && inBorder(cur_right_pts[i]) && distance_2(cur_pts[i], reverseLeftPts[i]) <= 0.25)
status[i] = 1;
else
status[i] = 0;
@@ -372,14 +366,13 @@ map<int, vector<pair<int, Eigen::Matrix<double, 7, 1>>>> FeatureTracker::trackIm
}
else
{
- TicToc t_og1;
- cv::cuda::GpuMat cur_gpu_img(cur_img);
- cv::cuda::GpuMat right_gpu_Img(rightImg);
+ // TicToc t_og1;
+ // cv::cuda::GpuMat cur_gpu_img(cur_img);
+ // cv::cuda::GpuMat right_gpu_Img(rightImg);
cv::cuda::GpuMat cur_gpu_pts(cur_pts);
cv::cuda::GpuMat cur_right_gpu_pts;
cv::cuda::GpuMat gpu_status;
-            cv::Ptr<cv::cuda::SparsePyrLKOpticalFlow> d_pyrLK_sparse = cv::cuda::SparsePyrLKOpticalFlow::create(
-                cv::Size(21, 21), 3, 30, false);
+            // cv::Ptr<cv::cuda::SparsePyrLKOpticalFlow> d_pyrLK_sparse = cv::cuda::SparsePyrLKOpticalFlow::create(cv::Size(LK_SIZE, LK_SIZE), LK_N, 30, false);
d_pyrLK_sparse->calc(cur_gpu_img, right_gpu_Img, cur_gpu_pts, cur_right_gpu_pts, gpu_status);
            vector<cv::Point2f> tmp_cur_right_pts(cur_right_gpu_pts.cols);
@@ -394,9 +387,10 @@ map<int, vector<pair<int, Eigen::Matrix<double, 7, 1>>>> FeatureTracker::trackIm
{
cv::cuda::GpuMat reverseLeft_gpu_Pts;
cv::cuda::GpuMat status_gpu_RightLeft;
-                cv::Ptr<cv::cuda::SparsePyrLKOpticalFlow> d_pyrLK_sparse = cv::cuda::SparsePyrLKOpticalFlow::create(
-                    cv::Size(21, 21), 3, 30, false);
-                d_pyrLK_sparse->calc(right_gpu_Img, cur_gpu_img, cur_right_gpu_pts, reverseLeft_gpu_Pts, status_gpu_RightLeft);
+                // cv::Ptr<cv::cuda::SparsePyrLKOpticalFlow> d_pyrLK_sparse = cv::cuda::SparsePyrLKOpticalFlow::create(cv::Size(LK_SIZE, LK_SIZE), LK_N, 30, false);
+                reverseLeft_gpu_Pts = cur_right_gpu_pts;
+                // cv::Ptr<cv::cuda::SparsePyrLKOpticalFlow> d_pyrLK_sparse_prediction = cv::cuda::SparsePyrLKOpticalFlow::create(cv::Size(LK_SIZE, LK_SIZE), 1, 30, true);
+ d_pyrLK_sparse_prediction->calc(right_gpu_Img, cur_gpu_img, cur_right_gpu_pts, reverseLeft_gpu_Pts, status_gpu_RightLeft);
            vector<cv::Point2f> tmp_reverseLeft_Pts(reverseLeft_gpu_Pts.cols);
reverseLeft_gpu_Pts.download(tmp_reverseLeft_Pts);
@@ -407,7 +401,7 @@ map<int, vector<pair<int, Eigen::Matrix<double, 7, 1>>>> FeatureTracker::trackIm
statusRightLeft = tmp1_status;
for(size_t i = 0; i < status.size(); i++)
{
- if(status[i] && statusRightLeft[i] && inBorder(cur_right_pts[i]) && distance(cur_pts[i], reverseLeftPts[i]) <= 0.5)
+ if(status[i] && statusRightLeft[i] && inBorder(cur_right_pts[i]) && distance_2(cur_pts[i], reverseLeftPts[i]) <= 0.25)
status[i] = 1;
else
status[i] = 0;
@@ -442,6 +436,9 @@ map<int, vector<pair<int, Eigen::Matrix<double, 7, 1>>>> FeatureTracker::trackIm
prev_time = cur_time;
hasPrediction = false;
+ if (USE_GPU_ACC_FLOW)
+ prev_gpu_img = cur_gpu_img;
+
prevLeftPtsMap.clear();
for(size_t i = 0; i < cur_pts.size(); i++)
prevLeftPtsMap[ids[i]] = cur_pts[i];
@@ -449,6 +446,8 @@ map<int, vector<pair<int, Eigen::Matrix<double, 7, 1>>>> FeatureTracker::trackIm
    map<int, vector<pair<int, Eigen::Matrix<double, 7, 1>>>> featureFrame;
for (size_t i = 0; i < ids.size(); i++)
{
+ if(std::isnan(cur_un_pts[i].x) || std::isnan(cur_un_pts[i].y))
+ continue;
int feature_id = ids[i];
double x, y ,z;
x = cur_un_pts[i].x;
@@ -471,6 +470,8 @@ map<int, vector<pair<int, Eigen::Matrix<double, 7, 1>>>> FeatureTracker::trackIm
{
for (size_t i = 0; i < ids_right.size(); i++)
{
+ if(std::isnan(cur_un_right_pts[i].x) || std::isnan(cur_un_right_pts[i].y))
+ continue;
int feature_id = ids_right[i];
double x, y ,z;
x = cur_un_right_pts[i].x;
@@ -490,7 +491,7 @@ map<int, vector<pair<int, Eigen::Matrix<double, 7, 1>>>> FeatureTracker::trackIm
}
}
- //printf("feature track whole time %f\n", t_r.toc());
+ // printf("feature track whole time %f\n", t_r.toc());
return featureFrame;
}
@@ -641,7 +642,7 @@ void FeatureTracker::drawTrack(const cv::Mat &imLeft, const cv::Mat &imRight,
cv::hconcat(imLeft, imRight, imTrack);
else
imTrack = imLeft.clone();
- cv::cvtColor(imTrack, imTrack, CV_GRAY2RGB);
+ cv::cvtColor(imTrack, imTrack, cv::COLOR_GRAY2RGB);
for (size_t j = 0; j < curLeftPts.size(); j++)
{
@@ -735,4 +736,4 @@ void FeatureTracker::removeOutliers(set<int> &removePtsIds)
cv::Mat FeatureTracker::getTrackImage()
{
return imTrack;
-}
\ No newline at end of file
+}
diff --git a/vins_estimator/src/featureTracker/feature_tracker.h b/vins_estimator/src/featureTracker/feature_tracker.h
index 1fd594c..a33645c 100644
--- a/vins_estimator/src/featureTracker/feature_tracker.h
+++ b/vins_estimator/src/featureTracker/feature_tracker.h
@@ -86,4 +86,8 @@ class FeatureTracker
bool stereo_cam;
int n_id;
bool hasPrediction;
+
+ cv::cuda::GpuMat prev_gpu_img;
+	cv::Ptr<cv::cuda::SparsePyrLKOpticalFlow> d_pyrLK_sparse;
+	cv::Ptr<cv::cuda::SparsePyrLKOpticalFlow> d_pyrLK_sparse_prediction;
};