# drone_nerf

## Overview

Repository for our CS231A final project. We collect image and pose pairs in hardware using an Intel RealSense camera and an OptiTrack motion-capture system. We then post-process the data by segmenting the images and converting the poses to relative frames, and train a NeRF on the resulting image and pose pairs. Finally, we use the trained NeRF to compute a candidate trajectory that makes contact with the target.

## Data Collection

To collect our data, we use the Intel RealSense ROS wrapper to publish images, and record the image and pose topics with rosbag.
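
As a rough sketch of how the bagged data can then be turned into training pairs, the recorded topics can be read back and matched by timestamp. The bag filename and topic names below are placeholders for our setup (the image topic is the RealSense wrapper's default; the pose topic depends on how the mocap poses are published):

```python
# Hypothetical sketch of extracting image/pose pairs from a recorded bag.
# Bag filename and topic names are placeholders for our setup.
import rosbag

IMAGE_TOPIC = "/camera/color/image_raw"      # RealSense ROS wrapper default
POSE_TOPIC = "/vrpn_client_node/drone/pose"  # assumed mocap pose topic

images, poses = [], []
with rosbag.Bag("flight.bag") as bag:
    for topic, msg, t in bag.read_messages(topics=[IMAGE_TOPIC, POSE_TOPIC]):
        if topic == IMAGE_TOPIC:
            images.append((t.to_sec(), msg))
        else:
            poses.append((t.to_sec(), msg))

# Pair each image with the mocap pose closest to it in time.
pairs = [(img, min(poses, key=lambda p: abs(p[0] - ts))[1])
         for ts, img in images]
```

Nearest-timestamp matching is a reasonable default here because the camera and mocap system publish at different rates and are not hardware-synchronized.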

## Post Processing

To segment our images, we run segmentation_code.ipynb; to convert our poses for NeRF training, we run convert_poses.py. We also generate depth bounds for training using the LLFF codebase.
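
As an illustrative sketch of the relative-frame conversion (not the exact contents of convert_poses.py), camera-to-world transforms expressed in the mocap's inertial frame can be re-expressed relative to the first captured pose:

```python
# Illustrative sketch, not the exact contents of convert_poses.py:
# re-express world-frame camera poses relative to the first captured pose.
import numpy as np

def to_relative(poses_world):
    """poses_world: (N, 4, 4) camera-to-world transforms in the mocap frame."""
    ref_inv = np.linalg.inv(poses_world[0])
    # Left-multiplying by the inverse reference makes pose 0 the identity.
    return np.stack([ref_inv @ T for T in poses_world])
```

This makes the first camera pose the identity, so the remaining poses describe motion relative to that reference frame.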

## Training

For training, we use the NeRF codebase with our own config files, located in the training folder.
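
For illustration only, a config in the style the original NeRF codebase reads with configargparse might look like the following; the experiment name, data path, and hyperparameter values are placeholders rather than our actual settings:

```
expname = drone_target
basedir = ./logs
datadir = ./data/drone_target
dataset_type = llff

factor = 4
llffhold = 8

N_rand = 1024
N_samples = 64
N_importance = 64
use_viewdirs = True
raw_noise_std = 1e0
```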

## Trajectory Generation

To generate a candidate trajectory through the target object, we modify code from the original NeRF codebase to find the points whose predicted densities exceed a threshold. We then fit a best-fit line through these points to plan a path that is most likely to make contact with the target object.
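
A minimal sketch of this line fit, assuming we have already queried the trained NeRF for a set of 3D sample points and their predicted densities (the variable names and threshold value are illustrative):

```python
# Hypothetical sketch: fit a line through high-density NeRF samples.
# `points` (M, 3) and `sigmas` (M,) come from querying the trained NeRF
# at sampled 3D locations; the density threshold is illustrative.
import numpy as np

def fit_trajectory(points, sigmas, threshold=50.0):
    occupied = points[sigmas > threshold]   # keep likely-solid samples
    centroid = occupied.mean(axis=0)
    # First right-singular vector of the centered points gives the
    # best-fit (total least squares) line direction.
    _, _, vt = np.linalg.svd(occupied - centroid, full_matrices=False)
    direction = vt[0]
    # Parametric line through the target: x(t) = centroid + t * direction.
    return centroid, direction
```

The SVD gives the direction of maximum variance of the high-density points, so the resulting line passes lengthwise through the densest part of the target.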

## Results

Results are shown for three configurations:

- Inertial-frame poses, unsegmented images, hand-held test images *(result image)*
- Inertial-frame poses, segmented images, hand-held test images *(result image)*
- Relative-frame poses, segmented images, hand-held test images *(result image)*