I am using Autoware with a custom CAD model, vehicle, and sensor kit. The sensor kit contains a frontal RGBD camera and a center-upper LiDAR. During the perception phase, with the pipeline set to the camera_lidar_fusion working mode, if an obstacle moves away from the frontal position and out of the camera's FOV, perception is interrupted: both detection and tracking stop, and some elements behind the vehicle are never detected. I imagine this is caused by using only one camera.
However, is there a way to fall back to LiDAR-only perception when no camera info is received (while staying in camera_lidar_fusion mode)? Alternatively, how many cameras would I need to obtain good results and accuracy in the perception phase?
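To make the "no camera info received" condition concrete, here is a minimal sketch of a ROS 2 watchdog node that detects when CameraInfo messages stop arriving. This is not an Autoware API, just an illustration of the fallback trigger I have in mind; the topic name and the 2 s staleness threshold are assumptions for my setup.

```python
# Sketch: detect when CameraInfo messages stop arriving, i.e. the
# condition under which a lidar-only fallback would be wanted.
# The topic name below is an assumption; substitute your camera driver's.
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import CameraInfo


class CameraInfoWatchdog(Node):
    def __init__(self):
        super().__init__('camera_info_watchdog')
        self.last_stamp = None
        # Hypothetical topic name for the frontal RGBD camera.
        self.create_subscription(
            CameraInfo, '/sensing/camera/front/camera_info',
            self.on_camera_info, 10)
        # Check once per second whether camera data is still flowing.
        self.create_timer(1.0, self.check_timeout)

    def on_camera_info(self, msg):
        # Record the arrival time of the latest camera info message.
        self.last_stamp = self.get_clock().now()

    def check_timeout(self):
        if self.last_stamp is None:
            self.get_logger().warn('No CameraInfo received yet')
            return
        age = (self.get_clock().now() - self.last_stamp).nanoseconds * 1e-9
        if age > 2.0:  # 2 s without camera data: fusion input is stale
            self.get_logger().warn(
                f'CameraInfo stale for {age:.1f} s; this is where a '
                'lidar-only fallback would need to be triggered')


def main():
    rclpy.init()
    rclpy.spin(CameraInfoWatchdog())


if __name__ == '__main__':
    main()
```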