Hi,
I'm trying to run inference with the pretrained model on my own data, captured with a RealSense camera.
I've updated run_example.py as suggested, pointing it at my RGB and depth paths and setting the camera parameters from the intrinsics reported by the librealsense API, but the output normal.png is not as expected:
I have checked my depth data; when I deproject to a point cloud it looks fine.
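For reference, the deprojection check mentioned above can be sketched with the standard pinhole model. The intrinsics below are placeholders, not values from this project; substitute the fx, fy, ppx, ppy that librealsense reports for your depth stream:

```python
import numpy as np

# Hypothetical intrinsics -- replace with the fx, fy, ppx, ppy values
# from rs2::video_stream_profile::get_intrinsics() for your camera.
fx, fy, cx, cy = 615.0, 615.0, 320.0, 240.0

def deproject(depth_m: np.ndarray) -> np.ndarray:
    """Back-project a depth map (in metres) to an (H, W, 3) point cloud
    using the standard pinhole model."""
    h, w = depth_m.shape
    v, u = np.mgrid[0:h, 0:w]          # pixel row (v) and column (u) indices
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return np.stack([x, y, depth_m], axis=-1)

# Sanity check: the pixel at the principal point maps straight down
# the optical axis, so its 3-D point is (0, 0, depth).
depth = np.ones((480, 640), dtype=np.float32)
cloud = deproject(depth)
```

If the cloud from this looks right but the normals are still wrong, the mismatch is more likely in the units or dtype of the depth image handed to SNE than in the intrinsics.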
My question:
I save my depth data as raw uint16 values in a binary file, then read it back into a NumPy array where you use cv2.imread, i.e. depth_image = np.fromfile("depthdata.bin", dtype="short").reshape(height, width). Are raw depth values like this OK, or should I first convert to a disparity map or similar before passing to SNE? There's no such step for your sample data, but I don't know how the sample depth image was created from your raw depth data.
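A minimal sketch of the round trip described above, using the file name and a fabricated frame in place of a real capture. One caveat worth checking regardless of the answer: NumPy's "short" is a signed int16, so any raw uint16 depth above 32767 would wrap to a negative value on read-back.

```python
import numpy as np

height, width = 480, 640

# Fabricated depth frame standing in for a RealSense capture:
# raw uint16 values (typically millimetres for the Z16 format).
depth = np.random.default_rng(0).integers(
    0, 5000, size=(height, width), dtype=np.uint16)
depth.tofile("depthdata.bin")

# Read it back with an explicit *unsigned* dtype. With dtype="short"
# (signed int16), depths beyond 32767 mm would silently go negative.
depth_image = np.fromfile("depthdata.bin", dtype=np.uint16).reshape(height, width)

assert np.array_equal(depth, depth_image)
```

For comparison, cv2.imread on a 16-bit PNG returns uint16 only when called with cv2.IMREAD_ANYDEPTH (or cv2.IMREAD_UNCHANGED); with the default flag it downcasts to uint8, which would also corrupt the depth scale.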
Thanks for sharing this project and for any help you can offer :)
Another thought: how noisy can the depth data be? If the model is trained on synthetic data, are the depth values assumed to be 'perfect', or does the model tolerate bumps and holes?
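One way to probe this empirically (a sketch, nothing from this repo): corrupt a clean depth map with sensor-like noise and dropout holes, run both versions through the same pipeline, and compare the resulting normals. The noise magnitudes and hole fraction below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

def corrupt(depth_mm: np.ndarray, noise_std_mm: float = 20.0,
            hole_fraction: float = 0.02) -> np.ndarray:
    """Add Gaussian noise and zero-valued holes to a uint16 depth map,
    roughly mimicking RealSense artefacts (hypothetical magnitudes)."""
    noisy = depth_mm.astype(np.float32)
    noisy += rng.normal(0.0, noise_std_mm, depth_mm.shape)
    holes = rng.random(depth_mm.shape) < hole_fraction
    noisy[holes] = 0.0  # RealSense reports 0 where depth is invalid
    return np.clip(noisy, 0, np.iinfo(np.uint16).max).astype(np.uint16)

clean = np.full((480, 640), 1500, dtype=np.uint16)  # flat wall at 1.5 m
noisy = corrupt(clean)
```

If the normals degrade sharply at realistic noise levels, light pre-filtering (e.g. a median blur that skips zero pixels) before SNE may help.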