Parameters for inference on ICL-NUIM and TUM RGB-D data #7
Hi, the details for running inference on those datasets are the same as for ScanNet. As a baseline, are you able to reproduce the inference results on ScanNet? If you are having trouble reproducing the TUM RGB-D and ICL-NUIM results from the paper, my first guess is that you might not be using the correct poses or camera intrinsics. To confirm: are you able to generate coherent point-cloud or TSDF reconstructions for those datasets using the ground-truth depth?
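The ground-truth-depth sanity check suggested above can be sketched as a simple back-projection: lift each depth pixel through the intrinsics and transform it by the camera-to-world pose. This is a minimal illustration with assumed conventions (pinhole intrinsics `K`, 4x4 camera-to-world pose); the function name and argument layout are not from the VoRTX repo.

```python
import numpy as np

def backproject_depth(depth, K, cam_to_world):
    """Back-project a depth map (H, W) into world-space points (N, 3).

    Assumed conventions (illustrative, not taken from the VoRTX code):
    K is a 3x3 pinhole intrinsics matrix, cam_to_world is a 4x4 pose.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    valid = depth > 0
    z = depth[valid]
    # Pinhole model: x = (u - cx) * z / fx, y = (v - cy) * z / fy
    x = (u[valid] - K[0, 2]) * z / K[0, 0]
    y = (v[valid] - K[1, 2]) * z / K[1, 1]
    pts_cam = np.stack([x, y, z], axis=-1)          # (N, 3) in camera frame
    pts_h = np.concatenate([pts_cam, np.ones((len(pts_cam), 1))], axis=-1)
    return (pts_h @ cam_to_world.T)[:, :3]          # world-frame points
```

If the resulting points from several frames fuse into a coherent scene (e.g. when visualized or fed into a TSDF), the poses and intrinsics are consistent; if they don't, the model has no chance of producing good reconstructions either.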
Thank you!
The trained VoRTX model expects the scene's gravitational axis to be aligned with the world Z axis, but the ICL-NUIM poses are aligned to the Y axis instead, so you will need to swap the axes. Just make sure you can get a good reconstruction from the ground-truth depth images with the scene's up axis aligned with world Z; then you know you have the correct poses and intrinsics for VoRTX.
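The Y-up to Z-up swap described above amounts to left-multiplying every camera-to-world pose by a fixed rotation that carries world +Y onto world +Z. A minimal sketch, assuming 4x4 camera-to-world poses (the names here are illustrative, not from the repo):

```python
import numpy as np

# Rotation taking world +Y to world +Z (a -90 degree rotation about X),
# so a Y-up scene becomes Z-up as the trained model expects.
Y_UP_TO_Z_UP = np.array([
    [1.0, 0.0,  0.0, 0.0],
    [0.0, 0.0, -1.0, 0.0],
    [0.0, 1.0,  0.0, 0.0],
    [0.0, 0.0,  0.0, 1.0],
])

def reorient_pose(cam_to_world):
    """Left-multiply a camera-to-world pose so the scene's up axis
    moves from world Y to world Z. Intrinsics are unaffected."""
    return Y_UP_TO_Z_UP @ cam_to_world
```

The same rotation must also be applied to the ground-truth mesh (or any world-frame geometry) before evaluation, so that predictions and ground truth stay in the same frame.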
Ok, got it!
Great! I don't believe we ever rendered depths from those reconstructions, so I never ran into the issue you're describing. Actually, I don't really understand why the negative focal length is a problem. Flipping the depth sounds like a good workaround, though. You might just want to verify the flipped depth by back-projecting it to make sure the points fall exactly on the mesh.
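The "points fall exactly on the mesh" check can be approximated without any mesh library by measuring the nearest-vertex distance from a sample of back-projected points to the ground-truth mesh vertices. This is a crude sketch under the assumption that the mesh is dense enough for vertex distance to proxy surface distance; for exact point-to-face distance something like `trimesh.proximity` would be more appropriate. All names here are illustrative.

```python
import numpy as np

def mean_distance_to_vertices(points, mesh_vertices, sample=2000, seed=0):
    """Sanity check for back-projected depth: mean nearest-vertex distance
    from a random sample of points (N, 3) to mesh vertices (V, 3).
    Near-zero means the flipped depth is consistent with the mesh."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(points), size=min(sample, len(points)), replace=False)
    pts = points[idx]
    # (S, V) pairwise distances; fine for a few thousand sampled points
    d = np.linalg.norm(pts[:, None, :] - mesh_vertices[None, :, :], axis=-1)
    return d.min(axis=1).mean()
```

If the flipped depth is correct this distance should be on the order of the mesh's vertex spacing; a large or frame-dependent offset points to a remaining sign or axis error.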
Ok, thank you!
If you have the code for ICL-NUIM processing, and for reproducing and evaluating the results on these datasets, could you please share it? I tried inverting the negative focal length and rotating so that the z-axis is vertical. The points fall exactly on the mesh after back-projecting, and the reconstruction seems quite good. However, I do not get the same evaluation results as you reported in the paper.
Hi! I face the same problem: the reconstruction quality is good, but evaluation fails because the rendered depth is bad. Have you solved this problem?
Can you give me the preprocessing code, or the processed TUM RGB-D and ICL-NUIM data used for evaluation? Please!
Have you solved this problem? I ran into some difficulties with data processing during my undergraduate graduation project. Could you please provide me with the preprocessing program, or the processed TUM RGB-D or ICL-NUIM data used in the paper? Thank you!
Hello!
Thank you for your work and code!
Could you please explain the details of inference on the ICL-NUIM and TUM RGB-D datasets that you described in your paper?
I tried running inference on the ICL-NUIM scenes using the same checkpoint and the same parameters as in the config for the ScanNet test, but the reconstructions turn out much worse and I do not get the metrics you reported.
Thank you in advance