The model only works when the image/depth pair has a resolution divisible by the 14×14 patch size. I wanted to see if I could improve the quality of the NYUv2 dataset using the Prompt DepthAnything model. How can I minimize the error introduced by padding the image+depth pair up to a multiple of the patch size and running inference on the padded input?
Modify the io_wrapper.py file to add padding helpers, and update the load_image and load_depth functions to use them. This lets the model handle any input size: pad each input up to the nearest multiple of 14, run inference, then crop the padding away from the prediction.
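A minimal sketch of that approach, assuming NumPy arrays with shape (H, W) or (H, W, C); the helper names `pad_to_multiple` and `unpad`, and the `model.infer(...)` call, are hypothetical placeholders for whatever entry points the repo actually exposes. Reflection padding is used instead of zero padding so the model doesn't see a hard black border, which tends to bias depth predictions near the image edges:

```python
import numpy as np

PATCH_SIZE = 14  # ViT patch size used by Depth Anything

def pad_to_multiple(arr, multiple=PATCH_SIZE):
    """Reflection-pad H and W up to the nearest multiple of `multiple`.

    Returns the padded array plus the original (h, w) so the padding
    can be removed after inference.
    """
    h, w = arr.shape[:2]
    pad_h = (multiple - h % multiple) % multiple
    pad_w = (multiple - w % multiple) % multiple
    pad_spec = [(0, pad_h), (0, pad_w)] + [(0, 0)] * (arr.ndim - 2)
    return np.pad(arr, pad_spec, mode="reflect"), (h, w)

def unpad(arr, orig_hw):
    """Crop a padded prediction back to the original resolution."""
    h, w = orig_hw
    return arr[:h, :w]

# Hypothetical usage around inference (model.infer is a placeholder):
# image, orig_hw = pad_to_multiple(image)   # e.g. NYUv2 480x640 -> 490x644
# pred = model.infer(image)
# pred = unpad(pred, orig_hw)               # back to 480x640
```

Because only the bottom and right edges are padded and then cropped away, the pixels inside the original frame keep their exact coordinates, so the recovered prediction aligns with the ground-truth depth without any resampling error.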
Disclaimer: this solution was drafted by an AI, so verify the generated code before using it. It may be incomplete; treat it as inspiration only.