The training time at stage 2 using our own dataset is too long #41

Open
2019213039 opened this issue Sep 26, 2024 · 0 comments


@2019213039

First of all, I really want to thank you for your work. You've done a great job!

The problem I ran into is the time it takes to train stage 2 on my own dataset (converted to the DTU dataset format). The dataset consists of 67 views. My environment configuration matches your recommended environment, and the device is an RTX 4060 with 8 GB of video memory. After completing the first stage of training, my model had a total of 227,970 3D Gaussians.
Your paper says that the second stage of training should be very fast:

Speed and Memory
To train a NeRF synthetic scene, our model typically requires approximately 16 minutes and 10 GB of memory. Notably, the visibility baking in the second stage is remarkably brief, lasting only a few seconds. Following this optimization, our model attains real-time rendering across different lighting conditions, achieving 120 frames per second.

Is the training time so long because my GPU memory is too small? If so, what can I do to speed up the second stage of training?
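
For reference, here is a minimal sketch of how I could check whether peak CUDA memory is actually hitting the 8 GB limit during stage 2. It assumes a standard PyTorch training loop; `train_stage2` is just a placeholder for whatever entry point your code uses, not an actual function name from the repository.

```python
import torch

def report_cuda_memory(device: int = 0) -> None:
    # Compare the peak memory PyTorch allocated against the card's total memory.
    total = torch.cuda.get_device_properties(device).total_memory
    peak = torch.cuda.max_memory_allocated(device)
    print(f"peak allocated: {peak / 1024**3:.2f} GiB of "
          f"{total / 1024**3:.2f} GiB total ({100.0 * peak / total:.1f}%)")

# Usage (placeholder training call):
# torch.cuda.reset_peak_memory_stats()
# train_stage2(args)            # stage 2 / visibility-baking step
# report_cuda_memory()
```

If the peak is close to the full 8 GB, that would suggest the slowdown comes from memory pressure rather than from the dataset itself, given that the paper reports needing roughly 10 GB.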

Please let me know if you need more information from me. My WeChat ID is lyc1600378878.
