
Optimizing hyperparameters #19

Open
sdalumlarsen opened this issue Oct 29, 2024 · 1 comment

Comments

@sdalumlarsen

Hi SUPPORT,

I have used your system extensively on a number of volumetric datasets and I am very pleased with the results. However, I would still like to see whether I can improve the denoising further. Obviously, some parameters, such as the blind-spot size, are very dependent on the nature of the data. But I was wondering about the default values for model capacity (the channel sizes), the network depth, and the batch size: are they a trade-off between performance and training/inference time, or do they represent an approximate optimum for denoising quality in the face of overfitting? This would be for large volumetric datasets of roughly (1500, 1500, 10000).
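For context on why the defaults matter at this scale, here is a rough back-of-envelope count of how many training patches a volume of that size yields. The patch shape below is an illustrative guess, not a documented SUPPORT default:

```python
# Rough estimate of how many non-overlapping training patches a volume yields.
# The patch shape is a hypothetical example; substitute the values you train with.

def n_patches(vol_shape, patch_shape):
    """Count non-overlapping patches along each axis (floor division)."""
    return [v // p for v, p in zip(vol_shape, patch_shape)]

vol = (1500, 1500, 10000)   # volume size from the question
patch = (128, 128, 16)      # hypothetical (x, y, t) patch shape
counts = n_patches(vol, patch)
total = counts[0] * counts[1] * counts[2]
print(counts, total)        # [11, 11, 625] → 75625 patches
```

With tens of thousands of patches available, overfitting a default-capacity network is less of a concern than with small datasets, which bears on whether the defaults are a speed trade-off or a quality optimum.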

If you would prefer, we can communicate by email as well; I just thought any potential answers could be useful to others.

Thank you for your time and this wonderful tool.

@trose-neuro

trose-neuro commented Jan 16, 2025

Hi! Let me bump this. It would be great to get some input on tuning, e.g., bs_size, patch_size, etc.
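In the absence of official guidance, one way to tune parameters like these is a small grid search on a validation crop. The sketch below is generic: the candidate values and the scoring stub are placeholders, not SUPPORT's actual API or defaults:

```python
import itertools

# Candidate values are placeholders; substitute ranges appropriate to your data.
grid = {
    "bs_size": [1, 3, 5],        # blind-spot size (strongly data-dependent)
    "patch_size": [61, 128],     # spatial patch size
    "batch_size": [8, 16],
}

def train_and_score(params):
    # Placeholder: in practice, train on a small crop of the volume with
    # `params` and return a validation metric (e.g. PSNR on held-out frames).
    return -params["bs_size"]    # dummy score so the sketch runs end to end

# Evaluate every combination and keep the best-scoring configuration.
best = max(
    (dict(zip(grid, combo)) for combo in itertools.product(*grid.values())),
    key=train_and_score,
)
print(best)
```

Each training run on a small crop is cheap compared to a full-volume run, so even an exhaustive sweep over a handful of candidates per parameter is usually tractable.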
