Hi SUPPORT,
I have used your system extensively on a number of volumetric datasets and I am very pleased with the results. However, I would still like to see whether I can improve the denoising further. Obviously, some parameters, such as the blind-spot size, are highly dependent on the nature of the data. But I was wondering whether the default values for the network capacity (the channel sizes), the depth, and the batch size represent a tradeoff between denoising performance and training/inference time, or whether they are actually an approximate optimum for performance in the face of overfitting and so on. This would be for large volumetric datasets of size, let's say, (1500, 1500, 10000).
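For concreteness, here is roughly what I mean by those parameters. The names and values below are just my own illustrative shorthand, not your actual defaults, and the patch sampler is a generic sketch of how I currently feed random sub-volumes of a stack that large into training, since a (1500, 1500, 10000) volume obviously cannot be trained on whole:

```python
import numpy as np

# Illustrative hyperparameters -- names and values are my own shorthand,
# not necessarily the tool's actual defaults.
config = {
    "blind_spot_size": 3,            # data-dependent, as noted above
    "channel_sizes": (32, 64, 128),  # capacity per resolution level
    "depth": 3,                      # number of down/up-sampling levels
    "batch_size": 8,
    "patch_size": (64, 64, 64),      # training crop from the full volume
}

def random_patches(volume, patch_size, batch_size, rng=None):
    """Sample a batch of random sub-volumes from a large 3D stack."""
    rng = rng or np.random.default_rng()
    pz, py, px = patch_size
    z, y, x = volume.shape
    batch = np.empty((batch_size, pz, py, px), dtype=volume.dtype)
    for i in range(batch_size):
        zi = rng.integers(0, z - pz + 1)
        yi = rng.integers(0, y - py + 1)
        xi = rng.integers(0, x - px + 1)
        batch[i] = volume[zi:zi + pz, yi:yi + py, xi:xi + px]
    return batch

# Small stand-in volume just to show the shapes involved:
volume = np.zeros((128, 128, 256), dtype=np.float32)
batch = random_patches(volume, config["patch_size"], config["batch_size"])
print(batch.shape)  # (8, 64, 64, 64)
```

My question is essentially whether increasing the channel sizes, depth, or batch size beyond the defaults would be expected to help on data of this scale, or whether the defaults already sit near the overfitting-limited optimum.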
If you would prefer, we can also communicate by email; I just thought that any answers might be useful to others here as well.
Thank you for your time and this wonderful tool.