drops_ does not get reset after Forward and Backward; it still keeps the value from the last Forward.
I suggest resetting drops_ at the beginning of Forward.
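For illustration, a minimal sketch of what that reset could look like in a Caffe-style layer. Only drops_ and total_undrop_ come from this thread; the class name and surrounding logic are assumptions, not the actual code in the repo:

```cpp
// Hypothetical sketch: clear the drop mask at the start of every Forward
// so state from the previous iteration cannot leak into this one.
// FractalJoinLayer is an assumed name for illustration.
template <typename Dtype>
void FractalJoinLayer<Dtype>::Forward_cpu(
    const vector<Blob<Dtype>*>& bottom, const vector<Blob<Dtype>*>& top) {
  std::fill(drops_.begin(), drops_.end(), false);  // reset per-path drops
  total_undrop_ = bottom.size();                   // no path dropped yet
  // ... then sample local/global drop-path decisions and join the
  // undropped bottom blobs into top[0] ...
}
```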
This is my implementation of FractalNet with global and local drop-path in Caffe: https://github.com/KangGrandesty/fractalnet.
It may be useful for you.
@KangGrandesty Hi, I read your code. It's cool, but I have one question.
In fractal_join_layer.cpp, I don't understand why "Dtype mult = Dtype(bottom_size)/Dtype(total_undrop_)".
I think it should be "Dtype mult = Dtype(1)/Dtype(total_undrop_)", which makes the top the element-wise mean of the bottom blobs.
@zhanglonghao1992 I just set it like the dropout layer: if half of the neurons are dropped randomly, the rest are scaled up so that the output's expected value stays close to the non-dropout case. I have since changed it to an element-wise mean and updated the code.
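To make the two conventions concrete, here is a hedged sketch of what the join's accumulation loop might look like; only mult, bottom_size, drops_, and total_undrop_ come from this thread, and everything else is assumed for illustration:

```cpp
// Sketch of the join: accumulate the undropped bottom blobs into top[0].
// Only mult, bottom_size, drops_, and total_undrop_ come from the thread;
// the surrounding code is assumed.
const int count = top[0]->count();
Dtype* top_data = top[0]->mutable_cpu_data();
caffe_set(count, Dtype(0), top_data);
// Element-wise mean over the undropped paths (the revised convention):
const Dtype mult = Dtype(1) / Dtype(total_undrop_);
for (int i = 0; i < bottom_size; ++i) {
  if (drops_[i]) { continue; }  // skip dropped paths
  caffe_axpy(count, mult, bottom[i]->cpu_data(), top_data);
}
// The earlier dropout-style convention, mult = bottom_size / total_undrop_,
// instead rescales the surviving paths so the output keeps the expected
// magnitude of a plain sum over all paths.
```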