Hello, and thank you for your work on this project!
I'm trying to quantize the encoder stage of a convolutional autoencoder with AutoQKeras. My goal is to reduce the number of bits in those layers, but I'm not sure how to call AutoQKeras correctly so that it minimizes the loss and returns the best model. My current code:
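(A minimal sketch of what I'm running: `autoencoder`, `x_train`, and `x_test` stand in for my actual compiled model and image data, and the `goal` dictionary follows the energy example from the AutoQKeras tutorial.)

```python
from qkeras.autoqkeras import AutoQKeras

# `autoencoder`, `x_train`, `x_test` are placeholders for my compiled Keras
# model and image data; the autoencoder is trained with the input images as
# their own targets.

# Forgiving-factor goal, taken from the AutoQKeras tutorial: allow trading
# model quality for energy reduction in 8% tolerance steps, up to rate = 2x.
goal = {
    "type": "energy",
    "params": {
        "delta_p": 8.0,
        "delta_n": 8.0,
        "rate": 2.0,
        "stress": 1.0,
        "process": "horowitz",
        "parameters_datatype": "float32",
        "activations_datatype": "float32",
        "rf_noise": 0,
        "ref_bits": 8,
        "config": {"default": ["parameters", "activations"]},
    },
}

autoqk = AutoQKeras(
    autoencoder,
    metrics=["mse"],              # reconstruction error; there is no accuracy here
    goal=goal,
    output_dir="autoqkeras_results",
    mode="random",
    max_trials=20,
)

# For an autoencoder the input images are also the targets.
autoqk.fit(x_train, x_train, validation_split=0.1, epochs=10)

best_model = autoqk.get_best_model()
reconstructions = best_model.predict(x_test)  # these come out all black
```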
When I train the autoencoder on the dataset without AutoQKeras, it works fine. However, after quantizing and retrieving the best model, the output of the prediction is all black.
I suspect I need to pass some argument so that AutoQKeras knows it should minimize the loss. Is that correct, and if so, which one?
Best Regards,
Lukas