I am trying to quantize MobileNetV2 at 4-bit width, but when I run `print_qstats(model)` I get the error "A merge layer should be called on a list of inputs".

Additionally, is there a way to implement ReLU6 in QKeras? I am also trying to build a 2-bit quantized model of the same architecture, but the accuracy is very low (fluctuating between 15% and 20%). Do you have any tips for a fully 2-bit quantized model? So far I have been using `QConv2D(kernel_quantizer=quantized_bits(2, 2), bias_quantizer=quantized_po2(2))` with the corresponding activation functions and batch normalization.
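For context on why the 2-bit model collapses to near-random accuracy, here is a minimal NumPy sketch of a generic symmetric 2-bit quantizer. This is an illustration only, not QKeras's exact `quantized_bits` rounding/clipping behavior: with 2 bits there are at most 4 representable weight values, which severely limits model capacity.

```python
import numpy as np

def quantize_2bit(x, scale=1.0):
    # Generic symmetric 2-bit quantizer: integer codes are clipped
    # to {-2, -1, 0, 1}, so only 4 distinct levels exist per scale.
    # (Illustrative only; QKeras's quantized_bits differs in detail.)
    q = np.clip(np.round(x / scale), -2, 1)
    return q * scale

# Quantizing a smooth range of weights collapses them onto 4 levels.
w = np.linspace(-1.0, 1.0, 9)
wq = quantize_2bit(w, scale=0.5)
print(np.unique(wq))  # at most 4 distinct values survive
```

Because every weight in a layer is forced onto so few levels, common mitigations are wider first/last layers kept at higher precision, learned per-channel scales, and knowledge distillation from a full-precision teacher.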