This application works fine with the mobilenet_quant_v1_224.tflite model. I trained a custom model by following the TensorFlow for Poets Google codelab and created the graph with this script:
IMAGE_SIZE=224
ARCHITECTURE="mobilenet_0.50_${IMAGE_SIZE}"
python -m scripts.retrain \
  --bottleneck_dir=tf_files/bottlenecks \
  --how_many_training_steps=500 \
  --model_dir=tf_files/models/ \
  --summaries_dir=tf_files/training_summaries/"${ARCHITECTURE}" \
  --output_graph=tf_files/retrained_graph.pb \
  --output_labels=tf_files/retrained_labels.txt \
  --architecture="${ARCHITECTURE}" \
  --image_dir=tf_files/flower_photos
Then, to produce a Lite model for this Android Things sample, I followed the TensorFlow for Poets 2: TFLite Google codelab and converted the graph with this script:
toco \
  --input_file=tf_files/retrained_graph.pb \
  --output_file=tf_files/optimized_graph.lite \
  --input_format=TENSORFLOW_GRAPHDEF \
  --output_format=TFLITE \
  --input_shape=1,${IMAGE_SIZE},${IMAGE_SIZE},3 \
  --input_array=input \
  --output_array=final_result \
  --inference_type=FLOAT \
  --input_data_type=FLOAT
After capturing an image on a Raspberry Pi 3 Model B, it gives me this error:
2018-06-28 12:13:09.115 7685-7735/com.example.androidthings.imageclassifier E/AndroidRuntime: FATAL EXCEPTION: BackgroundThread
Process: com.example.androidthings.imageclassifier, PID: 7685
java.lang.IllegalArgumentException: Failed to get input dimensions. 0-th input should have 602112 bytes, but found 150528 bytes.
at org.tensorflow.lite.NativeInterpreterWrapper.getInputDims(Native Method)
at org.tensorflow.lite.NativeInterpreterWrapper.run(NativeInterpreterWrapper.java:98)
at org.tensorflow.lite.Interpreter.runForMultipleInputsOutputs(Interpreter.java:142)
at org.tensorflow.lite.Interpreter.run(Interpreter.java:120)
at com.example.androidthings.tensorflow.classifier.TensorFlowImageClassifier.doRecognize(TensorFlowImageClassifier.java:99)
at com.example.androidthings.tensorflow.ImageClassifierActivity.onImageAvailable(ImageClassifierActivity.java:244)
at android.media.ImageReader$ListenerHandler.handleMessage(ImageReader.java:812)
at android.os.Handler.dispatchMessage(Handler.java:106)
at android.os.Looper.loop(Looper.java:164)
at android.os.HandlerThread.run(HandlerThread.java:65)
Please help with this; I am a beginner with TensorFlow.
@iskuhis The reason is that the original Android Things model has quantized inputs, so each input value takes one byte instead of four. You should either switch the app back to a 4-byte (float) input buffer, or adapt how you export your model (e.g. export a quantized model instead).
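The size mismatch is pure arithmetic: a float model expects 224 × 224 × 3 values at 4 bytes each (602,112 bytes), while the sample allocates 1 byte per value, as the quantized model expects (150,528 bytes). A minimal sketch of sizing the input buffer for the float model converted above (the dimensions come from the commands in this thread; the normalization shown in the comment is a common convention, not necessarily the sample's exact code):

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class InputBufferDemo {
    static final int IMAGE_SIZE = 224;     // matches IMAGE_SIZE in the scripts above
    static final int CHANNELS = 3;         // RGB
    static final int BYTES_PER_FLOAT = 4;  // float32 input tensor

    public static void main(String[] args) {
        // Quantized model: 1 byte per channel value.
        int quantBytes = IMAGE_SIZE * IMAGE_SIZE * CHANNELS;
        // Float model: 4 bytes per channel value.
        int floatBytes = quantBytes * BYTES_PER_FLOAT;
        System.out.println(quantBytes + " vs " + floatBytes);

        // A buffer passed to Interpreter.run() for a FLOAT model must use
        // the float size and native byte order:
        ByteBuffer input = ByteBuffer.allocateDirect(floatBytes)
                .order(ByteOrder.nativeOrder());
        // Each pixel channel would then be written with putFloat(...),
        // typically normalized, e.g. input.putFloat((pixel & 0xFF) / 255.0f);
        System.out.println(input.capacity());
    }
}
```

This reproduces the two numbers from the exception message: 150,528 is what the quantized-model code allocates, and 602,112 is what the float model demands.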