Custom Trained Model Not Working #20

Open
meetmustafa opened this issue Jun 28, 2018 · 4 comments

meetmustafa commented Jun 28, 2018

This application works fine with the mobilenet_quant_v1_224.tflite model. I've trained a custom model following the TensorFlow for Poets Google codelab and created the graph using this script:
IMAGE_SIZE=224
ARCHITECTURE="mobilenet_0.50_${IMAGE_SIZE}"
python -m scripts.retrain \
  --bottleneck_dir=tf_files/bottlenecks \
  --how_many_training_steps=500 \
  --model_dir=tf_files/models/ \
  --summaries_dir=tf_files/training_summaries/"${ARCHITECTURE}" \
  --output_graph=tf_files/retrained_graph.pb \
  --output_labels=tf_files/retrained_labels.txt \
  --architecture="${ARCHITECTURE}" \
  --image_dir=tf_files/flower_photos

To produce the Lite model for this Android Things sample, I followed the tensorflow-for-poets-2-tflite Google codelab and converted the graph using this script:
toco \
  --input_file=tf_files/retrained_graph.pb \
  --output_file=tf_files/optimized_graph.lite \
  --input_format=TENSORFLOW_GRAPHDEF \
  --output_format=TFLITE \
  --input_shape=1,${IMAGE_SIZE},${IMAGE_SIZE},3 \
  --input_array=input \
  --output_array=final_result \
  --inference_type=FLOAT \
  --input_data_type=FLOAT

After capturing an image on the Raspberry Pi 3 Model B, the app crashes with this error:
2018-06-28 12:13:09.115 7685-7735/com.example.androidthings.imageclassifier E/AndroidRuntime: FATAL EXCEPTION: BackgroundThread
Process: com.example.androidthings.imageclassifier, PID: 7685
java.lang.IllegalArgumentException: Failed to get input dimensions. 0-th input should have 602112 bytes, but found 150528 bytes.
at org.tensorflow.lite.NativeInterpreterWrapper.getInputDims(Native Method)
at org.tensorflow.lite.NativeInterpreterWrapper.run(NativeInterpreterWrapper.java:98)
at org.tensorflow.lite.Interpreter.runForMultipleInputsOutputs(Interpreter.java:142)
at org.tensorflow.lite.Interpreter.run(Interpreter.java:120)
at com.example.androidthings.tensorflow.classifier.TensorFlowImageClassifier.doRecognize(TensorFlowImageClassifier.java:99)
at com.example.androidthings.tensorflow.ImageClassifierActivity.onImageAvailable(ImageClassifierActivity.java:244)
at android.media.ImageReader$ListenerHandler.handleMessage(ImageReader.java:812)
at android.os.Handler.dispatchMessage(Handler.java:106)
at android.os.Looper.loop(Looper.java:164)
at android.os.HandlerThread.run(HandlerThread.java:65)

Please help with this; I am a beginner with TensorFlow.
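
For reference, both numbers in the exception describe the same 1 x 224 x 224 x 3 input tensor, just counted with different byte widths per value; a rough sanity check of the arithmetic (the shapes are assumed from the 224x224 setup above):

// Illustration only: a float32 input needs 4 bytes per value,
// while a quantized uint8 input needs 1 byte per value.
int floatInputBytes = 1 * 224 * 224 * 3 * 4;  // = 602112, what the retrained float model expects
int quantInputBytes = 1 * 224 * 224 * 3 * 1;  // = 150528, what the sample app's buffer provides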

@aashutoshrathi

Same here: I am not able to produce a custom .tflite file; I can only get the .lite file, and it does not work.

iskuhis commented Jul 28, 2018

Yes, I have been struggling with this for a couple of hours, but without any result.

@aashutoshrathi

@iskuhis I fixed it; you can check it at https://github.com/aashutoshrathi/vision

lc0 commented Oct 20, 2018

@iskuhis the reason is that the original model for Android Things has quantized inputs, so each input value takes only one byte instead of four. You should either switch the app back to 4-byte inputs or adapt how you export your model.
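
A minimal sketch of the first option (feeding the interpreter a 4-byte float input buffer), assuming a 224x224 RGB bitmap; the method name, fields, and normalization constants below are illustrative rather than the sample's actual code:

import android.graphics.Bitmap;

import java.nio.ByteBuffer;
import java.nio.ByteOrder;

private static final int IMAGE_SIZE = 224;        // assumed MobileNet input size
private static final float IMAGE_MEAN = 128.0f;   // assumed float-MobileNet normalization
private static final float IMAGE_STD = 128.0f;

// Allocate 4 * 224 * 224 * 3 = 602112 bytes (float32) instead of the
// 1-byte-per-channel buffer used for the quantized mobilenet_quant_v1_224 model.
private ByteBuffer convertBitmapToFloatBuffer(Bitmap bitmap) {
    ByteBuffer imgData = ByteBuffer.allocateDirect(4 * IMAGE_SIZE * IMAGE_SIZE * 3);
    imgData.order(ByteOrder.nativeOrder());

    int[] pixels = new int[IMAGE_SIZE * IMAGE_SIZE];
    bitmap.getPixels(pixels, 0, IMAGE_SIZE, 0, 0, IMAGE_SIZE, IMAGE_SIZE);

    for (int pixel : pixels) {
        // Write each RGB channel as a normalized float instead of a raw byte.
        imgData.putFloat((((pixel >> 16) & 0xFF) - IMAGE_MEAN) / IMAGE_STD);
        imgData.putFloat((((pixel >> 8) & 0xFF) - IMAGE_MEAN) / IMAGE_STD);
        imgData.putFloat(((pixel & 0xFF) - IMAGE_MEAN) / IMAGE_STD);
    }
    return imgData;
}

The output changes in the same way: a float model produces a float[1][numLabels] array where the quantized model produced byte[1][numLabels], so the result parsing has to use the matching type.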
