is it possible to convert tf-agents to tf-lite and run on android device #280
Comments
Yes; you should be able to do this. I'm guessing you care about inference (running a policy) more than training (since tflite doesn't support that anyway). See the PolicySaver class. You can use it to export a SavedModel. You can then use the TFLite converter to convert that SavedModel to a TFLite model. Please report back and let us know if this works for you! |
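A minimal sketch of that flow, assuming `agent` is an already-constructed TF-Agents agent and the export directory is just a placeholder:

```python
import tensorflow as tf
from tf_agents.policies import policy_saver

# Export the inference-only policy as a SavedModel.
saver = policy_saver.PolicySaver(agent.policy, batch_size=1)
saver.save('exported_policy')

# Convert the SavedModel to a TFLite flatbuffer for on-device inference.
converter = tf.lite.TFLiteConverter.from_saved_model('exported_policy')
tflite_policy = converter.convert()
with open('policy.tflite', 'wb') as f:
    f.write(tflite_policy)
```

Here `batch_size=1` is an assumption for single-step on-device inference; adjust it to your deployment.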
Actually, we plan to do both training and inference on device. Do you guys have plans to support training in the near future? Thank you for the response. |
Hi!
We tried to do this (using the DqnAgent). However, we are receiving the following error when trying to convert the saved model (policy): @ebrevdo Any suggestions? Thanks! |
For "only convert a single ConcreteFunction" this is cause it's trying to use the new MLIR converter. I suggest filing a repro separately with the TensorFlow Issues so they can see this feature is required. @aselle @jdduke fyi. Separately; for now you should be able to use the "old-style" converter (it should work fine). Try passing |
For training on device, you cannot do this with TFLite. You must either use the standard TF runtime, or try the (less well supported) path of using the new aot_compile_cpu support. |
(for aot_compile_cpu; you will need the most recent tf2.2 RC; it's not in TF2.1). |
Thanks for the fast response! --enable_v1_converter works "better", but leads to a different error: (We do not require training on the device.) |
We can add a TODO to be able to create SavedModels out of the Agent.train()
method; but my comments above still apply...
|
The tflite_convert CLI help doesn't seem to show it, but you can pass a "--saved_model_signature_key" flag (https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/python/tflite_convert.py#L385); you probably want to point it to "action". If you have an RNN in the model, you'll also want to create a separate TFLite model for "get_initial_state", which you would use to initialize the RNN at the beginning of an episode/sequence and pass as the initial state to "action".
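A sketch of that two-model setup, assuming the policy was exported with PolicySaver to a placeholder directory and that the signatures are named 'action' and 'get_initial_state' as described above:

```python
import tensorflow as tf

for key in ('action', 'get_initial_state'):
    converter = tf.lite.TFLiteConverter.from_saved_model(
        'exported_policy', signature_keys=[key])
    with open('policy_%s.tflite' % key, 'wb') as f:
        f.write(converter.convert())
```

On device you would run the get_initial_state model once at the start of an episode and feed its output as the state input of the action model.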
|
Great. Thanks.
(Still need to investigate if this tflite model runs as expected on the Android device. I will try to report back.) Thanks. |
@dvdhfnr how are things with implementing your TF-Agents-trained NN on Android? I have this error: "RuntimeError: Encountered unresolved custom op: BroadcastArgs. Node number 0 (BroadcastArgs) failed to prepare." Here is the case: https://stackoverflow.com/questions/61715154/tflite-model-load-error-runtimeerror-encountered-unresolved-custom-op-broadca |
When converting with the flag "--allow_custom_ops", you need to implement the ops that are not supported by TFLite yourself: see e.g. https://www.tensorflow.org/lite/guide/ops_custom. Try to convert without "--allow_custom_ops"; then you will see a list of ops that are not supported. Unfortunately, it seems that we will have to implement those ourselves. |
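A sketch of that workflow with the Python converter (paths are placeholders): convert once without custom ops to surface the unsupported ops in the error, and only then opt in to custom ops that you intend to implement and register on the Android side:

```python
import tensorflow as tf

converter = tf.lite.TFLiteConverter.from_saved_model('exported_policy')
try:
    tflite_model = converter.convert()
except Exception as e:
    # The conversion error lists the TF ops with no TFLite builtin equivalent.
    print('Unsupported ops reported by the converter:', e)
    # Defer those ops to custom implementations that must be registered
    # with the TFLite interpreter on device.
    converter.allow_custom_ops = True
    tflite_model = converter.convert()
```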
@dvdhfnr you are right, the problem is these ops:
|
Currently, I am using the following pipeline:
Since I am actually not interested in saving the policy to a file, I tried to exchange the 2nd and 3rd line with
I noticed that this changes the order of the input tensors. Do I need to take care of other side-effects or is this method safe to use? Moreover, do I need to use the PolicySaver at all or can I just directly create a concrete function ('action') and convert from this? Thanks for your comments! |
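For the concrete-function question, one route (a sketch only; it still round-trips through a SavedModel on disk, and whether skipping PolicySaver entirely is safe is exactly the open question here) is to grab the exported 'action' signature and convert from that:

```python
import tensorflow as tf

# Load the policy previously exported by PolicySaver and take its 'action'
# signature as a ConcreteFunction, then convert from that function directly.
loaded = tf.saved_model.load('exported_policy')
action_fn = loaded.signatures['action']
converter = tf.lite.TFLiteConverter.from_concrete_functions([action_fn])
tflite_model = converter.convert()
```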
There is now a unit test in policy_saver_test.py showing how to use PolicySaver with the TFLite converter. Does it help? |
Hi @ebrevdo, regarding tf_agents/policies/policy_saver_test.py, lines 358 to 359 (at commit 3448c9e):
I guess this "native support for RNG ops, atan, etc." relates to unsupported BroadcastArgs and BroadcastTo ops. Could you please provide more details what is the root cause of the problem (e.g. where are those broadcast coming from)? Maybe it's possible to change something in tf_agents code? Or maybe we can somehow contribute to improve something on TFLite side? Thanks in advance, Regards, |
This has nothing to do with TF-Agents - it depends on the TFLite team. @jdduke FYI. Is there a relevant issue open on TF's side? |
I'm not sure where the BroadcastArgs are coming from; possibly from TF Probability? Here's where we use broadcast_to, but I don't think these are the real places it's coming from. Probably from a library we're using, as I mentioned. |
@thaink is actively working to support this. I'm not sure if there's a corresponding TF issue, but we do have an internal issue tracking this. |
@ebrevdo I think the BroadcastArgs may come from using broadcast_to on a dynamic tensor. |
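As a small illustration of where these ops typically originate (illustrative only, assuming broadcasting against shapes that are only known at run time, not a trace of the actual TF-Agents graph): tf.broadcast_dynamic_shape lowers to BroadcastArgs and tf.broadcast_to to BroadcastTo.

```python
import tensorflow as tf

@tf.function(input_signature=[tf.TensorSpec([None, 4], tf.float32)])
def broadcast_example(x):
    loc = tf.constant([0.0, 0.0, 0.0, 0.0])
    # The batch dimension of `x` is only known at run time, so computing the
    # common broadcast shape emits a BroadcastArgs op in the graph...
    shape = tf.broadcast_dynamic_shape(tf.shape(loc), tf.shape(x))
    # ...and materializing `loc` at that shape emits BroadcastTo.
    return tf.broadcast_to(loc, shape) + x
```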
Thanks guys, please leave a comment here when BroadcastArgs becomes available. |
@thaink any ETA for this BroadcastArgs issue? :) |
Unfortunately, it is still under review. |
@soldierofhell BroadcastArgs has been added to the master branch. |
I can convert the model now. Thanks for @thaink's work. |
We want to implement RL on an Android device. Just wondering if it is possible to run tf-agents on Android or to convert tf-agents to tf-lite. It would be great if someone could share some experience. Thank you!