
Running TFLite Inference Offline #524

Closed
1 task done
volome opened this issue Jan 8, 2024 · 4 comments
Labels
question Further information is requested

Comments

@volome

volome commented Jan 8, 2024

Search before asking

Question

If I train a model and need to convert it to, say, TFLite and use it on my embedded hardware, which has no internet connection and runs only my own app, can I do that, and is there an extra cost?

Additional

This is not very clear in your video; you always refer to using your app.

volome added the question (Further information is requested) label on Jan 8, 2024

github-actions bot commented Jan 8, 2024

👋 Hello @volome, thank you for raising an issue about Ultralytics HUB 🚀! Please visit our HUB Docs to learn more:

  • Quickstart. Start training and deploying YOLO models with HUB in seconds.
  • Datasets: Preparing and Uploading. Learn how to prepare and upload your datasets to HUB in YOLO format.
  • Projects: Creating and Managing. Group your models into projects for improved organization.
  • Models: Training and Exporting. Train YOLOv5 and YOLOv8 models on your custom datasets and export them to various formats for deployment.
  • Integrations. Explore different integration options for your trained models, such as TensorFlow, ONNX, OpenVINO, CoreML, and PaddlePaddle.
  • Ultralytics HUB App. Learn about the Ultralytics App for iOS and Android, which allows you to run models directly on your mobile device.
    • iOS. Learn about YOLO CoreML models accelerated on Apple's Neural Engine on iPhones and iPads.
    • Android. Explore TFLite acceleration on mobile devices.
  • Inference API. Understand how to use the Inference API for running your trained models in the cloud to generate predictions.

If this is a 🐛 Bug Report, please provide screenshots and steps to reproduce your problem to help us get started working on a fix.

If this is a ❓ Question, please provide as much information as possible, including dataset, model, environment details etc. so that we might provide the most helpful response.

We try to respond to all issues as promptly as possible. Thank you for your patience!

@UltralyticsAssistant
Member

@volome hello!

Thank you for reaching out with your question. Yes, you can train a model and convert it to TFLite for use on your embedded hardware without an internet connection. There is no extra cost associated with converting your model to TFLite and deploying it on your hardware for offline use. The AGPL-3.0 license allows for such use cases, provided you comply with the terms of the license.

For detailed guidance on the conversion process, please refer to the Ultralytics HUB Docs. If you have any further questions or need assistance, feel free to ask. We're here to help!
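For reference, here is a minimal sketch of the fully offline workflow. The filenames (`best.pt`, `best_float32.tflite`) and the input shape are assumptions for illustration; yours will depend on your training run and export settings.

```python
# --- On your development machine (one-time export; no HUB connection needed) ---
from ultralytics import YOLO

model = YOLO("best.pt")        # your trained weights (assumed filename)
model.export(format="tflite")  # writes a .tflite file alongside the weights

# --- On the embedded device (no internet connection required) ---
import numpy as np
import tflite_runtime.interpreter as tflite  # lightweight runtime, no full TF install

interpreter = tflite.Interpreter(model_path="best_float32.tflite")
interpreter.allocate_tensors()

inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

# Dummy image tensor matching the model's expected shape and dtype,
# typically a normalized float32 array such as [1, 640, 640, 3].
image = np.zeros(inp["shape"], dtype=inp["dtype"])
interpreter.set_tensor(inp["index"], image)
interpreter.invoke()
preds = interpreter.get_tensor(out["index"])
print(preds.shape)  # raw detections; decode boxes / apply NMS in your app
```

The `tflite-runtime` package only needs to be installed once; after that, everything above runs entirely on-device with no network access.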

@volome
Author

volome commented Jan 8, 2024 via email

@UltralyticsAssistant
Member

You're very welcome, @volome! I'm glad I could assist you. 😊 If you have any more questions in the future or need further support, don't hesitate to reach out. Good luck with your project, and have a great time working with your model on your embedded hardware!

kalenmike changed the title from "Hi" to "Running TFLite Inference Offline" on Jan 11, 2024