How to get the detection result from this model #556
Comments
👋 Hello @MarcoHuixxx, thank you for raising an issue about Ultralytics HUB 🚀! Please visit our HUB Docs to learn more.
If this is a 🐛 Bug Report, please provide screenshots and steps to reproduce your problem to help us get started working on a fix. If this is a ❓ Question, please provide as much information as possible, including dataset, model, and environment details, so that we can provide the most helpful response. We try to respond to all issues as promptly as possible. Thank you for your patience!

Can anyone help me!!!

please
@MarcoHuixxx hello! It looks like you're trying to interpret the output from the model after running `model.runSync([data])`. To extract meaningful detection information from this output, you typically need to post-process the data. This involves applying confidence thresholds to filter out low-confidence detections, and then decoding the tensor to retrieve the bounding box coordinates, class IDs, and confidence scores for each detection. The exact post-processing steps can vary depending on the model's output format. Generally, you would:

1. Apply a confidence threshold to discard low-confidence predictions.
2. Decode the remaining tensor entries into bounding box coordinates, class IDs, and confidence scores.
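The two steps above can be sketched in pure Python. This is a hedged illustration, not the official Ultralytics decoder: it assumes a YOLOv8-style output of shape `[1, 4 + num_classes, num_anchors]`, where the first four rows are box center/size (`cx, cy, w, h`) and the remaining rows are per-class scores. Check your exported model's metadata before relying on this layout.

```python
def decode_detections(output, num_classes, conf_threshold=0.25):
    """Decode a YOLOv8-style tensor into (x1, y1, x2, y2, score, class_id) tuples.

    Assumes output shape [1, 4 + num_classes, num_anchors]: rows 0-3 are
    cx, cy, w, h and rows 4+ are class scores. This layout is an assumption;
    verify it against your model's export metadata.
    """
    preds = output[0]                 # drop the batch dimension
    num_anchors = len(preds[0])
    detections = []
    for i in range(num_anchors):
        # Take the best class score for this anchor
        class_scores = [preds[4 + c][i] for c in range(num_classes)]
        score = max(class_scores)
        if score < conf_threshold:    # step 1: confidence filtering
            continue
        class_id = class_scores.index(score)
        # Step 2: decode center/size into corner coordinates
        cx, cy, w, h = (preds[j][i] for j in range(4))
        detections.append((cx - w / 2, cy - h / 2,
                           cx + w / 2, cy + h / 2, score, class_id))
    return detections
```

For a real model you would follow this with non-maximum suppression to merge overlapping boxes for the same object.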
Please refer to the Ultralytics HUB Docs for guidance on post-processing the model's output. The documentation should provide you with the necessary steps and explanations on how to handle the output tensor to get the detection results you're looking for. If you're still having trouble, feel free to provide more details about the output you're getting, and I'll do my best to assist you further. 😊
@MarcoHuixxx the output format you're describing from the TensorFlow Lite model is indeed different from the Ultralytics model output. TensorFlow Lite models often provide separate arrays for boxes, classes, scores, and number of detections, which can be directly interpreted. In contrast, Ultralytics models typically output a single tensor that contains all the detection information. This single tensor needs to be post-processed to separate out the bounding box coordinates, class IDs, and confidence scores.

The differences in output format are due to the way each model architecture is designed and how the outputs are structured. Ultralytics models are optimized for performance and may use a different output encoding to maximize speed and efficiency.

Regarding the documentation for post-processing the model's output, I apologize for any confusion. The Ultralytics HUB Docs should contain a section on how to interpret and handle the model outputs, including post-processing steps. If you're unable to find the relevant information, it's possible that the documentation may need to be updated to include these details.

As I'm unable to provide direct links or attachments here, I recommend checking the Ultralytics HUB Docs again for any updates or additional information that may have been added. If the information is still missing, please consider opening an issue on the GitHub repository to request detailed documentation on post-processing the outputs of Ultralytics models. The team values user feedback and strives to improve the documentation to better assist users like you. 😊
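After decoding a single-tensor output, the usual final step is non-maximum suppression (NMS), which keeps only the highest-scoring box among heavily overlapping ones. Here is a minimal pure-Python sketch, assuming detections are `(x1, y1, x2, y2, score, class_id)` tuples; production pipelines typically use an optimized, per-class implementation instead.

```python
def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2, ...) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def nms(detections, iou_threshold=0.45):
    """Greedy, class-agnostic NMS: keep the best box among overlapping ones."""
    dets = sorted(detections, key=lambda d: d[4], reverse=True)
    kept = []
    for d in dets:
        # Keep this box only if it doesn't overlap a better box we kept
        if all(iou(d, k) < iou_threshold for k in kept):
            kept.append(d)
    return kept
```

Note this sketch is class-agnostic; many detectors instead run NMS separately per class so that overlapping objects of different classes both survive.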
👋 Hello there! We wanted to give you a friendly reminder that this issue has not had any recent activity and may be closed soon, but don't worry - you can always reopen it if needed. If you still have any questions or concerns, please feel free to let us know how we can help. For additional resources and information, please see the Ultralytics HUB Docs.

Feel free to inform us of any other issues you discover or feature requests that come to mind in the future. Pull Requests (PRs) are also always welcomed! Thank you for your contributions to YOLO 🚀 and Vision AI ⭐
Search before asking
Question
The first pic shows the model properties; there should be one float32 output. However, after executing `model.runSync([data])`, the output is what the second pic shows. I have no idea how to get the detection result from this model.
![Screenshot 2024-01-30 at 9:22:41 PM](https://private-user-images.githubusercontent.com/113924930/302037490-44072cdb-0758-4594-9aca-85544a95b13e.png?jwt=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJnaXRodWIuY29tIiwiYXVkIjoicmF3LmdpdGh1YnVzZXJjb250ZW50LmNvbSIsImtleSI6ImtleTUiLCJleHAiOjE3Mzk5ODA4MDEsIm5iZiI6MTczOTk4MDUwMSwicGF0aCI6Ii8xMTM5MjQ5MzAvMzAyMDM3NDkwLTQ0MDcyY2RiLTA3NTgtNDU5NC05YWNhLTg1NTQ0YTk1YjEzZS5wbmc_WC1BbXotQWxnb3JpdGhtPUFXUzQtSE1BQy1TSEEyNTYmWC1BbXotQ3JlZGVudGlhbD1BS0lBVkNPRFlMU0E1M1BRSzRaQSUyRjIwMjUwMjE5JTJGdXMtZWFzdC0xJTJGczMlMkZhd3M0X3JlcXVlc3QmWC1BbXotRGF0ZT0yMDI1MDIxOVQxNTU1MDFaJlgtQW16LUV4cGlyZXM9MzAwJlgtQW16LVNpZ25hdHVyZT1mMDM4M2U0MTRmYTQ5YTliYWQ4MTc2MzFkNDMzNjdmZmI1ODY1OGQ0ZGQ5NjFkZmI5OGVlMjg5Njk2MWM1MzFhJlgtQW16LVNpZ25lZEhlYWRlcnM9aG9zdCJ9.KZ45tjOy1_G6HNB0uohhkBT-RQ1ELNuHaYg_TP-Zdzg)
![Screenshot 2024-01-30 at 9:22:52 PM](https://private-user-images.githubusercontent.com/113924930/302037495-ba7b42f2-0673-4641-81c6-15931463efec.png?jwt=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJnaXRodWIuY29tIiwiYXVkIjoicmF3LmdpdGh1YnVzZXJjb250ZW50LmNvbSIsImtleSI6ImtleTUiLCJleHAiOjE3Mzk5ODA4MDEsIm5iZiI6MTczOTk4MDUwMSwicGF0aCI6Ii8xMTM5MjQ5MzAvMzAyMDM3NDk1LWJhN2I0MmYyLTA2NzMtNDY0MS04MWM2LTE1OTMxNDYzZWZlYy5wbmc_WC1BbXotQWxnb3JpdGhtPUFXUzQtSE1BQy1TSEEyNTYmWC1BbXotQ3JlZGVudGlhbD1BS0lBVkNPRFlMU0E1M1BRSzRaQSUyRjIwMjUwMjE5JTJGdXMtZWFzdC0xJTJGczMlMkZhd3M0X3JlcXVlc3QmWC1BbXotRGF0ZT0yMDI1MDIxOVQxNTU1MDFaJlgtQW16LUV4cGlyZXM9MzAwJlgtQW16LVNpZ25hdHVyZT1lMjA4MGMzYjQxMDdjYjY3MzMzMWRkYmRjNDY4OWQ3OGFjY2YxYjA0Y2E2ZmQ0NjM4NjM1M2ZkYjIwYzNiZjQzJlgtQW16LVNpZ25lZEhlYWRlcnM9aG9zdCJ9.ot8_BOrQlE-BDMuLQ7Y6kt_ymIhOgsInQlsKEptKJy0)
Additional
No response