Add Pose Estimation Support for Jetsons #62
First, I started with the NVIDIA-AI-IOT/trt_pose project. There are several pretrained models in their repo, and I could easily write a Dockerfile to run it by following their instructions.
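A minimal Dockerfile along these lines should work; note that the base image tag, package list, and install steps below are my assumptions rather than the exact trt_pose instructions, so adjust them to your JetPack release:

```dockerfile
# Sketch of a Dockerfile for running trt_pose on a Jetson (JetPack 4.x).
# Base image tag is an assumption; match it to your JetPack/L4T version.
FROM nvcr.io/nvidia/l4t-base:r32.3.1

RUN apt-get update && apt-get install -y --no-install-recommends \
        python3-pip python3-dev git cmake && \
    rm -rf /var/lib/apt/lists/*

# NOTE: PyTorch on Jetson must come from NVIDIA's prebuilt aarch64 wheel,
# not PyPI; the wheel URL is JetPack-specific and omitted here.

# torch2trt is a trt_pose dependency.
RUN git clone https://github.com/NVIDIA-AI-IOT/torch2trt /opt/torch2trt && \
    cd /opt/torch2trt && python3 setup.py install

RUN git clone https://github.com/NVIDIA-AI-IOT/trt_pose /opt/trt_pose && \
    cd /opt/trt_pose && python3 setup.py install
```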
I continued this task with the OpenPifPaf project, as I found it has the best results for our task. I decided to export the OpenPifPaf model to ONNX format and then generate a TensorRT engine from that ONNX model.
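OpenPifPaf ships an ONNX export helper; a sketch of the export step is below. The module name, checkpoint name, and flags vary between openpifpaf versions, so treat these as assumptions and check the `--help` output for your install:

```shell
# Sketch: export an OpenPifPaf checkpoint to ONNX.
# Checkpoint name and --outfile flag are assumptions; verify with
#   python3 -m openpifpaf.export_onnx --help
pip3 install openpifpaf onnx
python3 -m openpifpaf.export_onnx --checkpoint resnet50 \
    --outfile openpifpaf_resnet50.onnx
```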
I used the onnx-tensorrt repo to convert the ONNX model. Since I had JetPack 4.3 (TensorRT 6.0.1) installed on my Jetson devices, I used the 6.0-full-dims tag.
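Fetching and building that tag looks roughly like this; the cmake flag and TensorRT path are assumptions, so cross-check them against the repo's README for your install:

```shell
# Sketch: build onnx-tensorrt at the 6.0-full-dims tag on a Jetson.
# -DTENSORRT_ROOT is an assumption; point it at your TensorRT sources.
git clone --recursive --branch 6.0-full-dims https://github.com/onnx/onnx-tensorrt.git
cd onnx-tensorrt
mkdir build && cd build
cmake .. -DTENSORRT_ROOT=/usr/src/tensorrt
make -j"$(nproc)" && sudo make install
```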
ERROR: make[2]: *** No rule to make target '/usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so', needed by 'libnvonnxparser.so.6.0.1'. Stop.

Solve with:
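A common cause of this error is that only the versioned plugin library is installed, while the build expects the unversioned dev symlink. A fix along these lines should work, assuming the `.so.6` suffix matches your TensorRT 6 install (verify with `ls` first):

```shell
# Check which plugin library files are actually present:
ls /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so*
# Assumed fix: create the unversioned symlink the build is looking for.
sudo ln -s /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so.6 \
           /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so
```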
And finally, TensorRT engine generation was performed using the command below:
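The exact command was not captured above; a sketch of what engine generation with onnx-tensorrt's `onnx2trt` tool typically looks like is below. The filename and flags are my assumptions, so verify them against `onnx2trt -h`:

```shell
# Sketch: build a serialized TensorRT engine from the exported ONNX model.
# Assumed flags: -o names the output engine, -b sets the max batch size,
# -d 16 requests 16-bit (fp16) precision. Verify with `onnx2trt -h`.
onnx2trt openpifpaf_resnet50.onnx -o openpifpaf_resnet50.trt -b 1 -d 16
```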
Tips:
I tried to run existing pose estimation algorithms on Jetson devices, similar to what @alpha-carinae29 has done for x86, GPU, and Coral. I'm going to write up my results and issues here.