Issues: triton-inference-server/onnxruntime_backend
Open issues:
#113: Improve autocomplete to make it more robust against partial model configuration (opened Apr 20, 2022 by tanmayv25)
#34: CPU inference is much slower than with ONNX Runtime directly [label: more-info-needed, waiting for more information] (opened Mar 19, 2021 by artmatsak)
#52: In Dockerfile gen script, CUDNN_VERSION should be obtained from the docker image (opened Jul 13, 2021 by GuanLuo)
#86: Model loading failure: densenet_onnx fails to load due to a "pthread_setaffinity_np" failure (opened Nov 30, 2021 by shrek)
#94: Not able to load simple iris model, getting error "Unsupported ONNX Type 'ONNX_TYPE_SEQUENCE'" (opened Jan 12, 2022 by KshitizLohia)
#107: Expose all string key/value configs instead of doing it piecemeal [label: enhancement, new feature or request] (opened Mar 17, 2022 by pranavsharma)