ORT backend always returns tensor on CPU #73

Closed · aklife97 opened this issue Sep 28, 2021 · 7 comments · Fixed by #98
Description
The ORT backend always returns output tensors on CPU, even when the model instance is on GPU, when the model is invoked via BLS through the Python backend.

Expected behavior
The output tensor should be on the GPU when the instance kind is GPU for the ONNX model.
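
For reference, a minimal Python-backend sketch of the BLS path being described; the model and tensor names (`onnx_model`, `INPUT0`, `OUTPUT0`) are placeholders, not from this issue:

```python
import triton_python_backend_utils as pb_utils


class TritonPythonModel:
    def execute(self, requests):
        responses = []
        for request in requests:
            # Forward this model's input to the ONNX model via BLS.
            input_tensor = pb_utils.get_input_tensor_by_name(request, "INPUT0")

            infer_request = pb_utils.InferenceRequest(
                model_name="onnx_model",  # placeholder ONNX model name
                requested_output_names=["OUTPUT0"],
                inputs=[input_tensor],
            )
            infer_response = infer_request.exec()
            if infer_response.has_error():
                raise pb_utils.TritonModelException(
                    infer_response.error().message())

            output = pb_utils.get_output_tensor_by_name(
                infer_response, "OUTPUT0")
            # Even with the ONNX model on a KIND_GPU instance,
            # this currently reports True.
            print("output is_cpu:", output.is_cpu())

            responses.append(
                pb_utils.InferenceResponse(output_tensors=[output]))
        return responses
```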

CoderHam (Contributor) commented Sep 28, 2021

@Tabrizian @tanmayv25, can you look into this?

askhade (Contributor) commented Oct 4, 2021

This issue explains the current limitation and why the output is always on CPU: triton-inference-server/server#3364

askhade (Contributor) commented Nov 17, 2021

Once triton-inference-server/server#3364 is merged, we will enable output binding to GPUs in the ORT backend.
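
For context, the GPU placement discussed throughout this thread is driven by the ONNX model's instance group. An illustrative, minimal config.pbtxt (the model name is a placeholder, and a real config would also declare inputs/outputs or rely on auto-complete):

```
name: "onnx_model"
backend: "onnxruntime"
instance_group [
  {
    count: 1
    kind: KIND_GPU
  }
]
```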

Slyne commented Dec 15, 2021

Any update? @askhade @deadeyegoodwin

askhade (Contributor) commented Jan 25, 2022

@Slyne This will be available in the Triton 22.02 release.

askhade self-assigned this Jan 25, 2022
Slyne commented Jan 26, 2022

@askhade Thank you for the update!

vu0607 commented May 23, 2024

@askhade
I am serving an encoder-decoder model (TrOCR) on the Triton ONNX backend and have run into a problem: I first call the encoder model and get its output on the server. Because that output is on GPU, I have to transfer it to CPU to convert it to NumPy before calling the decoder model. This creates a bottleneck. I hope you can help with this issue. Thanks a lot.
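
One way to avoid the device-to-host copy, assuming the GPU output binding from Triton 22.02+ is in place, is to hand the encoder output to the decoder through DLPack instead of `as_numpy()`, so the tensor can stay on the device. A sketch under those assumptions; the model and tensor names are illustrative, and `pixel_values`/`input_ids` stand for `pb_utils.Tensor` objects built earlier in `execute()`:

```python
import triton_python_backend_utils as pb_utils

# Encoder BLS call ("trocr_encoder" and tensor names are illustrative).
enc_request = pb_utils.InferenceRequest(
    model_name="trocr_encoder",
    requested_output_names=["encoder_hidden_states"],
    inputs=[pixel_values],  # pb_utils.Tensor built earlier in execute()
)
enc_output = pb_utils.get_output_tensor_by_name(
    enc_request.exec(), "encoder_hidden_states")

# Re-wrap through DLPack instead of calling enc_output.as_numpy():
# no device-to-host copy if the tensor already lives on GPU.
dec_input = pb_utils.Tensor.from_dlpack(
    "encoder_hidden_states", enc_output.to_dlpack())

dec_request = pb_utils.InferenceRequest(
    model_name="trocr_decoder",
    requested_output_names=["logits"],
    inputs=[dec_input, input_ids],  # input_ids also a pb_utils.Tensor
)
dec_response = dec_request.exec()
```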
