
Selecting GPU1 alone resulted in server crash!!! #1243

Open
endluo opened this issue Feb 11, 2025 · 2 comments

endluo commented Feb 11, 2025

I loaded both faster-whisper and the gender model on GPU0 (device_index=[0], cuda:0) simultaneously, and the server worked properly. However, when I loaded both faster-whisper and the gender model on GPU1 (device_index=[1], cuda:1) simultaneously, the server crashed and restarted.

[Image attachment]

My faster-whisper version is 1.1.1.

I have ruled out a problem with GPU1 itself; even when running everything with official PyTorch code, there were no errors.
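For context, the two device selections described above use different conventions: faster-whisper's `WhisperModel` takes `device="cuda"` plus a `device_index` list, while a plain PyTorch model takes a `"cuda:N"` device string. A minimal sketch of mapping one GPU id to both forms (the helper name `device_args` and the commented model names are illustrative assumptions, not from this report):

```python
def device_args(gpu_index):
    """Return (faster-whisper kwargs, PyTorch device string) for one GPU.

    faster-whisper selects GPUs via device="cuda" + device_index=[...],
    while torch modules/tensors use a "cuda:N" string.
    """
    fw_kwargs = {"device": "cuda", "device_index": [gpu_index]}
    torch_device = f"cuda:{gpu_index}"
    return fw_kwargs, torch_device

# Usage (requires faster-whisper and torch installed; model name assumed):
# from faster_whisper import WhisperModel
# fw_kwargs, torch_device = device_args(1)   # GPU1, the reporter's crash case
# whisper_model = WhisperModel("large-v3", **fw_kwargs)
# gender_model = gender_model.to(torch_device)
```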


endluo commented Feb 11, 2025

The two GPUs are the same model, RTX 4090. CUDA is 12.3.


endluo commented Feb 14, 2025

I found the reason: it is the turbo model. When I choose another model, GPU1 can load both the whisper model and the PyTorch model.
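The workaround described here can be sketched as falling back to a non-turbo checkpoint on the GPU that crashes. This is only an illustration of the reported finding; the helper `pick_model` and the exact checkpoint names (`"large-v3-turbo"`, `"large-v3"`) are assumptions, since the report does not name the models used:

```python
def pick_model(gpu_index, turbo_ok_gpus=(0,)):
    """Use the turbo checkpoint only on GPUs where it is known to load.

    Per this report, the turbo model loaded fine on GPU0 but crashed the
    server on GPU1, while non-turbo models worked on both GPUs.
    """
    return "large-v3-turbo" if gpu_index in turbo_ok_gpus else "large-v3"

# Usage (requires faster-whisper installed):
# from faster_whisper import WhisperModel
# model = WhisperModel(pick_model(1), device="cuda", device_index=[1])
```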
