Kindly add OpenVINO backend support #32
Comments
You can use OpenVINO with whisper.cpp (though I personally found using CLBlast instead was a little faster on my weak Celeron N5095). Then you can use this for a Wyoming endpoint: https://github.com/ser/wyoming-whisper-api-client
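A rough sketch of the two build paths mentioned above. The CMake flag names are taken from whisper.cpp's build documentation and may differ between versions; the OpenVINO build additionally assumes an OpenVINO toolkit installation with its `setupvars.sh` sourced.

```shell
# Fetch whisper.cpp and build it with the OpenVINO backend.
git clone https://github.com/ggerganov/whisper.cpp
cd whisper.cpp
cmake -B build -DWHISPER_OPENVINO=1
cmake --build build -j --config Release

# Alternative mentioned above: the CLBlast (OpenCL) backend,
# which the commenter found slightly faster on a Celeron N5095.
cmake -B build-clblast -DWHISPER_CLBLAST=ON
cmake --build build-clblast -j --config Release
```

For the OpenVINO path, the encoder also needs to be converted once (whisper.cpp ships a `models/convert-whisper-to-openvino.py` script for this) before the runtime will use the accelerated encoder.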
Thank you very much, I'll give it a try soon!
faster-whisper uses CTranslate2, which doesn't have OpenVINO support.
For future reference, and anyone stumbling into here trying to get whisper to use their Intel GPU, I have created a working demo Dockerfile and Compose here: |
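For readers without access to the linked files, the usual way to expose an Intel iGPU to a container is `/dev/dri` passthrough. This is a hypothetical sketch, not the commenter's actual setup: the image name, port, and volume paths are placeholders.

```shell
# Sketch: run a whisper container with the host's Intel GPU exposed.
# --device passes the DRM render nodes through; --group-add grants the
# container the host's "render" group so it can open the device.
docker run -d \
  --device /dev/dri:/dev/dri \
  --group-add "$(stat -c '%g' /dev/dri/renderD128)" \
  -v ./models:/models \
  -p 10300:10300 \
  my-whisper-openvino:latest
```

The same two lines (`devices:` and `group_add:`) carry over directly to a Compose service definition.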
This is perfect timing, as I just started looking at this yesterday. @monoamin, have you run any comparison tests at all? Interestingly, the faster-whisper page pegs it as slightly faster with no acceleration than whisper.cpp with OpenVINO. I'd be very curious whether you see similar results.
@MaximumFish I have not extensively tested this, but I might be able to give you some quick numbers. This is with whisper.cpp, first CPU-only, then GPU, with some random speech sample:
I currently don't have a faster-whisper container running but I'll see if I can set one up today to compare results. |
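For anyone wanting to reproduce this kind of quick comparison, a minimal timing harness might look like the following. Binary paths, model files, the sample file, and the thread count are all assumptions, and the faster-whisper snippet assumes the `faster-whisper` package is installed.

```shell
# Time whisper.cpp on a sample file (CPU build vs. OpenVINO build;
# when built with -DWHISPER_OPENVINO=1 and the converted encoder sits
# next to the ggml model, the OpenVINO encoder is used automatically).
time ./build/bin/main -m models/ggml-base.en.bin -f samples/jfk.wav -t 4

# Time faster-whisper on the same file for a rough comparison.
time python3 -c "
from faster_whisper import WhisperModel
model = WhisperModel('base.en', device='cpu', compute_type='int8')
segments, _ = model.transcribe('samples/jfk.wav')
list(segments)  # transcription is lazy; force it to run
"
```

Note that `time` over a single run includes model-load time, so averaging several runs (or timing only the transcribe step) gives fairer numbers.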
Thanks for testing it! Definitely interested to see the results vs faster-whisper. |
Will it be possible to add OpenVINO support for Intel-based processors? The repo by @zhuzilin here shows a speed improvement of nearly 50%, so users will be able to use larger models without sacrificing current performance. Thank you!