Error #15: Initializing libiomp5md.dll, but found libomp140.x86_64.dll already initialized. #967
Comments
Could someone post a currently working setup for faster-whisper on a Windows PC?
Hello, here is the setup that works for my PC (Windows 10): I use a virtual environment with the following packages. In my experience, the 'OMP: Error #15: Initializing libiomp5md.dll, but found libomp140.x86_64.dll already initialized' error and similar ones are usually caused by packages that try to load the same or very similar .dll files (Windows appears to handle this worse than macOS). Unfortunately, the numpy package can trigger this, so it is a very frequent issue. Some people reported that deleting and reinstalling numpy, or updating it, solved the issue for them. I mostly avoid it by creating virtual environments in Python and installing as few packages as possible. The packages above are the ones I use to run a small GUI for faster-whisper, so if you only want to run faster-whisper on the command line, even fewer packages should work. I hope this helps.
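As a sketch of the isolated-environment approach described above, a fresh virtual environment can also be created programmatically with the standard `venv` module (the directory name `fw-env` here is arbitrary, not from the thread):

```python
import venv
from pathlib import Path

# Create an isolated environment so that only explicitly installed
# packages (and their dependencies) can ship an OpenMP runtime.
env_dir = Path("fw-env")
venv.create(env_dir, with_pip=False)  # pass with_pip=True to also bootstrap pip

# Afterwards, activate it and install only what you need, e.g.:
#   fw-env\Scripts\activate        (Windows)
#   pip install faster-whisper
```

The fewer packages in the environment, the fewer chances for two of them to bundle conflicting copies of the OpenMP runtime.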
I got this problem after upgrading faster-whisper. Setting this environment variable worked for me:
import os
os.environ["KMP_DUPLICATE_LIB_OK"] = "TRUE"
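For this workaround to take effect, the variable has to be set before any library that loads an OpenMP runtime is imported. A minimal sketch (the faster_whisper import is commented out and stands in for whatever import triggers the error in your program):

```python
import os

# Must come before importing numpy/torch/faster_whisper, because the
# duplicate-runtime check fires when the second OpenMP DLL is loaded.
os.environ["KMP_DUPLICATE_LIB_OK"] = "TRUE"

# from faster_whisper import WhisperModel  # imported only after the workaround
```

As the error message itself says, this is an unsafe, unsupported workaround that can crash or silently produce incorrect results; treat it as a stopgap, not a fix.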
I'm hitting this as well on an Intel Core i7 Mac running macOS 13.7.2 (22H313). Setting the environment variable, as @luguoyixiazi and the halting message suggest, seems to work around the problem, with whatever slowdowns and silent issues the notification warns us of. But I don't have my application actually outputting a transcript quite yet, so I can't be sure. I'll update this comment either way, if I get a successful transcript or experience any further issues down the line.
I had faster-whisper working on a MacBook M3, but when I tried to run the same code on a Windows laptop, there were problems.
OMP: Error #15: Initializing libiomp5md.dll, but found libomp140.x86_64.dll already initialized.
OMP: Hint This means that multiple copies of the OpenMP runtime have been linked into the program. That is dangerous, since it can degrade performance or cause incorrect results. The best thing to do is to ensure that only a single OpenMP runtime is linked into the process, e.g. by avoiding static linking of the OpenMP runtime in any library. As an unsafe, unsupported, undocumented workaround you can set the environment variable KMP_DUPLICATE_LIB_OK=TRUE to allow the program to continue to execute, but that may cause crashes or silently produce incorrect results. For more information, please see http://www.intel.com/software/products/support/.
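To find out which installed packages actually bundle an OpenMP runtime (and are therefore candidates for the conflict), the site-packages directory can be scanned for the usual DLL names. This is only a diagnostic sketch; the filename patterns are the common Intel/LLVM/MSVC runtime names, of which two appear in the error above:

```python
import site
from pathlib import Path

def find_openmp_dlls():
    """Scan site-packages for bundled OpenMP runtime DLLs (Windows names)."""
    patterns = ("libiomp5md.dll", "libomp*.dll", "vcomp*.dll")
    hits = []
    for sp in site.getsitepackages():
        root = Path(sp)
        for pattern in patterns:
            hits.extend(root.rglob(pattern))
    return sorted(set(hits))

for dll in find_openmp_dlls():
    print(dll)
```

If two different packages each ship their own copy, that pair is the likely source of Error #15; on Linux/macOS the equivalent files end in .so or .dylib.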
After setting os.environ['KMP_DUPLICATE_LIB_OK'] = 'TRUE', a new error appeared:
[2024-08-18 20:34:06.149] [ctranslate2] [thread 7412] [warning] The compute type inferred from the saved model is float16, but the target device or backend do not support efficient float16 computation. The model weights have been automatically converted to use the float32 compute type instead.
Traceback (most recent call last):
File "C:\Users\USERNAME\PycharmProjects\voice3\main.py", line 69, in
for segment in segments:
File "C:\Users\USERNAME\PycharmProjects\voice3.venv\Lib\site-packages\faster_whisper\transcribe.py", line 510, in generate_segments
encoder_output = self.encode(segment)
^^^^^^^^^^^^^^^^^^^^
File "C:\Users\USERNAME\PycharmProjects\voice3.venv\Lib\site-packages\faster_whisper\transcribe.py", line 769, in encode
return self.model.encode(features, to_cpu=to_cpu)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: parallel_for failed: cudaErrorNoKernelImageForDevice: no kernel image is available for execution on the device
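The `cudaErrorNoKernelImageForDevice` error typically means the prebuilt CUDA kernels in the installed wheel do not cover this GPU's compute capability; the Quadro M2000M listed below is a Maxwell card (compute capability 5.0), which recent ctranslate2/PyTorch CUDA builds may no longer target. A hedged fallback sketch, assuming a 6.0 threshold (an illustration, not a documented cutoff; check the release notes of your ctranslate2 build):

```python
# Pick a device for faster-whisper, falling back to CPU when the GPU's
# compute capability is likely too old for the prebuilt CUDA kernels.
MIN_CAPABILITY = (6, 0)  # assumed threshold; verify for your ctranslate2 version

def pick_device():
    try:
        import torch
        if torch.cuda.is_available():
            if torch.cuda.get_device_capability(0) >= MIN_CAPABILITY:
                return "cuda", "float16"
    except ImportError:
        pass
    # Older GPUs (e.g. Quadro M2000M, capability 5.0) and CPU-only
    # machines both end up here.
    return "cpu", "int8"

device, compute_type = pick_device()
# model = WhisperModel("small", device=device, compute_type=compute_type)
print(device, compute_type)
```

Running on CPU with compute_type="int8" is slower but avoids both the missing-kernel error and the float16 warning shown above.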
Current CUDA setup:
version: 2.4.0+cu121
available: True
zeros: tensor([0.], device='cuda:0')
count: 1
name: Quadro M2000M
Installed packages (pip list):
aiohappyeyeballs 2.3.7
aiohttp 3.10.4
aiosignal 1.3.1
alembic 1.13.2
antlr4-python3-runtime 4.9.3
asteroid-filterbanks 0.4.0
attrs 24.2.0
audioread 3.0.1
av 11.0.0
certifi 2024.7.4
cffi 1.17.0
charset-normalizer 3.3.2
click 8.1.7
colorama 0.4.6
coloredlogs 15.0.1
colorlog 6.8.2
contourpy 1.2.1
ctranslate2 4.3.1
cuda-python 12.6.0
cycler 0.12.1
decorator 5.1.1
docopt 0.6.2
einops 0.8.0
faster-whisper 1.0.0
filelock 3.13.1
flatbuffers 24.3.25
fonttools 4.53.1
frozenlist 1.4.1
fsspec 2024.2.0
greenlet 3.0.3
huggingface-hub 0.24.5
humanfriendly 10.0
HyperPyYAML 1.2.2
idna 3.7
inquirerpy 0.3.4
Jinja2 3.1.3
joblib 1.4.2
julius 0.2.7
kiwisolver 1.4.5
lazy_loader 0.4
librosa 0.10.2.post1
lightning 2.4.0
lightning-utilities 0.11.6
llvmlite 0.43.0
Mako 1.3.5
markdown-it-py 3.0.0
MarkupSafe 2.1.5
matplotlib 3.9.2
mdurl 0.1.2
mpmath 1.3.0
msgpack 1.0.8
multidict 6.0.5
networkx 3.2.1
nltk 3.8.1
numba 0.60.0
numpy 1.26.0
omegaconf 2.3.0
onnxruntime 1.19.0
optuna 3.6.1
packaging 24.1
pandas 2.2.2
pfzy 0.3.4
pillow 10.2.0
pip 23.2.1
platformdirs 4.2.2
pooch 1.8.2
primePy 1.3
prompt_toolkit 3.0.47
protobuf 5.27.3
pyannote.audio 3.1.1
pyannote.core 5.0.0
pyannote.database 5.1.0
pyannote.metrics 3.2.1
pyannote.pipeline 3.0.1
pycparser 2.22
Pygments 2.18.0
pyparsing 3.1.2
pyreadline3 3.4.1
python-dateutil 2.9.0.post0
pytorch-lightning 2.4.0
pytorch-metric-learning 2.6.0
pytz 2024.1
pywin32 306
PyYAML 6.0.2
regex 2024.7.24
requests 2.32.3
rich 13.7.1
ruamel.yaml 0.18.6
ruamel.yaml.clib 0.2.8
safetensors 0.4.4
scikit-learn 1.5.1
scipy 1.14.0
semver 3.0.2
sentencepiece 0.2.0
setuptools 72.2.0
shellingham 1.5.4
six 1.16.0
sortedcontainers 2.4.0
soundfile 0.12.1
soxr 0.4.0
speechbrain 1.0.0
SQLAlchemy 2.0.32
sympy 1.12
tabulate 0.9.0
tensorboardX 2.6.2.2
threadpoolctl 3.5.0
tokenizers 0.15.2
torch 2.4.0+cu121
torch-audiomentations 0.11.1
torch-pitch-shift 1.2.4
torchaudio 2.4.0+cu121
torchmetrics 1.4.1
torchvision 0.19.0+cu121
tqdm 4.66.5
transformers 4.39.3
typer 0.12.4
typing_extensions 4.9.0
tzdata 2024.1
urllib3 2.2.2
wcwidth 0.2.13
yarl 1.9.4