Issue trying to install on WSL #1876
Comments
This is an out-of-memory problem; I'm facing the same. The build keeps allocating more and more memory and eventually crashes. For me, it crashes after allocating about 40 GB (!) of memory.
Try increasing your swap space.
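For context, on WSL2 the swap file is controlled from the Windows side via .wslconfig; a minimal sketch, assuming WSL2, with sizes that are purely illustrative:

# %UserProfile%\.wslconfig (created/edited on the Windows side)
[wsl2]
memory=16GB  # RAM exposed to the WSL2 VM
swap=32GB    # size of the WSL2 swap file

# then, from PowerShell, restart WSL so the settings take effect:
wsl --shutdown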
Not to be rude, but if compiling such a small piece of software eats up 40 GB of RAM and still wants more, the issue is obviously not with my settings.
I'm experiencing the same issue: the process consumes 188 GB of RAM, leading to a crash. The solution below resolved it for me.

Solution: compile with Ninja support.

1. Install Ninja:
pip install ninja

2. Reinstall llama-cpp-python:
pip install llama-cpp-python -v --prefer-binary
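For what it's worth, --prefer-binary likely does most of the work here: it tells pip to favor an already-built wheel over a newer source distribution, so when a wheel exists for your platform the memory-hungry compile step is skipped entirely. Installing ninja just gives the build backend a faster generator on the occasions a source build is still required.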
No matter what I try, I keep getting this issue on WSL:
pip install --upgrade --no-cache-dir --force-reinstall git+https://github.com/abetlen/llama-cpp-python
Defaulting to user installation because normal site-packages is not writeable
Collecting git+https://github.com/abetlen/llama-cpp-python
Cloning https://github.com/abetlen/llama-cpp-python to /tmp/pip-req-build-7dtawo9w
Running command git clone --filter=blob:none --quiet https://github.com/abetlen/llama-cpp-python /tmp/pip-req-build-7dtawo9w
Resolved https://github.com/abetlen/llama-cpp-python to commit 2bc1d97
Running command git submodule update --init --recursive -q
Installing build dependencies ... done
Getting requirements to build wheel ... done
Installing backend dependencies ... done
Preparing metadata (pyproject.toml) ... done
Collecting typing-extensions>=4.5.0 (from llama_cpp_python==0.3.5)
Downloading typing_extensions-4.12.2-py3-none-any.whl.metadata (3.0 kB)
Collecting numpy>=1.20.0 (from llama_cpp_python==0.3.5)
Downloading numpy-2.2.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (62 kB)
Collecting diskcache>=5.6.1 (from llama_cpp_python==0.3.5)
Downloading diskcache-5.6.3-py3-none-any.whl.metadata (20 kB)
Collecting jinja2>=2.11.3 (from llama_cpp_python==0.3.5)
Downloading jinja2-3.1.4-py3-none-any.whl.metadata (2.6 kB)
Collecting MarkupSafe>=2.0 (from jinja2>=2.11.3->llama_cpp_python==0.3.5)
Downloading MarkupSafe-3.0.2-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (4.0 kB)
Downloading diskcache-5.6.3-py3-none-any.whl (45 kB)
Downloading jinja2-3.1.4-py3-none-any.whl (133 kB)
Downloading numpy-2.2.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (16.4 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 16.4/16.4 MB 83.9 MB/s eta 0:00:00
Downloading typing_extensions-4.12.2-py3-none-any.whl (37 kB)
Downloading MarkupSafe-3.0.2-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (20 kB)
Building wheels for collected packages: llama_cpp_python
Building wheel for llama_cpp_python (pyproject.toml) ... Killed
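The trailing "Killed" means the Linux OOM killer terminated the compiler mid-build. Besides raising swap or using --prefer-binary as suggested above, a hedged workaround is to cap build parallelism, since peak memory grows with the number of compile jobs running at once; this sketch assumes the build goes through cmake --build, which honors CMAKE_BUILD_PARALLEL_LEVEL:

# confirm the OOM killer fired
dmesg | grep -i "killed process"

# retry with a single compile job (slower, but much lower peak RAM)
CMAKE_BUILD_PARALLEL_LEVEL=1 pip install --upgrade --no-cache-dir --force-reinstall git+https://github.com/abetlen/llama-cpp-python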