
RuntimeError: Unable to run model on ipex-llm[cpp] using intel 1240p #12751

1009058470 opened this issue Jan 25, 2025 · 5 comments
1009058470 commented Jan 25, 2025

Hardware environment
CPU: Intel i5-1240P
GPU: Intel Iris Xe
Memory: 16 GB DDR4
OS: Windows 11
Steps to reproduce
I followed this doc to try to run ollama on my machine, and also set the following:

setx OLLAMA_DEBUG 1
ollama serve 2> debug.log
ollama run deepseek-r1:7b

I also tried deepseek-r1:1.5b and llama3.2:1b.

Then the error appeared; the debug log is attached:
debug.txt
OS
Windows

GPU
Intel
(screenshot attached)

CPU
Intel

Ollama version
ollama version is 0.5.1-ipexllm-20250123 Warning: client version is 0.5.7

But when I set the value described in the [doc](https://github.com/intel/ipex-llm/blob/main/docs/mddocs/Quickstart/ollama_quickstart.md#8-save-gpu-memory-by-specify-ollama_num_parallel1), i.e. set OLLAMA_NUM_PARALLEL=1, the model does run, but as soon as I try to talk to it, it crashes. The log is attached:

try_to_say_to_model_debug.txt
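
For reference, a minimal sketch of the workaround described above, assuming a Windows cmd session (the variable value and model tag are taken from this thread):

:: console 1: set the variable and start the server in the same console,
:: since set only affects the current cmd session
set OLLAMA_NUM_PARALLEL=1
ollama serve

:: console 2: load the model and chat with it
ollama run deepseek-r1:7b

Note that setx (as used earlier) persists the variable but only takes effect in consoles opened afterwards, while set applies immediately to the current console.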

sgwhat self-assigned this Feb 5, 2025
sgwhat (Contributor) commented Feb 5, 2025

Hi @1009058470, how did you pull deepseek-r1:7b and llama3.2:1b? We recommend that you download the model by using the ollama pull command.
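
For example, the model tags used earlier in this thread would be fetched with:

ollama pull deepseek-r1:7b
ollama pull llama3.2:1b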

1009058470 (Author) replied:

> Hi @1009058470, how did you pull deepseek-r1:7b and llama3.2:1b? We recommend that you download the model by using the ollama pull command.

ollama run deepseek-r1:7b
and then talked with it in cmd.

sgwhat (Contributor) commented Feb 8, 2025

Hi @1009058470, I am not able to reproduce your issue. We have released a new version of ipex-llm ollama; you may install it in a new conda env via pip install --pre --upgrade ipex-llm[cpp] to see if it works.
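
A minimal sketch of that install flow, assuming a Windows cmd session; the env name and folder are only examples, and init-ollama.bat is the symlink script described in the ipex-llm ollama quickstart:

:: create and activate a fresh conda env (name is only an example)
conda create -n llm-cpp python=3.11
conda activate llm-cpp

:: install the latest ipex-llm[cpp] release
pip install --pre --upgrade ipex-llm[cpp]

:: initialize ollama in an empty folder, per the ipex-llm ollama quickstart
mkdir ollama-bin
cd ollama-bin
init-ollama.bat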

1009058470 (Author) commented Feb 8, 2025

> Hi @1009058470, I am not able to reproduce your issue. We have released a new version of ipex-llm ollama; you may install it in a new conda env via pip install --pre --upgrade ipex-llm[cpp] to see if it works.

Hmm, I have run that, but it seems it does not run on the GPU, only on the CPU.

(screenshot attached)
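
If the GPU is not being picked up, one thing to check is the environment of the console that starts the server. A minimal sketch, assuming the variables listed in the ipex-llm ollama quickstart apply to this build:

:: run in the console that starts the server, before ollama serve
set OLLAMA_NUM_GPU=999
set no_proxy=localhost,127.0.0.1
set ZES_ENABLE_SYSMAN=1
set SYCL_CACHE_PERSISTENT=1
ollama serve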

sgwhat (Contributor) commented Feb 8, 2025

Could you please provide the detailed ollama server log?
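
A minimal sketch of how that log could be captured in Windows cmd, mirroring the debug steps from the first post (the file name is only an example):

:: enable verbose logging for this console, then capture stdout and stderr to a file
set OLLAMA_DEBUG=1
ollama serve > server.log 2>&1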
