Hardware environment
CPU: Intel i5-1240P
GPU: Intel Iris Xe
RAM: 16GB DDR4
OS: Windows 11

Reproduction steps
I read this doc and tried to run ollama on my machine, and also set this. I also tried deepseek-r1:1.5b and llama3.2:1b, and then the error showed up (attached as debug.txt).

OS
Windows

GPU
Intel (screenshot attached)

CPU
Intel

Ollama version
ollama version is 0.5.1-ipexllm-20250123 (Warning: client version is 0.5.7)

When I set the value described in the [doc](https://github.com/intel/ipex-llm/blob/main/docs/mddocs/Quickstart/ollama_quickstart.md#8-save-gpu-memory-by-specify-ollama_num_parallel1), set OLLAMA_NUM_PARALLEL=1, the model does run, but as soon as I try to say anything to it, it breaks down (attached as try_to_say_to_model_debug.txt).
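For reference, a sketch of the launch sequence I followed per the quickstart (the variable names come from that doc; the env name `llm-cpp` is just whatever env ipex-llm was installed into):

```cmd
:: Activate the env where ipex-llm[cpp] is installed (name assumed)
conda activate llm-cpp

:: Section 8 of the quickstart: cap parallel requests to save GPU memory
set OLLAMA_NUM_PARALLEL=1

:: Intel GPU settings recommended by the quickstart
set OLLAMA_NUM_GPU=999
set SYCL_CACHE_PERSISTENT=1
set ZES_ENABLE_SYSMAN=1
set no_proxy=localhost,127.0.0.1

:: Start the server; pull and chat with the model from a second terminal
ollama serve
```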
Hi @1009058470, I am not able to reproduce your issue. We have released a new version of ipex-llm ollama; you may install it in a new conda env via `pip install --pre --upgrade ipex-llm[cpp]` to see if it works.
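A minimal sketch of that upgrade path, assuming a fresh conda env named `llm-cpp` (the name is arbitrary) and the Windows init step from the quickstart:

```cmd
:: Fresh env for the new ipex-llm ollama build
conda create -n llm-cpp python=3.11
conda activate llm-cpp
pip install --pre --upgrade ipex-llm[cpp]

:: Re-create the ollama launcher in an empty directory
:: (the quickstart runs this from an administrator Miniforge Prompt on Windows)
init-ollama.bat
```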
Hmm, I have run that, but it seems to run only on the CPU, not the GPU.
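One way to check where the model is actually running (a sketch; the variables are the quickstart's Intel GPU settings, not verified on this exact machine):

```cmd
:: Ask ollama to offload all layers to the iGPU before starting the server
set OLLAMA_NUM_GPU=999
set SYCL_CACHE_PERSISTENT=1
ollama serve

:: In a second terminal, after a model has been loaded:
:: the PROCESSOR column of `ollama ps` shows the GPU/CPU split
ollama ps
```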