ARC A730M (PI_ERROR_BUILD_PROGRAM_FAILURE) #12765
This is a known issue; we will add a better error message for this case. You can use the oneAPI device selector (`ONEAPI_DEVICE_SELECTOR`) to expose only the A730M before you run ollama.
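The original comment's command example was lost in the page capture. As a minimal sketch, setting the SYCL device filter before launching ollama could look like this; the `level_zero:0` index is an assumption about device ordering, so check your actual order with `sycl-ls` first:

```shell
# Expose only one Level Zero device to the SYCL runtime.
# Assumption: the A730M is device 0 in your sycl-ls output.
export ONEAPI_DEVICE_SELECTOR="level_zero:0"
echo "ONEAPI_DEVICE_SELECTOR=$ONEAPI_DEVICE_SELECTOR"

# Then start the server in this same shell, e.g.:
# ./ollama serve
```

The variable must be set in the same shell session that launches ollama, otherwise the runtime will still enumerate every GPU.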
I already specified the GPU in the first run.
But your log shows you still have both the iGPU and the A730M; if the selector is set correctly, only the A730M should be visible.
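One way to check what the runtime actually sees is `sycl-ls`, which ships with oneAPI. A guarded sketch (assuming the oneAPI environment may or may not be sourced yet; the setvars.sh path is the conventional default install location):

```shell
# Capture the SYCL device list if sycl-ls is available, otherwise
# print a hint. With ONEAPI_DEVICE_SELECTOR set correctly, only the
# A730M should appear; without it, both the iGPU and dGPU are listed.
if command -v sycl-ls >/dev/null 2>&1; then
  sycl_out=$(sycl-ls)
else
  sycl_out="sycl-ls not found; run 'source /opt/intel/oneapi/setvars.sh' first"
fi
echo "$sycl_out"
```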
This was the second execution, which only loads the oneAPI environment. Please refer to the code block above for the first execution.
An incomprehensible error occurred
We haven't seen this error before; it may be caused by your environment. How about trying `level_zero:0`?
Same error with `level_zero:0`.
I just tried deepseek-r1:8b on an A770 and the model works fine. Could you uninstall your Intel driver and oneAPI, then follow https://github.com/intel/ipex-llm/blob/main/docs/mddocs/Quickstart/install_linux_gpu.md#install-gpu-driver to reinstall the environment?
But I am using Gentoo Linux, and on Windows I have to disable the integrated graphics to use it properly. If I use the oneAPI selector to specify the GPU, there is also an error. Currently I am running the model on Windows.
Disabling the Xe integrated graphics and keeping only the A730M dedicated GPU lets it run normally on Windows.
On Ubuntu, we can use the device selector.
Windows can also use this to select the GPU, but the error is the same as mine, so I gave up and just disabled the Xe integrated graphics on Windows.
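For completeness, the selector is a plain environment variable on both platforms; the Windows cmd line below is an assumption about syntax, not something verified in this thread:

```shell
# Linux/bash form:
export ONEAPI_DEVICE_SELECTOR="level_zero:0"

# Windows cmd equivalent (assumption; run before starting ollama):
#   set ONEAPI_DEVICE_SELECTOR=level_zero:0

echo "selector=$ONEAPI_DEVICE_SELECTOR"
```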
Follow the documentation "Run Ollama with IPEX-LLM on Intel GPU". Execute only one line.