Issues: intel/ipex-llm
When will support for the large multimodal model deepseek-ai/Janus-Pro-1B be available? [user issue]
#12773, opened Feb 6, 2025 by szzzh
Intel B580: unable to run Ollama serve on GPU after following the guide [user issue]
#12772, opened Feb 5, 2025 by Mushtaq-BGA
[Windows-MTL-NPU]: OSError: [WinError -529697949] Windows Error 0xe06d7363 [user issue]
#12762, opened Jan 29, 2025 by raj-ritu17
RuntimeError: unable to run model with ipex-llm[cpp] on an Intel 1240P [user issue]
#12751, opened Jan 25, 2025 by 1009058470
vpux-compiler error occurred when using qwen2.5-7B with a large context or prompt
#12736, opened Jan 22, 2025 by dockerg
Using tensor parallelism in the ipex-llm-serving-xpu Docker image results in a crash. [multi-arc, user issue]
#12733, opened Jan 22, 2025 by HumerousGorgon
Does ipex-llm support the Intel® Movidius™ Vision Processing Unit (VPU)?
#12717, opened Jan 17, 2025 by dockerg
Error when running llama3.2-vision with ipex-llm's built-in Ollama.
#12707, opened Jan 15, 2025 by 1ngram433
The reference results are blank with the DeepSeek model and our generate example code
#12696, opened Jan 10, 2025 by K-Alex13
[ipex-llm[cpp]][ollama] Low performance and GPU usage when running the minicpm3-4B model
#12675, opened Jan 8, 2025 by jianjungu