How to build llamafile.exe with Intel OneAPI (MKL) compilers? #430

Asked by llm-finetune in Q&A
Answered by jart


llamafile should already give you very good performance, for the reasons explained in https://justine.lol/matmul/. Basically, you get BLAS-like performance out of the box. If you still want to use MKL, it's recommended that you build https://github.com/ggerganov/llama.cpp/ directly and follow the instructions at https://github.com/ggerganov/llama.cpp/?tab=readme-ov-file#intel-onemkl
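For reference, the linked llama.cpp instructions amount to a CMake build using the Intel oneAPI compilers with MKL as the BLAS backend. A minimal sketch follows; the exact flag names are those documented in the llama.cpp README at the time and may have changed in newer revisions:

```sh
# Load the Intel oneAPI environment (icx/icpx compilers and MKL).
source /opt/intel/oneapi/setvars.sh

# Configure llama.cpp with the Intel compilers and MKL as the BLAS
# backend. Flag names follow the linked llama.cpp README and may
# differ in later versions of the project.
cmake -B build \
  -DCMAKE_C_COMPILER=icx \
  -DCMAKE_CXX_COMPILER=icpx \
  -DLLAMA_BLAS=ON \
  -DLLAMA_BLAS_VENDOR=Intel10_64lp

cmake --build build --config Release
```

Note this produces a native llama.cpp binary rather than a portable llamafile.exe, which is the trade-off the answer is pointing at: llamafile's tinyBLAS-style kernels give BLAS-like performance without any external dependency, while MKL requires building against llama.cpp directly.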

Answer selected by llm-finetune