
Automate upstream llama.cpp sync #1910

Open
bgs4free opened this issue Jan 26, 2025 · 4 comments

Comments


bgs4free commented Jan 26, 2025

I'm trying to build with a newer upstream version of llama.cpp. Syncing the changes from upstream llama.h into llama_cpp.py is quite a toil. Has anyone (successfully) attempted to use LLM assistance to make it less painful?
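Part of that toil can be mechanized without any LLM: the exported symbols in the upstream llama.h can be diffed against the names already bound in llama_cpp.py to see what's missing. A minimal sketch in Python, assuming upstream marks exported functions with `LLAMA_API` and that each binding appears as a `def` of the same name (both hold for recent versions, but treat this as an assumption, not a guarantee):

```python
import re

def header_symbols(header_text: str) -> set[str]:
    """Collect function names from LLAMA_API declarations in a llama.h-style header."""
    # Matches e.g.: LLAMA_API int32_t llama_n_ctx(const struct llama_context * ctx);
    pattern = re.compile(r"LLAMA_API\s+[\w\s\*]+?\b(llama_\w+)\s*\(")
    return set(pattern.findall(header_text))

def binding_symbols(py_text: str) -> set[str]:
    """Collect function names already bound in a llama_cpp.py-style module."""
    # llama-cpp-python wraps each C function in a Python def of the same name.
    pattern = re.compile(r"^def\s+(llama_\w+)\s*\(", re.MULTILINE)
    return set(pattern.findall(py_text))

# Tiny inline samples standing in for the real llama.h and llama_cpp.py:
header = """
LLAMA_API struct llama_context * llama_new_context_with_model(...);
LLAMA_API int32_t llama_n_ctx(const struct llama_context * ctx);
LLAMA_API void llama_free(struct llama_context * ctx);
"""

bindings = """
def llama_n_ctx(ctx):
    ...
def llama_free(ctx):
    ...
"""

missing = header_symbols(header) - binding_symbols(bindings)
print(sorted(missing))  # → ['llama_new_context_with_model']
```

This only flags missing names, not changed signatures, so it narrows the search rather than replacing a manual review of the diff.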

@m-bappe-therman7777

If you are able to build, could you share the llama_cpp.py (updated) and the version of llama.cpp you're using? Thanks in advance!

@bgs4free (Author)

> If you are able to build, could you share the llama_cpp.py (updated) and the version of llama.cpp you're using? Thanks in advance!

Unfortunately, I'm not. I'm currently attempting to get commit afa8a9e of the upstream project running. There are lots of breaking changes I'd have to understand first, and they will affect more than just llama_cpp.py. Some functions seem to be gone, and I'm not sure how to replace them, if at all.

I have no prior experience with either this project or llama.cpp. I just gave it a shot because I got impatient waiting for the DeepSeek tokenizer.

@JamePeng

Maybe you can try my repaired version. #1901

@bgs4free (Author)

> Maybe you can try my repaired version. #1901

Yes, I can confirm this works. I can use the DeepSeek R1 distills now. Great work!

No branches or pull requests

3 participants