Highly quantized language models that can run locally are becoming increasingly popular, with even Chrome shipping a Gemini Nano model in its latest canary builds. Models like Phi-3-mini already achieve impressive performance for their comparatively small size and support cross-platform inference via a Rust library named candle.
It would be cool if we could bundle such a model with D2, e.g. as a command and/or as a Conversator.
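As a rough sketch of what the Conversator variant could look like (note that the protocol signature and the LocalLLM wrapper below are only placeholders I made up, not the actual D2 API or any specific inference library):

```swift
import Foundation

// Hypothetical shape of the Conversator protocol; the actual
// signature in D2 may differ.
protocol Conversator {
    func answer(input: String) async throws -> String
}

// Placeholder for whichever local inference backend ends up being
// used (candle via FFI, llama.cpp, llama.swift, ...).
struct LocalLLM {
    let modelPath: String

    func complete(prompt: String, maxTokens: Int) async throws -> String {
        // Actual token generation against the bundled model would happen here.
        fatalError("Wire up the chosen inference backend")
    }
}

// A Conversator backed by a bundled, quantized model such as Phi-3-mini.
struct QuantizedModelConversator: Conversator {
    let llm: LocalLLM

    func answer(input: String) async throws -> String {
        try await llm.complete(prompt: input, maxTokens: 256)
    }
}
```

The command variant could then simply forward a message's content through the same LocalLLM wrapper.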
llama.cpp and llama.swift would be worth investigating, even though the latter might primarily target Apple platforms.