Generic OpenAI API compatible provider (Deepseek, Helicone, LiteLLM, etc.) #885
Comments
Second this: I'm trying to use Perplexity AI, which has an OpenAI-compatible API, but I'm not able to configure it.
you can set …
@willemsFEB @242816 would it make sense to have openai, but then a separate "openai compatible" provider (separate choices?) - the latter will ask for more config, thoughts?
That makes sense. You may also want to add things like the Azure API version there as an optional param.
+1 for an OpenAI-compatible provider in the UI
+1 for an OpenAI-compatible API. I think it would make more sense to have separate configurations for OpenAI and OpenAI-compatible APIs. This way, you have more control over the API version of the OpenAI-compatible provider, which may be necessary to work around edge cases of mismatched API versions.
+1 for an OpenAI-compatible provider in the UI 🙏
+1 for an OpenAI-compatible provider in the UI. Our institution is starting to evaluate LiteLLM, which uses an OpenAI-compatible API with its own API key. This will likely become more prevalent as businesses like LiteLLM spring up to offer cloud and on-prem ways to host open-source models.
@michaelneale thanks for the suggestion. I've set … and I'm able to get further; however, goose is now appending … Any suggestions on how I can bypass this?
not at the moment - will need to enhance things to have a more generic option (which I think is a good idea). If you are able to, you can clone and run …
Is it possible to get …? A prompt something like …
Or is it more complex than that?
Yeah @242816 that should work, as long as the host supports all the OpenAI features, including tools. Contributions welcome, but we will get started on this soon if no one from this thread has tried it out yet.
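For anyone who wants to experiment before an official provider lands, here is a minimal sketch of what such a generic provider could look like in Rust. Everything in it is an illustrative assumption rather than goose's actual provider trait: the `GenericOpenAiProvider` name, the three env vars discussed in this thread, and the `reqwest` (blocking + json features) and `serde_json` dependencies.

```rust
use std::env;

/// Hypothetical generic provider: any OpenAI-compatible host,
/// configured entirely through environment variables.
struct GenericOpenAiProvider {
    base: String, // e.g. "https://api.deepseek.com/v1" (illustrative)
    api_key: String,
    model: String,
}

impl GenericOpenAiProvider {
    fn from_env() -> Result<Self, env::VarError> {
        Ok(Self {
            base: env::var("OPENAI_API_BASE")?,
            api_key: env::var("OPENAI_API_KEY")?,
            model: env::var("OPENAI_API_MODEL")?,
        })
    }

    fn complete(&self, prompt: &str) -> Result<String, Box<dyn std::error::Error>> {
        // POST the standard OpenAI chat body to {base}/chat/completions.
        let url = format!("{}/chat/completions", self.base.trim_end_matches('/'));
        let body = serde_json::json!({
            "model": self.model,
            "messages": [{ "role": "user", "content": prompt }],
        });
        let resp: serde_json::Value = reqwest::blocking::Client::new()
            .post(&url)
            .bearer_auth(&self.api_key)
            .json(&body)
            .send()?
            .error_for_status()?
            .json()?;
        // Pull the first choice's message content out of the response.
        Ok(resp["choices"][0]["message"]["content"]
            .as_str()
            .unwrap_or_default()
            .to_string())
    }
}
```

Because only the base URL, key, and model vary, the same code should work against Deepseek, LiteLLM, Helicone, LM Studio, or any other host that speaks the OpenAI chat completions API.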
There seems to be a lot of interest in an OpenAI-compatible provider 🙌🏼 If anyone is interested, this would be a great contribution; otherwise, we will hopefully get to it in a couple of weeks and update here. It should be similar to these existing providers: …
Great, can't wait to use Goose with LM Studio. :) |
I definitely want to use Helicone to observe and iterate on goose's LLM interactions. I think this is required for that to happen, but if there's another way to change the host and set an extra header, do let me know!
Since Goose does not support LM Studio as an LLM provider, I built an Ollama proxy to convert your queries to OpenAI. It's working on MLX models too. Check it out, hope it helps!
fwiw - I added mlx-lm as a provider. It's barebones but it works: https://github.com/fblissjr/goose-mlx
goose configure & session: …
mlx_lm.server log: …
Most inference providers support the OpenAI API.
So this morning I wanted to use Helicone to track goose calls.
With Aider, I would use this: https://aider.chat/docs/llms/openai-compat.html
I tried to use the existing OpenAI provider (https://github.com/block/goose/blob/main/crates/goose/src/providers/openai.rs), but this didn't work, as it's too tied to OpenAI.
So a generic provider should allow me to set the HOST, MODEL, and KEY for any provider I want.
Requirements
A new provider with the ability to set the following:
```
OPENAI_API_BASE=
OPENAI_API_KEY=
OPENAI_API_MODEL=
```
Note: the BASE needs to be everything up to and including the /v1, e.g. https://oai.helicione.ai/324324-234234-24324/v1
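One commenter above hit goose appending a path segment that their host URL already contained. Here is a hedged sketch of the kind of base-URL normalization that would avoid this; the helper name is hypothetical, and goose's real code may handle it differently:

```rust
// Normalize a user-supplied base URL so the final request path contains
// exactly one `/v1` segment, whether or not the user typed it.
fn chat_completions_url(base: &str) -> String {
    let base = base.trim_end_matches('/');
    if base.ends_with("/v1") {
        format!("{base}/chat/completions")
    } else {
        format!("{base}/v1/chat/completions")
    }
}

fn main() {
    // Both spellings of the base resolve to the same endpoint.
    assert_eq!(
        chat_completions_url("https://api.example.com/v1"),
        "https://api.example.com/v1/chat/completions"
    );
    assert_eq!(
        chat_completions_url("https://api.example.com/"),
        "https://api.example.com/v1/chat/completions"
    );
}
```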