How to configure API using Azure OpenAI? #179
Hi @peiyaoli, thanks for your question. OntoGPT does not currently have a way to interface with the Azure OpenAI service. It looks like that feature will be in the …
Many thanks
Hi @peiyaoli @caufieldjh, I believe we can help with this issue. I'm the maintainer of LiteLLM: https://github.com/BerriAI/litellm

TL;DR: you can use LiteLLM in the following ways.

With your own API key (this calls the provider API directly):

```python
from litellm import completion
import os

## set ENV variables
os.environ["OPENAI_API_KEY"] = "your-key"
os.environ["COHERE_API_KEY"] = "your-key"

messages = [{"content": "Hello, how are you?", "role": "user"}]

# openai call
response = completion(model="gpt-3.5-turbo", messages=messages)

# cohere call
response = completion(model="command-nightly", messages=messages)
```

Using the LiteLLM Proxy with a LiteLLM key (this is great if you don't have access to Claude but want to use the open-source LiteLLM proxy to access it):

```python
from litellm import completion
import os

## set ENV variables
os.environ["OPENAI_API_KEY"] = "sk-litellm-5b46387675a944d2"  # [OPTIONAL] replace with your openai key
os.environ["COHERE_API_KEY"] = "sk-litellm-5b46387675a944d2"  # [OPTIONAL] replace with your cohere key

messages = [{"content": "Hello, how are you?", "role": "user"}]

# openai call
response = completion(model="gpt-3.5-turbo", messages=messages)

# cohere call
response = completion(model="command-nightly", messages=messages)
```
many thanks, Bro!
Hi @caufieldjh, just want to check whether this feature has been updated or not.
Hi @peiyaoli, looks like the change made to …
I think we should adopt `litellm`. My original plan was to use …. I'm also less of a fan of my original strategy as …
@ishaan-jaff thanks for this great tool!
This is how gpt4all is currently implemented: it uses `llm-gpt4all`. The way it's called will change a bit in #306, but it's still calling that module. I do agree that the …
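For context, here is a minimal sketch of how the `llm` Python API is typically driven once the `llm-gpt4all` plugin is installed. The model name below is illustrative, not something specified in this thread; `llm models` lists the identifiers actually available on your system.

```python
# Minimal sketch, assuming `llm` and the `llm-gpt4all` plugin are installed
# (pip install llm llm-gpt4all). The model id below is illustrative only.
import llm

model = llm.get_model("orca-mini-3b-gguf2-q4_0")  # hypothetical model id
response = model.prompt("What is an ontology?")
print(response.text())
```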
Azure endpoints supported as of #373
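For anyone landing here with the same question, a minimal sketch of what Azure configuration looks like when calling LiteLLM directly. The endpoint, API version, and deployment name below are placeholders to replace with your own Azure resource values; none of them come from this thread.

```python
# Minimal sketch of calling an Azure OpenAI deployment through litellm.
# The key, endpoint, API version, and deployment name are all placeholders.
from litellm import completion
import os

os.environ["AZURE_API_KEY"] = "your-azure-key"
os.environ["AZURE_API_BASE"] = "https://your-resource.openai.azure.com"
os.environ["AZURE_API_VERSION"] = "2023-07-01-preview"

messages = [{"content": "Hello, how are you?", "role": "user"}]

# the "azure/<deployment-name>" prefix routes the call to your Azure deployment
response = completion(model="azure/your-deployment-name", messages=messages)
```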
Hi, we are using GPT provided by Azure. How should we configure the token for that? Many thanks