MacOS 404 reply with Ollama #1034
Comments
Can you confirm that your Ollama server is running? You might need to open another terminal window and start the Ollama server, then verify it is able to serve requests:

```shell
curl http://localhost:11434/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "qwen2.5",
    "messages": [
      {
        "role": "system",
        "content": "You are a helpful assistant."
      },
      {
        "role": "user",
        "content": "Hello!"
      }
    ]
  }'
```

Replace `qwen2.5` with your model name (note that `#` comments are not valid inside the JSON body). If the curl request works, then Goose should be able to use the Ollama model.
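The same check can be scripted instead of typed into curl. A minimal Python sketch using only the standard library, assuming the server listens on `localhost:11434` and the model is `qwen2.5` (the helper name is mine, not part of Goose or Ollama):

```python
import json
import urllib.request


def build_chat_request(model: str, user_message: str,
                       base_url: str = "http://localhost:11434") -> urllib.request.Request:
    """Build a POST request against Ollama's OpenAI-compatible chat endpoint."""
    payload = {
        "model": model,
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": user_message},
        ],
    }
    return urllib.request.Request(
        f"{base_url}/v1/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )


# Sending it requires a running Ollama server:
# with urllib.request.urlopen(build_chat_request("qwen2.5", "Hello!")) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```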
It's running and it works; I'm using it in Open WebUI. Also, as far as I remember, `curl http://localhost:11434/v1/chat/completions` uses the `/v1` path, which is the OpenAI-compatible format.
I've figured it out! Anyway, thank you for your help.
Goose v1.0.4
```
% cat ~/.config/goose/config.yaml
OLLAMA_HOST: http://localhost:11434/
GOOSE_PROVIDER: ollama
GOOSE_MODEL: qwen2.5
```
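One detail worth double-checking in a config like this: `OLLAMA_HOST` ends with a trailing slash, and naive string concatenation with an API path can yield a double slash (`http://localhost:11434//v1/...`), which can confuse some routers. A minimal sketch of joining the host and path safely (the helper name is hypothetical, not Goose's actual code):

```python
def ollama_endpoint(host: str, path: str = "/v1/chat/completions") -> str:
    # Strip any trailing slash from the host before appending the API path,
    # so both "http://localhost:11434" and "http://localhost:11434/" work.
    return host.rstrip("/") + path


print(ollama_endpoint("http://localhost:11434/"))
```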