
MacOS 404 reply with Ollama #1034

Closed
vdddslep opened this issue Feb 2, 2025 · 5 comments
Labels
help wanted Great issue for non-Block contributors

Comments

@vdddslep

vdddslep commented Feb 2, 2025

Goose v1.0.4

[screenshot]

% cat ~/.config/goose/config.yaml
OLLAMA_HOST: http://localhost:11434/
GOOSE_PROVIDER: ollama
GOOSE_MODEL: qwen2.5

@vdddslep
Author

vdddslep commented Feb 2, 2025

[screenshot]

Same result.

@salman1993
Collaborator

salman1993 commented Feb 2, 2025

Can you confirm that your Ollama server is running? You might need to open another terminal window and run: ollama serve

then verify Ollama is able to serve requests (replace qwen2.5 with your model name — JSON does not allow comments, so don't put the note inside the payload):

  curl http://localhost:11434/v1/chat/completions \
    -H "Content-Type: application/json" \
    -d '{
        "model": "qwen2.5",
        "messages": [
            {
                "role": "system",
                "content": "You are a helpful assistant."
            },
            {
                "role": "user",
                "content": "Hello!"
            }
        ]
    }'

If the curl request works, then Goose should be able to use the Ollama model.

@salman1993 salman1993 added the help wanted Great issue for non-Block contributors label Feb 2, 2025
@vdddslep
Author

vdddslep commented Feb 2, 2025

[screenshot]

It's running and it works; I'm using it in Open WebUI.

[screenshot]

Also, as far as I remember, the v1 in curl http://localhost:11434/v1/chat/completions means it's the OpenAI-like format.

@salman1993
Collaborator

salman1993 commented Feb 2, 2025

That's expected: we're using Ollama's OpenAI-compatible endpoint.

If you're using the Goose GUI, can you try resetting the provider and then chatting?
[screenshot]

I am not able to replicate the error you're seeing; goose configure succeeds for me.
[screenshot]

@vdddslep
Author

vdddslep commented Feb 2, 2025

I've figured it out!
I put the model name in as qwen2.5:7b, it let me save it, and now everything works!
Before, when I tried to save the model name as qwen2.5:7b, it was saved as qwen2.5 instead. I'm sure I tried this a few times.

Anyway, thank you for your help.
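For reference, the working config implied by this thread would presumably look like the following (a sketch: the qwen2.5:7b tag is taken from the comment above, and the tag must match exactly what your local Ollama has pulled):

```yaml
# ~/.config/goose/config.yaml
OLLAMA_HOST: http://localhost:11434/
GOOSE_PROVIDER: ollama
GOOSE_MODEL: qwen2.5:7b   # must match a tag listed by `ollama list`
```

Requesting a model name Ollama doesn't have under that exact tag is consistent with the 404 responses reported here; `ollama list` shows the exact tags available.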

@vdddslep vdddslep closed this as completed Feb 2, 2025