
[Bug]: o1 model in LibreChat UI does not work properly with vision inputs. #5174

Closed
rohit901 opened this issue Jan 3, 2025 · 0 comments · Fixed by #5170
Labels
🐛 bug Something isn't working

Comments


rohit901 commented Jan 3, 2025

What happened?

I was using the o1 model to understand a technical concept, and my initial messages were plain text. Partway through, I passed some screenshots along with my prompt to the o1 model in the LibreChat UI, but it did not work as intended, and the model also appears to have changed, judging by the different color of its icon.
Refer to the screenshot below and the garbage response the model produced:
Screen Shot 2025-01-03 at 6 05 51 PM

The o1 API should support vision inputs, so the above should not be happening. What is happening behind the scenes in LibreChat when we pass attachments in a chat session?
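For context, a vision request to the OpenAI Chat Completions API sends the image alongside the text inside a single multimodal user message. The sketch below (the model name, prompt, and image URL are placeholders, not taken from this issue) shows the payload shape a frontend like LibreChat would need to forward unchanged for the selected model to actually see the image; if the proxy silently substitutes a different model, the user-visible behavior changes even though the payload is valid.

```python
import json

def build_vision_messages(prompt: str, image_url: str) -> list:
    """Build one multimodal user message: a text part plus an image part."""
    return [
        {
            "role": "user",
            "content": [
                {"type": "text", "text": prompt},
                {"type": "image_url", "image_url": {"url": image_url}},
            ],
        }
    ]

# Hypothetical request body; "o1" is the model the user selected in the UI.
payload = {
    "model": "o1",
    "messages": build_vision_messages(
        "Explain the diagram in this screenshot.",
        "https://example.com/screenshot.png",  # placeholder URL
    ),
}
print(json.dumps(payload, indent=2))
```

The bug described here is consistent with the `model` field being rewritten (e.g. to `gpt-4o`) somewhere between the UI and the API call whenever attachments are present.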

A while back, I noticed that whenever I used an Anthropic model with an attachment or file, I would see usage of two different models: if I had selected Sonnet in the dropdown and included an attachment (such as an image or file) with my prompt, both Sonnet and Haiku would show usage in my API dashboard. That makes me wonder what exactly happens behind the scenes when you use an attachment with the LibreChat UI.

I started a brand-new chat session with the same prompt, and it still gave me a garbage response. Refer to the screenshot below.
Screen Shot 2025-01-03 at 6 21 22 PM

I discussed this issue on the Discord server with @berry-13 as well, and he confirmed that the gpt-4o model is being used in the backend instead of o1. Please refer to the screenshot below.
gpt4o_bug_librechat

I hope this issue can be fixed, and I look forward to more clarity on why it happened in the first place.

Steps to Reproduce

  1. Use the o1 model in LibreChat v0.7.6 and attach two images.
  2. Ask the model to reason about, or answer a query based on, the contents of the attached images.
  3. It will produce garbage output.

What browsers are you seeing the problem on?

No response

Relevant log output

No response

Screenshots

No response

Code of Conduct

  • I agree to follow this project's Code of Conduct
@rohit901 rohit901 added the 🐛 bug Something isn't working label Jan 3, 2025