
Explore/azure oai vision #126

Merged · 5 commits merged into develop from explore/azure-oai-vision on Sep 3, 2024
Conversation

diogoazevedo15 (Contributor)

Summary

Updated the `input_to_string` method in `provider.py` to ensure compatibility with vision models.

`input_to_string`:

  1. Now appends text from messages containing images.
  2. Also adds the base64 image string to the token count (see the sketch below).

To be discussed: Should we include the base64 string in the token count?
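
For reference, here is a minimal sketch of how a method like this can handle vision input, assuming OpenAI-style chat messages where image content arrives as a list of `text` and `image_url` parts. The structure below is illustrative, not the actual `provider.py` implementation:

```python
def input_to_string(input):
    """Flatten chat input into a single string, e.g. for token counting."""
    if isinstance(input, str):
        return input
    parts = []
    for message in input:
        content = message.get("content")
        if isinstance(content, str):
            parts.append(content)
        elif isinstance(content, list):
            # Vision messages carry a list of typed parts instead of a string.
            for item in content:
                if item.get("type") == "text":
                    parts.append(item.get("text", ""))
                elif item.get("type") == "image_url":
                    # Include the base64 data URL so it contributes to the
                    # token count (the open question discussed above).
                    parts.append(item.get("image_url", {}).get("url", ""))
    return "".join(parts)
```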

diogoazevedo15 and others added 5 commits August 30, 2024 14:35
1. Update Llama response parsing for calls that include functions, where the functions are not used to produce the response.
2. Remove unused chunk code from provider.py
Updated the `input_to_string` method to ensure compatibility with vision models.
@diogoazevedo15 diogoazevedo15 changed the base branch from main to develop September 2, 2024 17:10
@diogoazevedo15 diogoazevedo15 merged commit 66bc3ae into develop Sep 3, 2024
3 of 4 checks passed
@diogoazevedo15 diogoazevedo15 deleted the explore/azure-oai-vision branch September 3, 2024 12:09
@diogoncalves diogoncalves mentioned this pull request Sep 9, 2024
claudiolemos added a commit that referenced this pull request Sep 9, 2024
## LLMstudio Version 0.3.11

### What was done in this version:

- Updated the `input_to_string` method in `provider.py` to ensure compatibility with vision models -- [PR 126](#126).
- Added events to the startup process of the tracking, UI, and engine servers. This removes the race conditions we were repeatedly experiencing and also removes the need to run `start_server()` as early as possible (see the sketch after this list) -- [PR 129](#129).
- Improved exception handling for invalid Azure endpoints -- [PR 129](#129).
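
As a rough illustration of the event-based startup, here is a minimal sketch that uses `threading.Event` to block until a server is up; the names and structure are illustrative, not the actual LLMstudio implementation:

```python
import threading

# One readiness event per component (tracking, UI, engine).
engine_ready = threading.Event()

def run_engine_server():
    # ... bind the port and start serving here ...
    engine_ready.set()  # signal that the engine is accepting requests

def start_server():
    thread = threading.Thread(target=run_engine_server, daemon=True)
    thread.start()
    # Block until the engine is up instead of racing ahead, so callers no
    # longer need to invoke start_server() as early as possible.
    if not engine_ready.wait(timeout=30):
        raise RuntimeError("engine server did not start in time")
```

And a hedged sketch of what validating an Azure endpoint might look like with the `openai` v1 client; the helper name and the probe call are assumptions for illustration, not the actual fix:

```python
from openai import APIConnectionError, AzureOpenAI

def check_azure_endpoint(endpoint: str, api_key: str) -> None:
    """Fail fast with a clear error when the Azure endpoint is invalid."""
    client = AzureOpenAI(
        azure_endpoint=endpoint,
        api_key=api_key,
        api_version="2024-02-01",
    )
    try:
        client.models.list()  # lightweight probe: does the endpoint respond?
    except APIConnectionError as exc:
        raise ValueError(f"Invalid Azure endpoint: {endpoint}") from exc
```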


### How it was tested:

- Ran projects with LLMStudio server dependencies

### Additional notes:

- Any breaking changes? 
    - No
- Any new dependencies added?
    - No
- Any performance improvements?
    - Yes. Servers are now launched synchronously, preventing parent processes from calling LLMStudio before the servers are up.