Langgraph examples #165 (Closed)
gabrielrfg wants to merge 15 commits into feat/modularization_shift_and_lift from feat/langgraph-example
Conversation
## LLMstudio Version 0.3.11

### What was done in this version:
- Updated the `input_to_string` method in `provider.py` to ensure compatibility with vision models -- [PR 126](#126)
- Added events to the startup process of tracking, UI and engine. This removes the race conditions we were experiencing repeatedly and also removes the need to run `start_server()` as early as possible -- [PR 129](#129)
- Improved exception handling for invalid Azure endpoints -- [PR 129](#129)

### How it was tested:
- Ran projects with LLMstudio server dependencies

### Additional notes:
- Any breaking changes? No
- Any new dependencies added? No
- Any performance improvements? Yes. Servers are now launched synchronously, preventing parent processes from calling LLMstudio before the servers are up.
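A minimal sketch of how the synchronous startup is meant to be used, based only on the `start_server()` call referenced above; the import path is an assumption and may differ in the installed package:

```python
# Sketch only: the exact import path is an assumption, not confirmed by this PR.
from llmstudio.server import start_server

# With the event-based startup, start_server() only returns once the tracking,
# UI and engine services signal that they are ready, so code that depends on
# them can run immediately afterwards without racing the server processes.
start_server()
```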
* Add Docker build and push to workflows: updated `upload-pypi-dev` and `upload-pypi` to build and push a Docker image with the most recent version of LLMstudio (including dev releases) to the tensoropsai Docker Hub.
* Fix some lint issues
* Add LLM docs
* Add how-to guides: `deploy-on-gke` and `build-a-tool-agent`.
## LLMstudio Version 0.3.12

### What was done in this PR:
- Updated the llmstudio version in `pyproject.toml` from 0.3.11 to 0.3.12
1. Change the image paths in the guide from `how-to/deploy-on-gke/images/` to `images/`, since the images are stored at https://mintlify.s3-us-west-1.amazonaws.com/tensorops/how-to/deploy-on-gke/images/
## LLMstudio Version 0.3.12

### What was done in this PR:
- Fixed image pathing for the `deploy-on-gke` how-to guide.
- Fixed the lettering of the `deploy-on-gke` guide title.
@gabrielrfg cherry picked to the right branch
Picked the notebook to version 1.0.0
## LLMstudio Version 1.0.0

### What was done in this PR:
- Added a LangGraph example notebook

### How it was tested:
- Ran the notebook and did not see any red text :)

### Additional notes:
- It was necessary to create an auxiliary function to pretty-print the results of a LangGraph agent, because printing while messages are still being generated causes LangGraph to miss parallel tool calls (not related to LLMstudio).
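A rough sketch of what such a helper might look like, assuming the agent's final state is a dict with a `messages` list of LangChain message objects; the function name, graph object, and input shown here are illustrative, not the notebook's actual code:

```python
# Hypothetical pretty-printer for a finished LangGraph run.
# Printing only after the graph returns avoids losing parallel tool calls
# that can be skipped when printing while messages are still streaming.
def pretty_print_agent_result(final_state: dict) -> None:
    """Print every message in the agent's final state, including tool calls."""
    for message in final_state.get("messages", []):
        print(f"--- {message.__class__.__name__} ---")
        # AI messages may carry several (possibly parallel) tool calls.
        for tool_call in getattr(message, "tool_calls", []) or []:
            print(f"tool call: {tool_call['name']}({tool_call['args']})")
        if message.content:
            print(message.content)

# Usage (hypothetical graph and input):
# final_state = graph.invoke({"messages": [("user", "Compare the weather in Lisbon and Porto")]})
# pretty_print_agent_result(final_state)
```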