## LLMstudio Version 1.0.0b1

### What was done in this PR:

- New libraries: `llmstudio-core`, `llmstudio-tracker`, `llmstudio-proxy`, and `llmstudio` (the former monolith).
- **Modularization and Refactoring**: Restructured the codebase into modules to improve maintainability and scalability, including separating the server components and organizing imports.
- **LLM Provider Updates**: Changed how LLM providers are instantiated. A provider now requires additional parameters such as `api_key` and accepts a structured `chat_request`; the proxy and tracker are now optional, allowing more flexible usage.
- **Feature Enhancements**:
  - Added asynchronous chat support, with and without streaming, alongside synchronous methods (see the usage sketch after these notes).
  - Introduced `session_id` tracking for chat sessions, plus logging support, for better traceability (a hedged example closes these notes).
  - Added dynamic API versioning and support for multiple API providers, including OpenAI, Azure, and VertexAI.
  - Added support for LangChain agents and tool calling with parallel execution.
  - Adapted the VertexAI integration for both regular and tool-calling use.
  - Added legacy UTC handling to stay compatible with older Python versions.
- **Automated Testing and Documentation**:
  - Expanded test automation for the development workflow, including tests for the modularized components.
  - Updated the documentation, added docstrings to the LLM classes, and provided a tutorial on using `langgraph`.
- **Formatting and Configuration**:
  - Applied consistent formatting, added `isort` and `flake8` configurations, and updated the pre-commit hooks.
  - Updated `config.yaml` for easier configuration management and improved initialization of Azure-based clients.

### How it was tested:

- Ran automated tests across the development environments to verify the modularized components and the new functionality.
- Validated asynchronous and synchronous chat against several LLM providers, including OpenAI, Azure, and VertexAI.
- Tested the new `session_id` integration for accurate session tracking and clean interplay with the logging tools.
- Verified compatibility across Python versions, particularly the UTC handling on legacy versions.
- Reviewed the server component separation and confirmed it works with modular server deployments.

### Additional notes:

- **Any breaking changes?** Yes, LLM provider instantiation changed. A provider is now created and called like this:

  ```python
  llm = LLM(provider=provider, api_key=api_key, **kwargs)

  chat_request = {
      "chat_input": "Hello, my name is Json",
      "model": model,
      "is_stream": False,
      "retries": 0,
      "parameters": {
          "temperature": 0,
          "max_tokens": 100,
          "response_format": {"type": "json_object"},
          "functions": None,
      },
  }

  # achat is a coroutine, so await it from async code
  response = await llm.achat(**chat_request)
  ```

- **Any new dependencies added?** Yes, new dependencies were introduced to support the modularization, API versioning, and enhanced testing; see `pyproject.toml` for the exact packages.
- **Any performance improvements?** Yes: tool calls now execute in parallel, the server components were streamlined, and function calls inside the LLM provider were reduced, all of which should improve overall performance.
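For reference, here is a minimal sketch of the new synchronous/asynchronous split. The `achat` method and the `chat_request` shape come from the breaking-change example above; the import path, the synchronous method name `chat`, and the model name are assumptions, so check the actual package layout before copying this.

```python
import asyncio

from llmstudio import LLM  # assumed import path; check the package layout

# Proxy and tracking are optional in the new instantiation.
llm = LLM(provider="openai", api_key="sk-...")

chat_request = {
    "chat_input": "Hello, my name is Json",
    "model": "gpt-4o-mini",  # placeholder model name
    "is_stream": False,
    "retries": 0,
    "parameters": {"temperature": 0, "max_tokens": 100},
}

# Synchronous call (method name assumed to be `chat`).
print(llm.chat(**chat_request))

# Asynchronous call: achat is a coroutine and must be awaited.
async def main() -> None:
    response = await llm.achat(**chat_request)
    print(response)

asyncio.run(main())
```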
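And a hypothetical sketch of the `session_id` tracking mentioned above. This PR does not specify whether the id is passed at construction time or per request, so treat the parameter placement below as an assumption rather than the confirmed API.

```python
from llmstudio import LLM  # assumed import path, as above

# Assumption: session_id is accepted at construction and attached to every call.
llm = LLM(provider="openai", api_key="sk-...", session_id="support-chat-42")

llm.chat(chat_input="Hi, my name is Json.", model="gpt-4o-mini")
llm.chat(chat_input="What is my name?", model="gpt-4o-mini")
# Both requests should now be grouped under "support-chat-42" in the
# tracker's logs, making multi-turn sessions traceable end to end.
```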