Commit
Migrating from llm-wrapper to allms. All code refactored, docs and README.md updated
riccardo-alle committed Feb 27, 2024
1 parent 77a971d commit f2ea35d
Showing 59 changed files with 170 additions and 146 deletions.
4 changes: 2 additions & 2 deletions .github/workflows/release.yml
Original file line number Diff line number Diff line change
Expand Up @@ -20,9 +20,9 @@ jobs:
run: make install-poetry
- name: Install dependencies
run: make install-env
- name: Build llm-wrapper package
- name: Build allms package
run: make build
- name: Publish llm-wrapper package to PyPI
- name: Publish allms package to PyPI
env:
TWINE_PASSWORD: ${{ secrets.PYPI_API_TOKEN }}
run: |
Expand Down
2 changes: 1 addition & 1 deletion Makefile
Original file line number Diff line number Diff line change
Expand Up @@ -12,7 +12,7 @@ build::
poetry run python -m build --sdist --wheel .

linter::
poetry run pylint llm_wrapper --reports=no --output-format=colorized --fail-under=8.0
poetry run pylint allms --reports=no --output-format=colorized --fail-under=8.0

tests::
poetry run python -m pytest -s --verbose
Expand Down
36 changes: 18 additions & 18 deletions README.md
Original file line number Diff line number Diff line change
@@ -1,16 +1,16 @@
# llm-wrapper
# allms

___
## About

llm-wrapper is a versatile and powerful library designed to streamline the process of querying Large Language Models
allms is a versatile and powerful library designed to streamline the process of querying Large Language Models
(LLMs) 🤖💬

Developed by the Allegro engineers, llm-wrapper is based on popular libraries like transformers, pydantic, and langchain. It takes care
Developed by Allegro engineers, allms is based on popular libraries like transformers, pydantic, and langchain. It takes care
of the boring boilerplate code you write around your LLM applications, quickly enabling you to prototype ideas and eventually helping you scale up
for production use cases!

Among the llm-wrapper most notable features, you will find:
Among allms' most notable features, you will find:

* **😊 Simple and User-Friendly Interface**: The module offers an intuitive and easy-to-use interface, making it straightforward to work with the model.

Expand All @@ -28,7 +28,7 @@ ___

Full documentation available at **[llm-wrapper.allegro.tech](https://llm-wrapper.allegro.tech/)**

Get familiar with llm-wrapper 🚀: [introductory jupyter notebook](https://github.com/allegro/llm-wrapper/blob/main/examples/introduction.ipynb)
Get familiar with allms 🚀: [introductory jupyter notebook](https://github.com/allegro/allms/blob/main/examples/introduction.ipynb)

___

Expand All @@ -39,23 +39,23 @@ ___
Install the package via pip:

```
pip install llm-wrapper
pip install allms
```

### Basic Usage ⭐

Configure endpoint credentials and start querying the model with any prompt:

```python
from llm_wrapper.models import AzureOpenAIModel
from llm_wrapper.domain.configuration import AzureOpenAIConfiguration
from allms.models import AzureOpenAIModel
from allms.domain.configuration import AzureOpenAIConfiguration

configuration = AzureOpenAIConfiguration(
api_key="your-secret-api-key",
base_url="https://endpoint.openai.azure.com/",
api_version="2023-03-15-preview",
deployment="gpt-35-turbo",
model_name="gpt-3.5-turbo"
api_key="your-secret-api-key",
base_url="https://endpoint.openai.azure.com/",
api_version="2023-03-15-preview",
deployment="gpt-35-turbo",
model_name="gpt-3.5-turbo"
)

gpt_model = AzureOpenAIModel(config=configuration)
Expand Down Expand Up @@ -102,7 +102,7 @@ responses = model.generate(prompt=prompt, input_data=input_data)

### Forcing Structured Output Format

Through pydantic integration, in llm-wrapper you can pass an output dataclass and force the LLM to provide
Through pydantic integration, allms lets you pass an output dataclass and force the LLM to return
its response in a structured way.
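The idea can be sketched with plain stdlib code. Everything below is illustrative only: the class and function names are not part of the allms API, and allms performs this schema validation for you via pydantic rather than by hand.

```python
import json
from dataclasses import dataclass

# Hypothetical output schema; with allms you would pass a pydantic-style
# output dataclass instead. Names here are stand-ins.
@dataclass
class ReviewSummary:
    summary: str
    keywords: list

def parse_structured_response(raw: str) -> ReviewSummary:
    # Done by hand here to show what "forcing" a structured output
    # format means: the raw model text must fit the declared schema.
    payload = json.loads(raw)
    return ReviewSummary(summary=payload["summary"], keywords=payload["keywords"])

raw_llm_response = '{"summary": "Great phone", "keywords": ["battery", "screen"]}'
result = parse_structured_response(raw_llm_response)
print(result.keywords)  # ['battery', 'screen']
```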

```python
Expand Down Expand Up @@ -151,7 +151,7 @@ ___

We assume that you have python `3.10.*` installed on your machine.
You can set it up using [pyenv](https://github.com/pyenv/pyenv#installationbrew)
([How to install pyenv on MacOS](https://jordanthomasg.medium.com/python-development-on-macos-with-pyenv-2509c694a808)). To install llm-wrapper env locally:
([How to install pyenv on macOS](https://jordanthomasg.medium.com/python-development-on-macos-with-pyenv-2509c694a808)). To install the allms environment locally:

* Activate your pyenv;
* Install Poetry via:
Expand All @@ -160,7 +160,7 @@ You can set it up using [pyenv](https://github.com/pyenv/pyenv#installationbrew)
make install-poetry
```

* Install llm-wrapper dependencies with the command:
* Install allms dependencies with the command:

```bash
make install-env
Expand Down Expand Up @@ -189,7 +189,7 @@ via the github action `.github/workflows/docs.yml`

### Make a new release

When a new version of llm-wrapper is ready to be released, do the following operations:
When a new version of allms is ready to be released, do the following operations:

1. **Merge to master** the dev branch in which the new version has been specified:
1. In this branch, `version` under the `[tool.poetry]` section in `pyproject.toml` should be updated, e.g. `0.1.0`;
Expand All @@ -207,5 +207,5 @@ When a new version of llm-wrapper is ready to be released, do the following oper
1. Go to _Releases_ → _Draft a new release_;
2. Select the recently created tag in _Choose a tag_ window;
3. Copy/paste all the content present in the CHANGELOG under the version you are about to release;
4. Upload `llm_wrapper-<NEW-VERSION>.whl` and `llm_wrapper-<NEW-VERSION>.tar.gz` as assets;
4. Upload `allms-<NEW-VERSION>.whl` and `allms-<NEW-VERSION>.tar.gz` as assets;
5. Click `Publish release`.
File renamed without changes.
File renamed without changes.
Original file line number Diff line number Diff line change
Expand Up @@ -7,10 +7,10 @@
from langchain.chains.combine_documents.base import BaseCombineDocumentsChain
from langchain.schema import Document

from llm_wrapper.domain.enumerables import AggregationLogicForLongInputData, LanguageModelTask
from llm_wrapper.domain.input_data import InputData
from llm_wrapper.domain.prompt_dto import AggregateOutputClass, KeywordsOutputClass, SummaryOutputClass
from llm_wrapper.utils.long_text_processing_utils import split_text_to_max_size
from allms.domain.enumerables import AggregationLogicForLongInputData, LanguageModelTask
from allms.domain.input_data import InputData
from allms.domain.prompt_dto import AggregateOutputClass, KeywordsOutputClass, SummaryOutputClass
from allms.utils.long_text_processing_utils import split_text_to_max_size


class LongTextProcessingChain(BaseCombineDocumentsChain):
Expand Down
File renamed without changes.
File renamed without changes.
File renamed without changes.
File renamed without changes.
File renamed without changes.
File renamed without changes.
File renamed without changes.
File renamed without changes.
File renamed without changes.
File renamed without changes.
File renamed without changes.
File renamed without changes.
File renamed without changes.
File renamed without changes.
File renamed without changes.
Original file line number Diff line number Diff line change
Expand Up @@ -2,7 +2,7 @@

from pydantic import BaseModel

from llm_wrapper.domain.input_data import InputData
from allms.domain.input_data import InputData


class ResponseData(BaseModel):
Expand Down
File renamed without changes.
7 changes: 7 additions & 0 deletions allms/models/__init__.py
Original file line number Diff line number Diff line change
@@ -0,0 +1,7 @@
from allms.models.azure_llama2 import AzureLlama2Model
from allms.models.azure_mistral import AzureMistralModel
from allms.models.azure_openai import AzureOpenAIModel
from allms.models.vertexai_gemini import VertexAIGeminiModel
from allms.models.vertexai_palm import VertexAIPalmModel

__all__ = ["AzureOpenAIModel", "AzureLlama2Model", "AzureMistralModel", "VertexAIPalmModel", "VertexAIGeminiModel"]
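The new `allms/models/__init__.py` above re-exports the public model classes and pins the star-import surface with `__all__`. A generic sketch of that mechanism (module and class names below are stand-ins, not the real allms modules):

```python
import sys
import types

# Build a throwaway module with one public class, one private class, and an
# __all__ exposing only the public one, mimicking a package __init__.py.
mod = types.ModuleType("fake_models")
exec(
    "class AzureOpenAIModel:\n"
    "    pass\n"
    "class _InternalHelper:\n"
    "    pass\n"
    "__all__ = ['AzureOpenAIModel']\n",
    mod.__dict__,
)
sys.modules["fake_models"] = mod

namespace = {}
exec("from fake_models import *", namespace)  # star import honours __all__
public_names = sorted(k for k in namespace if not k.startswith("__"))
print(public_names)  # ['AzureOpenAIModel']
```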
24 changes: 12 additions & 12 deletions llm_wrapper/models/abstract.py → allms/models/abstract.py
Original file line number Diff line number Diff line change
Expand Up @@ -17,21 +17,21 @@
from langchain.schema import OutputParserException
from pydantic import BaseModel

from llm_wrapper.chains.long_text_processing_chain import (
from allms.chains.long_text_processing_chain import (
LongTextProcessingChain,
load_long_text_processing_chain
)
from llm_wrapper.constants.input_data import IODataConstants
from llm_wrapper.constants.prompt import PromptConstants
from llm_wrapper.defaults.general_defaults import GeneralDefaults
from llm_wrapper.domain.enumerables import AggregationLogicForLongInputData, LanguageModelTask

from llm_wrapper.defaults.long_text_chain import LongTextChainDefaults
from llm_wrapper.domain.input_data import InputData
from llm_wrapper.domain.prompt_dto import SummaryOutputClass, KeywordsOutputClass
from llm_wrapper.domain.response import ResponseData
import llm_wrapper.exceptions.validation_input_data_exceptions as input_exception_message
from llm_wrapper.utils.long_text_processing_utils import get_max_allowed_number_of_tokens
from allms.constants.input_data import IODataConstants
from allms.constants.prompt import PromptConstants
from allms.defaults.general_defaults import GeneralDefaults
from allms.domain.enumerables import AggregationLogicForLongInputData, LanguageModelTask

from allms.defaults.long_text_chain import LongTextChainDefaults
from allms.domain.input_data import InputData
from allms.domain.prompt_dto import SummaryOutputClass, KeywordsOutputClass
from allms.domain.response import ResponseData
import allms.exceptions.validation_input_data_exceptions as input_exception_message
from allms.utils.long_text_processing_utils import get_max_allowed_number_of_tokens

logger = logging.getLogger(__name__)

Expand Down
File renamed without changes.
Original file line number Diff line number Diff line change
@@ -1,11 +1,11 @@
import typing
from asyncio import AbstractEventLoop

from llm_wrapper.defaults.azure_defaults import AzureLlama2Defaults
from llm_wrapper.defaults.general_defaults import GeneralDefaults
from llm_wrapper.domain.configuration import AzureSelfDeployedConfiguration
from llm_wrapper.models.abstract import AbstractModel
from llm_wrapper.models.azure_base import AzureMLOnlineEndpointAsync, ChatModelContentFormatter
from allms.defaults.azure_defaults import AzureLlama2Defaults
from allms.defaults.general_defaults import GeneralDefaults
from allms.domain.configuration import AzureSelfDeployedConfiguration
from allms.models.abstract import AbstractModel
from allms.models.azure_base import AzureMLOnlineEndpointAsync, ChatModelContentFormatter


class AzureLlama2Model(AbstractModel):
Expand Down
Original file line number Diff line number Diff line change
@@ -1,11 +1,11 @@
import typing
from asyncio import AbstractEventLoop

from llm_wrapper.defaults.azure_defaults import AzureMistralAIDefaults
from llm_wrapper.defaults.general_defaults import GeneralDefaults
from llm_wrapper.domain.configuration import AzureSelfDeployedConfiguration
from llm_wrapper.models.abstract import AbstractModel
from llm_wrapper.models.azure_base import AzureMLOnlineEndpointAsync, ChatModelContentFormatter
from allms.defaults.azure_defaults import AzureMistralAIDefaults
from allms.defaults.general_defaults import GeneralDefaults
from allms.domain.configuration import AzureSelfDeployedConfiguration
from allms.models.abstract import AbstractModel
from allms.models.azure_base import AzureMLOnlineEndpointAsync, ChatModelContentFormatter


class AzureMistralModel(AbstractModel):
Expand Down
Original file line number Diff line number Diff line change
Expand Up @@ -3,10 +3,10 @@

from langchain.chat_models import AzureChatOpenAI

from llm_wrapper.defaults.azure_defaults import AzureGptTurboDefaults
from llm_wrapper.defaults.general_defaults import GeneralDefaults
from llm_wrapper.domain.configuration import AzureOpenAIConfiguration
from llm_wrapper.models.abstract import AbstractModel
from allms.defaults.azure_defaults import AzureGptTurboDefaults
from allms.defaults.general_defaults import GeneralDefaults
from allms.domain.configuration import AzureOpenAIConfiguration
from allms.models.abstract import AbstractModel


class AzureOpenAIModel(AbstractModel):
Expand Down
Original file line number Diff line number Diff line change
Expand Up @@ -5,7 +5,7 @@
from langchain_core.outputs import LLMResult, Generation
from pydash import chain

from llm_wrapper.constants.vertex_ai import VertexModelConstants
from allms.constants.vertex_ai import VertexModelConstants

class CustomVertexAI(VertexAI):
async def _agenerate(
Expand Down
Original file line number Diff line number Diff line change
Expand Up @@ -2,11 +2,11 @@
from langchain_community.llms.vertexai import VertexAI
from typing import Optional

from llm_wrapper.defaults.general_defaults import GeneralDefaults
from llm_wrapper.defaults.vertex_ai import GeminiModelDefaults
from llm_wrapper.domain.configuration import VertexAIConfiguration
from llm_wrapper.models.vertexai_base import CustomVertexAI
from llm_wrapper.models.abstract import AbstractModel
from allms.defaults.general_defaults import GeneralDefaults
from allms.defaults.vertex_ai import GeminiModelDefaults
from allms.domain.configuration import VertexAIConfiguration
from allms.models.vertexai_base import CustomVertexAI
from allms.models.abstract import AbstractModel


class VertexAIGeminiModel(AbstractModel):
Expand Down
Original file line number Diff line number Diff line change
Expand Up @@ -2,11 +2,11 @@
from langchain_community.llms.vertexai import VertexAI
from typing import Optional

from llm_wrapper.defaults.general_defaults import GeneralDefaults
from llm_wrapper.defaults.vertex_ai import PalmModelDefaults
from llm_wrapper.domain.configuration import VertexAIConfiguration
from llm_wrapper.models.vertexai_base import CustomVertexAI
from llm_wrapper.models.abstract import AbstractModel
from allms.defaults.general_defaults import GeneralDefaults
from allms.defaults.vertex_ai import PalmModelDefaults
from allms.domain.configuration import VertexAIConfiguration
from allms.models.vertexai_base import CustomVertexAI
from allms.models.abstract import AbstractModel


class VertexAIPalmModel(AbstractModel):
Expand Down
File renamed without changes.
4 changes: 2 additions & 2 deletions llm_wrapper/utils/io_utils.py → allms/utils/io_utils.py
Original file line number Diff line number Diff line change
Expand Up @@ -5,8 +5,8 @@
import fsspec
import pandas as pd

from llm_wrapper.constants.input_data import IODataConstants
from llm_wrapper.domain.input_data import InputData
from allms.constants.input_data import IODataConstants
from allms.domain.input_data import InputData

logger = logging.getLogger(__name__)

Expand Down
File renamed without changes.
Original file line number Diff line number Diff line change
Expand Up @@ -4,7 +4,7 @@
from langchain.base_language import BaseLanguageModel
from langchain.schema import Document

from llm_wrapper.defaults.long_text_chain import LongTextChainDefaults
from allms.defaults.long_text_chain import LongTextChainDefaults


def truncate_text_to_max_size(
Expand Down
4 changes: 2 additions & 2 deletions docs/api/input_output_dataclasses.md
Original file line number Diff line number Diff line change
@@ -1,4 +1,4 @@
## `class llm_wrapper.domain.input_data.InputData` dataclass
## `class allms.domain.input_data.InputData` dataclass
```python
@dataclass
class InputData:
Expand All @@ -12,7 +12,7 @@ class InputData:
- `id` (`str`): Unique identifier. Requests run asynchronously, so responses may not come back in the same
order as the input data; this field lets you match each response to its input.
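The role of the `id` field can be shown with a stdlib-only sketch. This simplified `InputData` and the fake generation coroutine are stand-ins for illustration, not the allms implementation:

```python
import asyncio
import random
from dataclasses import dataclass

# Simplified stand-in for the InputData dataclass documented above.
@dataclass
class InputData:
    input_mappings: dict
    id: str

async def fake_generate(item: InputData) -> tuple:
    # Simulated model latency: completion order need not match input order.
    await asyncio.sleep(random.uniform(0, 0.01))
    return item.id, f"response for {item.id}"

async def run_all(inputs):
    tasks = [asyncio.create_task(fake_generate(item)) for item in inputs]
    results = {}
    for finished in asyncio.as_completed(tasks):  # yields in completion order
        item_id, response = await finished
        results[item_id] = response  # match response back to its input by id
    return results

inputs = [InputData({"text": f"document {i}"}, id=str(i)) for i in range(5)]
by_id = asyncio.run(run_all(inputs))
print(by_id["3"])  # retrievable by id regardless of arrival order
```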

## `class llm_wrapper.domain.response.ResponseData` dataclass
## `class allms.domain.response.ResponseData` dataclass
```python
@dataclass
class ResponseData:
Expand Down
8 changes: 5 additions & 3 deletions docs/api/models/azure_llama2_model.md
Original file line number Diff line number Diff line change
@@ -1,4 +1,4 @@
## `class llm_wrapper.models.AzureLlama2Model` API
## `class allms.models.AzureLlama2Model` API
### Methods
```python
__init__(
Expand Down Expand Up @@ -48,6 +48,7 @@ is not provided, the length of this list is equal 1, and the first element is th

---

## `class allms.domain.configuration.AzureSelfDeployedConfiguration` API
```python
AzureSelfDeployedConfiguration(
api_key: str,
Expand All @@ -63,9 +64,10 @@ AzureSelfDeployedConfiguration(
---

### Example usage

```python
from llm_wrapper.models import AzureLlama2Model
from llm_wrapper.domain.configuration import AzureSelfDeployedConfiguration
from allms.models import AzureLlama2Model
from allms.domain.configuration import AzureSelfDeployedConfiguration

configuration = AzureSelfDeployedConfiguration(
api_key="<AZURE_API_KEY>",
Expand Down
8 changes: 5 additions & 3 deletions docs/api/models/azure_mistral_model.md
Original file line number Diff line number Diff line change
@@ -1,4 +1,4 @@
## `class llm_wrapper.models.AzureMistralModel` API
## `class allms.models.AzureMistralModel` API
### Methods
```python
__init__(
Expand Down Expand Up @@ -48,6 +48,7 @@ is not provided, the length of this list is equal 1, and the first element is th

---

## `class allms.domain.configuration.AzureSelfDeployedConfiguration` API
```python
AzureSelfDeployedConfiguration(
api_key: str,
Expand All @@ -63,9 +64,10 @@ AzureSelfDeployedConfiguration(
---

### Example usage

```python
from llm_wrapper.models import AzureMistralModel
from llm_wrapper.domain.configuration import AzureSelfDeployedConfiguration
from allms.models import AzureMistralModel
from allms.domain.configuration import AzureSelfDeployedConfiguration

configuration = AzureSelfDeployedConfiguration(
api_key="<AZURE_API_KEY>",
Expand Down