0.3.12 #140

Closed. Wants to merge 23 commits.
Commits
cd075fd
[fix] bump prerelease version in pyproject.toml
actions-user Oct 10, 2024
272794e
[fix] bump prerelease version in pyproject.toml
actions-user Oct 10, 2024
e8f8444
[fix] bump prerelease version in pyproject.toml
actions-user Oct 10, 2024
ec89d7d
Fix docker image (#138)
diogoazevedo15 Oct 10, 2024
111c54f
[fix] bump prerelease version in pyproject.toml
actions-user Oct 10, 2024
a555c40
Fix docker image (#139)
diogoazevedo15 Oct 10, 2024
2f0f482
[fix] bump prerelease version in pyproject.toml
actions-user Oct 10, 2024
f06f595
Add llmstudio docs (#136)
diogoazevedo15 Oct 15, 2024
57ea379
[fix] bump prerelease version in pyproject.toml
actions-user Oct 15, 2024
54c0926
[fix] bump prerelease version in pyproject.toml
actions-user Oct 15, 2024
7ffbf00
[fix] bump prerelease version in pyproject.toml
actions-user Oct 15, 2024
59a0bf3
Feat/add docker build and push to workflows (#142)
diogoazevedo15 Oct 15, 2024
7323ea3
[fix] bump prerelease version in pyproject.toml
actions-user Oct 15, 2024
78e48ae
Update upload-pypi-dev.yml
diogoazevedo15 Oct 15, 2024
0193fb3
[fix] bump prerelease version in pyproject.toml
actions-user Oct 15, 2024
f6a9c28
Update upload-pypi-dev.yml
diogoazevedo15 Oct 15, 2024
1f1a71f
Merge branch 'develop' of https://github.com/TensorOpsAI/LLMstudio in…
diogoazevedo15 Oct 15, 2024
d1d7276
[fix] bump prerelease version in pyproject.toml
actions-user Oct 15, 2024
620c144
Update deploy-on-google-cloud-platform.mdx
diogoazevedo15 Oct 15, 2024
d7e47c7
Merge branch 'develop' of https://github.com/TensorOpsAI/LLMstudio in…
diogoazevedo15 Oct 15, 2024
367babe
Fix lint issues
diogoazevedo15 Oct 15, 2024
a52cb6d
Update upload-pypi.yml (#144)
diogoazevedo15 Oct 15, 2024
18e591e
Update azure.py
diogoazevedo15 Oct 16, 2024
47 changes: 46 additions & 1 deletion .github/workflows/upload-pypi-dev.yml
@@ -1,4 +1,4 @@
-name: Upload Python package to PyPI as dev pre-release
+name: Upload Python package to PyPI as dev release, build and push Docker image to hub.

on:
  workflow_dispatch:

@@ -39,3 +39,48 @@ jobs:
          git add pyproject.toml
          git commit -m "[fix] bump prerelease version in pyproject.toml"
          git push

      # Wait for PyPI to update
      - name: Wait for PyPI to update
        run: |
          VERSION=$(poetry version --short)
          echo "Checking for llmstudio==$VERSION on PyPI..."
          for i in {1..10}; do
            if python -m pip install llmstudio==${VERSION} --dry-run >/dev/null 2>&1; then
              echo "Package llmstudio==${VERSION} is available on PyPI."
              break
            else
              echo "Package llmstudio==${VERSION} not available yet. Waiting 15 seconds..."
              sleep 15
            fi
            if [ $i -eq 10 ]; then
              echo "Package did not become available in time."
              exit 1
            fi
          done

      # Docker build and push section
      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v2

      - name: Log in to Docker Hub
        uses: docker/login-action@v2
        with:
          username: ${{ secrets.DOCKER_USERNAME }}
          password: ${{ secrets.DOCKER_PASSWORD }}

      - name: Extract version for tagging Docker image
        id: get_version
        run: |
          echo "VERSION=$(poetry version --short)" >> $GITHUB_ENV

      - name: Build and tag Docker image
        run: |
          docker build \
            --build-arg LLMSTUDIO_VERSION=${{ env.VERSION }} \
            -t tensoropsai/llmstudio:${{ env.VERSION }} \
            .

      - name: Push Docker image to Docker Hub
        run: |
          docker push tensoropsai/llmstudio:${{ env.VERSION }}
56 changes: 55 additions & 1 deletion .github/workflows/upload-pypi.yml
@@ -1,4 +1,4 @@
-name: Upload Python package to PyPI
+name: Upload Python package to PyPI and build/push Docker images

on:
  push:

@@ -11,23 +11,77 @@ jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      # Checkout the code
      - name: Checkout code
        uses: actions/checkout@v2

      # Set up Python environment
      - name: Set up Python
        uses: actions/setup-python@v2
        with:
          python-version: "3.x"

      # Install Poetry
      - name: Install Poetry
        run: |
          curl -sSL https://install.python-poetry.org | python3 -

      # Configure Poetry with PyPI token
      - name: Configure Poetry
        run: |
          poetry config pypi-token.pypi ${{ secrets.PYPI_API_TOKEN }}

      # Build and publish package to PyPI
      - name: Build and publish to PyPI
        run: |
          poetry build
          poetry publish

      # Extract the new version number from pyproject.toml
      - name: Extract version for tagging Docker image
        run: |
          echo "VERSION=$(poetry version --short)" >> $GITHUB_ENV

      # Wait for the package to become available on PyPI
      - name: Wait for PyPI to update
        run: |
          echo "Checking for llmstudio==${{ env.VERSION }} on PyPI..."
          for i in {1..10}; do
            if python -m pip install llmstudio==${{ env.VERSION }} --dry-run >/dev/null 2>&1; then
              echo "Package llmstudio==${{ env.VERSION }} is available on PyPI."
              break
            else
              echo "Package llmstudio==${{ env.VERSION }} not available yet. Waiting 15 seconds..."
              sleep 15
            fi
            if [ $i -eq 10 ]; then
              echo "Package did not become available in time."
              exit 1
            fi
          done

      # Set up Docker Buildx
      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v2

      # Log in to Docker Hub
      - name: Log in to Docker Hub
        uses: docker/login-action@v2
        with:
          username: ${{ secrets.DOCKER_USERNAME }}
          password: ${{ secrets.DOCKER_PASSWORD }}

      # Build and tag Docker images with both :latest and :[NEW_VERSION]
      - name: Build and tag Docker images
        run: |
          docker build \
            --build-arg LLMSTUDIO_VERSION=${{ env.VERSION }} \
            -t tensoropsai/llmstudio:latest \
            -t tensoropsai/llmstudio:${{ env.VERSION }} \
            .

      # Push both Docker images to Docker Hub
      - name: Push Docker images to Docker Hub
        run: |
          docker push tensoropsai/llmstudio:${{ env.VERSION }}
          docker push tensoropsai/llmstudio:latest
17 changes: 17 additions & 0 deletions Dockerfile
@@ -0,0 +1,17 @@
# docker/Dockerfile

FROM python:3.11-slim
ENV PYTHONUNBUFFERED=1

# Install tools
RUN apt-get clean && apt-get update

# Install llmstudio
ARG LLMSTUDIO_VERSION
RUN pip install llmstudio==${LLMSTUDIO_VERSION}
RUN pip install psycopg2-binary

# Expose Ports
EXPOSE 8001 8002

CMD ["llmstudio", "server"]
83 changes: 83 additions & 0 deletions docs/how-to/build-a-tool-agent.mdx
@@ -0,0 +1,83 @@
This guide outlines how to build a tool-calling agent using LangChain + LLMstudio.

## 1. Set up your tools
Start by defining the tools your agent is going to have access to.
```python
from langchain.tools import tool

@tool
def buy_ticket(destination: str):
"""Use this to buy a ticket"""
return "Bought ticket number 270924"

@tool
def get_departure(ticket_number: str):
"""Use this to fetch the departure time of a train"""
return "8:25 AM"
```

## 2. Set up your .env
Create a `.env` file at the root of your project with the credentials for the providers you want to use.

<Tabs>
<Tab title="OpenAI">
```
OPENAI_API_KEY="YOUR_API_KEY"
```
</Tab>
<Tab title="VertexAI">
```
GOOGLE_API_KEY="YOUR_API_KEY"
```
</Tab>
<Tab title="Azure">
```
AZURE_BASE_URL="YOUR_MODEL_ENDPOINT"
AZURE_API_KEY="YOUR_API_KEY"
```
</Tab>
</Tabs>

## 3. Set up your model using LLMstudio
Use LLMstudio to choose the provider and model you want to use.
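Before these snippets will run, `ChatLLMstudio` needs to be imported. A minimal sketch, assuming the LangChain wrapper is exposed under `llmstudio.langchain` (check your installed version for the exact path):

```python
# Assumed import path for the LangChain chat wrapper shipped with llmstudio;
# adjust if your installed version exposes it elsewhere.
from llmstudio.langchain import ChatLLMstudio
```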
<Tabs>
<Tab title="OpenAI">
```python
model = ChatLLMstudio(model_id='openai/gpt-4o')
```
</Tab>
<Tab title="VertexAI">
```python
model = ChatLLMstudio(model_id='vertexai/gemini-1.5-flash')
```
</Tab>
<Tab title="Azure">
```python
model = ChatLLMstudio(model_id='azure/Meta-Llama-3.1-70B-Instruct')
```
</Tab>
</Tabs>

## 4. Build the agent
Set up your agent and agent executor using LangChain.

```python
from langchain import hub
from langchain.agents import AgentExecutor, create_openai_tools_agent

# Collect the tools defined in step 1.
tools = [buy_ticket, get_departure]

prompt = hub.pull("hwchase17/openai-tools-agent")
agent = create_openai_tools_agent(model, tools, prompt)
agent_executor = AgentExecutor(agent=agent, tools=tools)

user_input = "Can you buy me a ticket to Madrid?"

agent_executor.invoke({"input": user_input})
```
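`AgentExecutor.invoke` returns a dict echoing your input plus the agent's final answer under the `output` key. A minimal sketch of capturing and printing that answer (the ticket number comes from the `buy_ticket` stub defined in step 1):

```python
# Capture the result and print the agent's final answer.
result = agent_executor.invoke({"input": user_input})
print(result["output"])
```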



157 changes: 157 additions & 0 deletions docs/how-to/deploy-on-gcp/deploy-on-google-cloud-platform.mdx
@@ -0,0 +1,157 @@
Learn how to deploy LLMstudio as a containerized application on Google Kubernetes Engine and make calls from a local repository.


## Prerequisites
To follow this guide, you need the following set up:

- A **project** on Google Cloud Platform.
- The **Kubernetes Engine** API enabled on your project.
- The **Kubernetes Engine Admin** role for the user performing the guide.

## Deploy LLMstudio

This example demonstrates a public deployment. For a private service accessible only within your enterprise infrastructure, deploy it within your own Virtual Private Cloud (VPC).
<Steps>
<Step title="Navigate to Kubernetes Engine">
Begin by navigating to the Kubernetes Engine page.
</Step>
<Step title="Select Deploy">
Go to **Workloads** and **Create a new Deployment**.
<Frame>
<img src="how-to/deploy-on-gcp/step-2.png" />
</Frame>
</Step>
<Step title="Name Your Deployment">
Name your deployment. We will call the one in this guide **llmstudio-on-gcp**.
<Frame>
<img src="how-to/deploy-on-gcp/step-3.png" />
</Frame>
</Step>
<Step title="Select Your Cluster">
Choose between **creating a new cluster** or **using an existing cluster**.
For this guide, we will create a new cluster and use the default region.
<Frame>
<img src="how-to/deploy-on-gcp/step-4.png" />
</Frame>
</Step>
<Step title="Proceed to Container Details">
Once done with the **Deployment configuration**, proceed to **Container details**.
</Step>
<Step title="Set Image Path">
In the new container section, select **Existing container image**.


Copy the path to LLMstudio's image available on Docker Hub.
```bash Image Path
tensoropsai/llmstudio:latest
```
Set it as the **Image path** to your container.
<Frame>
<img src="how-to/deploy-on-gcp/step-6.png" />
</Frame>
</Step>
<Step title="Set Environment Variables">
Configure the following mandatory environment variables:
| Environment Variable | Value |
|----------------------------|-----------|
| `LLMSTUDIO_ENGINE_HOST` | 0.0.0.0 |
| `LLMSTUDIO_ENGINE_PORT` | 8001 |
| `LLMSTUDIO_TRACKING_HOST` | 0.0.0.0 |
| `LLMSTUDIO_TRACKING_PORT` | 8002 |

Additionally, set the `GOOGLE_API_KEY` environment variable to enable calls to Google's Gemini models.
<Tip>Refer to **SDK/LLM/Providers** for instructions on setting up other providers.</Tip>

<Frame>
<img src="how-to/deploy-on-gcp/step-7.png" />
</Frame>

</Step>
<Step title="Proceed to Expose (Optional)">
After configuring your container, proceed to **Expose (Optional)**.
</Step>
<Step title="Expose Ports">
Select **Expose deployment as a new service** and leave the first item as is.

<Frame>
<img src="how-to/deploy-on-gcp/step-9-1.png" />
</Frame>

Add two more items and expose the ports defined in the **Set Environment Variables** step.

<Frame>
<img src="how-to/deploy-on-gcp/step-9-2.png" />
</Frame>
</Step>
<Step title="Deploy">
After setting up and exposing the ports, press **Deploy**.
<Check>You have successfully deployed **LLMstudio on Google Cloud Platform**!</Check>
</Step>

</Steps>

## Make a Call
Now let's make a call to our LLMstudio instance on GCP!



<Steps>
<Step title="Set Up Project">
Set up a simple project with these two files:
1. `simple-call.ipynb`
2. `.env`
</Step>

<Step title="Set Up Files">
<Tabs>
<Tab title=".env">

Go to your newly deployed **Workload**, scroll to the **Exposing services** section, and take note of the Host of your endpoint.
<Frame>
<img src="how-to/deploy-on-gcp/step-env.png" />
</Frame>

Create your `.env` file with the following:

```env .env
LLMSTUDIO_ENGINE_HOST = "YOUR_HOST"
LLMSTUDIO_ENGINE_PORT = "8001"
LLMSTUDIO_TRACKING_HOST = "YOUR_HOST"
LLMSTUDIO_TRACKING_PORT = "8002"
```
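LLMstudio reads these values from the environment. If your notebook kernel does not load `.env` files automatically, one option is python-dotenv (an assumed extra dependency, installed with `pip install python-dotenv`):

```python
# Load the .env file into the process environment before importing llmstudio.
from dotenv import load_dotenv

load_dotenv()  # reads .env from the current working directory
```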

<Check>You are done setting up your **.env** file!</Check>

</Tab>
<Tab title="simple-call.ipynb">
Start by importing llmstudio:
```python 1st cell
from llmstudio import LLM
```

Set up your LLM. We will be using `gemini-1.5-flash` for this guide.
```python 2nd cell
llm = LLM('vertexai/gemini-1.5-flash')
```

Chat with your model.
```python 3rd cell
response = llm.chat('Hello!')
print(response.chat_output)
```

<Frame>
<img src="how-to/deploy-on-gcp/step-llmstudio-call.png" />
</Frame>


<Check>You are done calling LLMstudio on GCP!</Check>

</Tab>

</Tabs>
</Step>


</Steps>

Binary file added docs/how-to/deploy-on-gcp/step-2.png
Binary file added docs/how-to/deploy-on-gcp/step-3.png
Binary file added docs/how-to/deploy-on-gcp/step-4.png
Binary file added docs/how-to/deploy-on-gcp/step-6.png
Binary file added docs/how-to/deploy-on-gcp/step-7-1.png
Binary file added docs/how-to/deploy-on-gcp/step-7.png
Binary file added docs/how-to/deploy-on-gcp/step-9-1.png
Binary file added docs/how-to/deploy-on-gcp/step-9-2.png
Binary file added docs/how-to/deploy-on-gcp/step-9.png
Binary file added docs/how-to/deploy-on-gcp/step-env.png