0.3.12 #151

Merged
merged 3 commits on Oct 16, 2024
47 changes: 46 additions & 1 deletion .github/workflows/upload-pypi-dev.yml
@@ -1,4 +1,4 @@
name: Upload Python package to PyPI as dev pre-release
name: Upload Python package to PyPI as dev release and build/push Docker image.

on:
  workflow_dispatch:
@@ -39,3 +39,48 @@ jobs:
          git add pyproject.toml
          git commit -m "[fix] bump prerelease version in pyproject.toml"
          git push

      # Wait for PyPI to update
      - name: Wait for PyPI to update
        run: |
          VERSION=$(poetry version --short)
          echo "Checking for llmstudio==$VERSION on PyPI..."
          for i in {1..10}; do
            if python -m pip install llmstudio==${VERSION} --dry-run >/dev/null 2>&1; then
              echo "Package llmstudio==${VERSION} is available on PyPI."
              break
            else
              echo "Package llmstudio==${VERSION} not available yet. Waiting 15 seconds..."
              sleep 15
            fi
            if [ $i -eq 10 ]; then
              echo "Package did not become available in time."
              exit 1
            fi
          done

      # Docker build and push section
      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v2

      - name: Log in to Docker Hub
        uses: docker/login-action@v2
        with:
          username: ${{ secrets.DOCKER_USERNAME }}
          password: ${{ secrets.DOCKER_PASSWORD }}

      - name: Extract version for tagging Docker image
        id: get_version
        run: |
          echo "VERSION=$(poetry version --short)" >> $GITHUB_ENV

      - name: Build and tag Docker image
        run: |
          docker build \
            --build-arg LLMSTUDIO_VERSION=${{ env.VERSION }} \
            -t tensoropsai/llmstudio:${{ env.VERSION }} \
            .

      - name: Push Docker image to Docker Hub
        run: |
          docker push tensoropsai/llmstudio:${{ env.VERSION }}
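To verify a run of this workflow by hand, the same checks it performs can be repeated locally; the version and tag below are placeholders for whatever the run actually published:

```bash
# Placeholder version: substitute the dev release the workflow just published
python -m pip install llmstudio==0.3.12.dev0 --dry-run
docker pull tensoropsai/llmstudio:0.3.12.dev0
```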
56 changes: 55 additions & 1 deletion .github/workflows/upload-pypi.yml
@@ -1,4 +1,4 @@
name: Upload Python package to PyPI
name: Upload Python package to PyPI and build/push Docker images.

on:
  push:
@@ -11,23 +11,77 @@ jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      # Checkout the code
      - name: Checkout code
        uses: actions/checkout@v2

      # Set up Python environment
      - name: Set up Python
        uses: actions/setup-python@v2
        with:
          python-version: "3.x"

      # Install Poetry
      - name: Install Poetry
        run: |
          curl -sSL https://install.python-poetry.org | python3 -

      # Configure Poetry with PyPI token
      - name: Configure Poetry
        run: |
          poetry config pypi-token.pypi ${{ secrets.PYPI_API_TOKEN }}

      # Build and publish package to PyPI
      - name: Build and publish to PyPI
        run: |
          poetry build
          poetry publish

      # Extract the new version number from pyproject.toml
      - name: Extract version for tagging Docker image
        run: |
          echo "VERSION=$(poetry version --short)" >> $GITHUB_ENV

      # Wait for the package to become available on PyPI
      - name: Wait for PyPI to update
        run: |
          echo "Checking for llmstudio==${{ env.VERSION }} on PyPI..."
          for i in {1..10}; do
            if python -m pip install llmstudio==${{ env.VERSION }} --dry-run >/dev/null 2>&1; then
              echo "Package llmstudio==${{ env.VERSION }} is available on PyPI."
              break
            else
              echo "Package llmstudio==${{ env.VERSION }} not available yet. Waiting 15 seconds..."
              sleep 15
            fi
            if [ $i -eq 10 ]; then
              echo "Package did not become available in time."
              exit 1
            fi
          done

      # Set up Docker Buildx
      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v2

      # Log in to Docker Hub
      - name: Log in to Docker Hub
        uses: docker/login-action@v2
        with:
          username: ${{ secrets.DOCKER_USERNAME }}
          password: ${{ secrets.DOCKER_PASSWORD }}

      # Build and tag Docker images with both :latest and :[NEW_VERSION]
      - name: Build and tag Docker images
        run: |
          docker build \
            --build-arg LLMSTUDIO_VERSION=${{ env.VERSION }} \
            -t tensoropsai/llmstudio:latest \
            -t tensoropsai/llmstudio:${{ env.VERSION }} \
            .

      # Push both Docker images to Docker Hub
      - name: Push Docker images to Docker Hub
        run: |
          docker push tensoropsai/llmstudio:${{ env.VERSION }}
          docker push tensoropsai/llmstudio:latest
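Since this workflow tags the image as both `:latest` and the new version, a quick local check that the two tags point at the same image could look like this (the version tag is a placeholder):

```bash
docker pull tensoropsai/llmstudio:latest
docker pull tensoropsai/llmstudio:0.3.12   # placeholder version tag
# Both tags should resolve to the same image ID
docker image inspect --format '{{.Id}}' tensoropsai/llmstudio:latest tensoropsai/llmstudio:0.3.12
```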
80 changes: 80 additions & 0 deletions docs/how-to/build-a-tool-agent.mdx
@@ -0,0 +1,80 @@
This guide outlines how to build a tool-calling agent using LangChain + LLMstudio.

## 1. Set up your tools
Start by defining the tools your agent is going to have access to.
```python
from langchain.tools import tool

@tool
def buy_ticket(destination: str):
    """Use this to buy a ticket"""
    return "Bought ticket number 270924"

@tool
def get_departure(ticket_number: str):
    """Use this to fetch the departure time of a train"""
    return "8:25 AM"

# Gather the tools into the list the agent will receive later
tools = [buy_ticket, get_departure]
```
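Before wiring the tools into an agent, you can sanity-check one on its own; tools created with `@tool` expose LangChain's standard `invoke` interface:

```python
# Quick standalone check of a tool (prints the stubbed ticket confirmation)
print(buy_ticket.invoke({"destination": "Madrid"}))
```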

## 2. Set up your .env
Create a `.env` file at the root of your project with the credentials for the providers you want to use.

<Tabs>
<Tab title="OpenAI">
```
OPENAI_API_KEY="YOUR_API_KEY"
```
</Tab>
<Tab title="VertexAI">
```
GOOGLE_API_KEY="YOUR_API_KEY"
```
</Tab>
<Tab title="Azure">
```
AZURE_BASE_URL="YOUR_MODEL_ENDPOINT"
AZURE_API_KEY="YOUR_API_KEY"
```
</Tab>
</Tabs>
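Depending on how you run your code, the variables in `.env` may not be picked up automatically. One way to load them explicitly is with `python-dotenv` (an extra dependency, assumed installed via `pip install python-dotenv`):

```python
from dotenv import load_dotenv

load_dotenv()  # reads .env from the project root into the process environment
```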

## 3. Set up your model using LLMstudio
Use LLMstudio to choose the provider and model you want to use. The snippets below assume `ChatLLMstudio` is already imported (typically `from llmstudio.langchain import ChatLLMstudio`, though the path may vary with your LLMstudio version).
<Tabs>
<Tab title="OpenAI">
```python
model = ChatLLMstudio(model_id='openai/gpt-4o')
```
</Tab>
<Tab title="VertexAI">
```python
model = ChatLLMstudio(model_id='vertexai/gemini-1.5-flash')
```
</Tab>
<Tab title="Azure">
```python
model = ChatLLMstudio(model_id='azure/Meta-Llama-3.1-70B-Instruct')
```
</Tab>
</Tabs>

## 4. Build the agent
Set up your agent and agent executor using LangChain.

```python
from langchain import hub
from langchain.agents import AgentExecutor, create_openai_tools_agent

prompt = hub.pull("hwchase17/openai-tools-agent")
agent = create_openai_tools_agent(model, tools, prompt)
agent_executor = AgentExecutor(agent=agent, tools=tools)

input = "Can you buy me a ticket to Madrid?"

# Run the agent
response = agent_executor.invoke(
    {
        "input": input,
    }
)
print(response["output"])
```
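To see the agent route between the two tools, you can also ask about a departure time (the exact model response will vary; this prompt is just an illustration):

```python
response = agent_executor.invoke(
    {"input": "What time does the train with ticket 270924 leave?"}
)
print(response["output"])
```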
156 changes: 156 additions & 0 deletions docs/how-to/deploy-on-gke/deploy-on-google-kubernetes-engine.mdx
@@ -0,0 +1,156 @@
Learn how to deploy LLMstudio as a containerized application on Google Kubernetes Engine and make calls from a local repository.


## Prerequisites
To follow this guide you need the following set up:

- A **project** on Google Cloud Platform.
- The **Kubernetes Engine** API enabled on your project.
- The **Kubernetes Engine Admin** role for the user following the guide.

## Deploy LLMstudio

This example demonstrates a public deployment. For a private service accessible only within your enterprise infrastructure, deploy it within your own Virtual Private Cloud (VPC).
<Steps>
<Step title="Navigate to Kubernetes Engine">
Begin by navigating to the Kubernetes Engine page.
</Step>
<Step title="Select Deploy">
Go to **Workloads** and **Create a new Deployment**.
<Frame>
<img src="how-to/deploy-on-gke/images/step-2.png" />
</Frame>
</Step>
<Step title="Name Your Deployment">
Name your deployment. We will call the one in this guide **llmstudio-on-gcp**.
<Frame>
<img src="how-to/deploy-on-gke/images/step-3.png" />
</Frame>
</Step>
<Step title="Select Your Cluster">
Choose between **creating a new cluster** or **using an existing cluster**.
For this guide, we will create a new cluster and use the default region.
<Frame>
<img src="how-to/deploy-on-gke/images/step-4.png" />
</Frame>
</Step>
<Step title="Proceed to Container Details">
Once you are done with the **Deployment configuration**, proceed to **Container details**.
</Step>
<Step title="Set Image Path">
In the new container section, select **Existing container image**.


Copy the path to LLMstudio's image available on Docker Hub.
```bash Image Path
tensoropsai/llmstudio:latest
```
Set it as the **Image path** to your container.
<Frame>
<img src="how-to/deploy-on-gke/images/step-6.png" />
</Frame>
</Step>
<Step title="Set Environment Variables">
Configure the following mandatory environment variables:
| Environment Variable | Value |
|----------------------------|-----------|
| `LLMSTUDIO_ENGINE_HOST` | 0.0.0.0 |
| `LLMSTUDIO_ENGINE_PORT` | 8001 |
| `LLMSTUDIO_TRACKING_HOST` | 0.0.0.0 |
| `LLMSTUDIO_TRACKING_PORT` | 8002 |

Additionally, set the `GOOGLE_API_KEY` environment variable to enable calls to Google's Gemini models.
<Tip>Refer to **SDK/LLM/Providers** for instructions on setting up other providers.</Tip>

<Frame>
<img src="how-to/deploy-on-gke/images/step-7.png" />
</Frame>
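If you manage workloads as manifests rather than through the console, the same configuration would sit in the container spec roughly like this (an illustrative fragment, not something the console flow above produces):

```yaml
# Hypothetical container spec fragment mirroring the table above
containers:
  - name: llmstudio
    image: tensoropsai/llmstudio:latest
    env:
      - name: LLMSTUDIO_ENGINE_HOST
        value: "0.0.0.0"
      - name: LLMSTUDIO_ENGINE_PORT
        value: "8001"
      - name: LLMSTUDIO_TRACKING_HOST
        value: "0.0.0.0"
      - name: LLMSTUDIO_TRACKING_PORT
        value: "8002"
      - name: GOOGLE_API_KEY
        valueFrom:
          secretKeyRef:  # storing the key in a Secret is an assumption, not a console step
            name: llmstudio-secrets
            key: google-api-key
```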

</Step>
<Step title="Proceed to Expose (Optional)">
After configuring your container, proceed to **Expose (Optional)**.
</Step>
<Step title="Expose Ports">
Select **Expose deployment as a new service** and leave the first item as is.

<Frame>
<img src="how-to/deploy-on-gke/images/step-9-1.png" />
</Frame>

Add two other items, and expose the ports defined in the **Set Environment Variables** step.

<Frame>
<img src="how-to/deploy-on-gke/images/step-9-2.png" />
</Frame>
</Step>
<Step title="Deploy">
After setting up and exposing the ports, press **Deploy**.
<Check>You have successfully deployed **LLMstudio on Google Cloud Platform**!</Check>
</Step>

</Steps>
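Once deployed, you can also confirm the exposed services from a terminal, assuming `kubectl` is installed and configured for the new cluster:

```bash
# Lists services with their external IPs and ports; look for the llmstudio-on-gcp service
kubectl get services
```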

## Make a Call
Now let's make a call to our LLMstudio instance on GCP!



<Steps>
<Step title="Set Up Project">
Set up a simple project with these two files:
1. `calls.ipynb`
2. `.env`
</Step>

<Step title="Set Up Files">
<Tabs>
<Tab title=".env">

Go to your newly deployed **Workload**, scroll to the **Exposing services** section, and take note of the Host of your endpoint.
<Frame>
<img src="how-to/deploy-on-gke/images/step-env.png" />
</Frame>

Create your `.env` file with the following:

```env .env
LLMSTUDIO_ENGINE_HOST="YOUR_HOST"
LLMSTUDIO_ENGINE_PORT="8001"
LLMSTUDIO_TRACKING_HOST="YOUR_HOST"
LLMSTUDIO_TRACKING_PORT="8002"
```

<Check>You are done setting up your **.env** file!</Check>

</Tab>
<Tab title="calls.ipynb">
Start by importing llmstudio:
```python 1st cell
from llmstudio import LLM
```

Set up your LLM. We will be using `gemini-1.5-flash` for this guide.
```python 2nd cell
llm = LLM('vertexai/gemini-1.5-flash')
```

Chat with your model.
```python 3rd cell
response = llm.chat('Hello!')
print(response.chat_output)
```

<Frame>
<img src="how-to/deploy-on-gke/images/step-llmstudio-call.png" />
</Frame>


<Check>You are done calling LLMstudio on GCP!</Check>

</Tab>

</Tabs>
</Step>


</Steps>
Binary file added docs/how-to/deploy-on-gke/images/step-2.png
Binary file added docs/how-to/deploy-on-gke/images/step-3.png
Binary file added docs/how-to/deploy-on-gke/images/step-4.png
Binary file added docs/how-to/deploy-on-gke/images/step-6.png
Binary file added docs/how-to/deploy-on-gke/images/step-7-1.png
Binary file added docs/how-to/deploy-on-gke/images/step-7-2.png
Binary file added docs/how-to/deploy-on-gke/images/step-9-1.png
Binary file added docs/how-to/deploy-on-gke/images/step-9-2.png
Binary file added docs/how-to/deploy-on-gke/images/step-env.png