---
title: "Experimentation"
---

import Tabs from "@theme/Tabs";
import TabItem from "@theme/TabItem";
import constants from "@site/core/tabConstants";

# Experimentation

Track and manage experiments on your LLMs and applications to compare performance, test changes, and validate improvements.

## What is an Experiment?

An *Experiment* is a collection of *evaluation runs*. Each *evaluation run* consists of the *dataset* and *metrics* for that run. For example, say you have a dataset of customer support chatbot questions, answers, and context. For a *Customer Support Experiment*, you can run this dataset against AutoEval's *relevancy*, *toxicity*, and *faithfulness* metrics as one *evaluation run*. Next, you can update the chatbot's model (say, to Gemini) and generate new answers and context for the same set of questions. You can then save that as another *evaluation run* and compare the results to determine whether the model change actually improved performance.

*Experiments* enable you to confidently make iterative changes to your LLM application in a structured and organized way.
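
The sketch below illustrates that workflow with the AutoEval client covered in the usage guide that follows; the API key and dataset IDs are placeholders, and only the built-in metrics shown later in this guide are used:

```python
from lastmile.lib.auto_eval import AutoEval, BuiltinMetrics

client = AutoEval(api_token="YOUR_API_KEY_HERE")

# One experiment groups the evaluation runs you want to compare
experiment = client.create_experiment(
    name="Customer Support Experiment",
    description="Compare chatbot answers before and after a model change"
)

metrics = [BuiltinMetrics.RELEVANCE, BuiltinMetrics.FAITHFULNESS]

# Evaluation run 1: answers generated by the current model
baseline_results = client.evaluate_dataset(
    dataset_id="BASELINE_DATASET_ID",
    metrics=metrics,
    experiment_id=experiment.id,
)

# Evaluation run 2: answers regenerated with the new model (e.g. Gemini)
# for the same questions, uploaded as a separate dataset
candidate_results = client.evaluate_dataset(
    dataset_id="GEMINI_DATASET_ID",
    metrics=metrics,
    experiment_id=experiment.id,
)
```

Because both runs belong to the same experiment, you can compare their metric scores side by side in the Experiments Console.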

## What types of changes can Experiments measure?
Anything that influences the LLM application's performance is measurable through an experiment, such as:
- Updates to the **LLM model**, such as a newer model version or training cutoff
- Modifying the **retrieval strategy** for a **RAG system**
- Adjusting **system prompts** for an agent
- And more

## Usage Guide
This guide walks through the process of setting up and running experiments using AutoEval, including:
1. **Setting up the API key** and **creating a project**
2. **Preparing and uploading a dataset**
3. **Creating an Experiment**
4. **Evaluating the dataset** against default metrics, logging results, and iterating on changes
5. **Visualizing the results** in the Experiments Console

---

### 1. Set Up AutoEval Client

Before running experiments, ensure you have the latest version of AutoEval:

```bash
pip install lastmile --upgrade
```

#### Authenticate with the LastMile AI API

To interact with the **LastMile AI API**, set your API key as an environment variable.

📌 **Tip:** If you don't have an API key yet, visit the **LastMile AI dashboard**, navigate to the **API section** on the sidebar, and copy your key.

```python
import os

# Read the API key from the LASTMILE_API_KEY environment variable
api_token = os.environ.get("LASTMILE_API_KEY")

if not api_token:
    print("Error: Please set your API key in the environment variable LASTMILE_API_KEY")
else:
    print("✓ API key successfully configured!")
```

#### Initialize the AutoEval Client
Once authenticated, initialize the **AutoEval client**:

```python
from lastmile.lib.auto_eval import AutoEval
client = AutoEval(api_token=api_token) # Optionally set project_id to scope to a specific project
```

---

### 2. Create a Project or Select an Existing Project

A **Project** is the container that organizes your **Experiments, Evaluation runs, and Datasets**. It typically corresponds to the **AI initiative, application, or use case** you’re building.

Projects help keep evaluations structured, especially when managing multiple experiments across different AI models or applications. You can create new projects or use existing ones.

To create a new project programmatically, use:

```python
project = client.create_project(
    name="Example Customer Agent Project",
    description="Example project to evaluate customer support agents"
)

client.project_id = project.id
```
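
If you already have a project, you can point the client at it instead of creating a new one; the ID below is a placeholder for an existing project's ID:

```python
# Scope the client to an existing project by its ID (placeholder value)
client.project_id = "YOUR_EXISTING_PROJECT_ID"
```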

---

### 3. Prepare and Upload Your Dataset

You can either use an existing dataset or upload a new dataset to run an evaluation within your new experiment. For an existing dataset, you can skip to Step 4.

#### Uploading a new dataset

LastMile AI AutoEval expects a **CSV file** with the following columns:
- **`input`**: The user's query or input text
- **`output`**: The assistant's response to the user's query
- **`ground_truth`** *(optional)*: The correct or expected response for comparison. This can also be the context for metrics like *faithfulness*

Uploading this dataset allows you to evaluate how well the assistant's responses align with the **ground truth** using LastMile AI’s evaluation metrics.
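
If you're assembling this CSV yourself, a minimal sketch with pandas might look like the following; the rows and file name are illustrative, and only the column names come from the schema above:

```python
import pandas as pd

# Hypothetical rows; column names follow the schema AutoEval expects
rows = [
    {
        "input": "How do I reset my password?",
        "output": "You can reset it from Settings > Account > Reset password.",
        "ground_truth": "Passwords are reset from the Account tab under Settings.",
    },
    {
        "input": "What is your refund policy?",
        "output": "We offer full refunds within 30 days of purchase.",
        "ground_truth": "Refunds are available for 30 days after purchase.",
    },
]

pd.DataFrame(rows).to_csv("customer_support_eval.csv", index=False)
```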

To upload your dataset, use the following code:

```python
dataset_csv_path = "ADD_YOUR_DATASET_HERE"

dataset_id = client.upload_dataset(
    file_path=dataset_csv_path,
    name="NAME_OF_YOUR_DATASET",
    description="DESCRIPTION_OF_DATASET"
)

print(f"Dataset created with ID: {dataset_id}")
```

---

### 4. Create an Experiment

To create an experiment, use the following code:

:::info
Use the experiment's metadata to track important information (or parameters) about the application being tested, such as the LLM being used, its parameters, the prompt version, the dataset version, and the application name.
:::

```python
experiment = client.create_experiment(
    name="EXPERIMENT_NAME",
    description="EXPERIMENT_DESCRIPTION",
    metadata={
        "model": "gpt-4o",
        "temperature": 0.8,
        "misc": {
            "dataset_version": "0.1.1",
            "app": "customer-support"
        }
    }
)
```

---

### 5. Evaluate the Dataset Against Built-in Metrics

:::info
In addition to the built-in metrics shown here, you can specify custom and fine-tuned metrics to evaluate the dataset against.
:::

```python
from lastmile.lib.auto_eval import BuiltinMetrics

default_metrics = [
    BuiltinMetrics.FAITHFULNESS,
    BuiltinMetrics.RELEVANCE,
]

print("Evaluation job kicked off")

evaluation_results = client.evaluate_dataset(
    dataset_id=dataset_id,
    metrics=default_metrics,
    experiment_id=experiment.id,
    metadata={"extras": "Base metric tests"}
)

print("Evaluation Results:")
evaluation_results.head(10)
```
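
For a quick local look before opening the console, and assuming the returned results object behaves like a pandas DataFrame (consistent with the `.head(10)` call above), you can summarize or save it:

```python
# Assumes evaluation_results is a pandas DataFrame, as suggested by .head(10) above
print(evaluation_results.describe())  # summary statistics for numeric metric columns
evaluation_results.to_csv("evaluation_results.csv", index=False)  # keep a local copy
```
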
---

### 6. Visualize in the Experiments Console

📊 **Explore your results in the AutoEval UI:**
- 🔬 **Experiments Overview:** [View all experiments](https://lastmileai.dev/evaluations?view=experiments)
- 📈 **Evaluation Runs:** [See all evaluation runs](https://lastmileai.dev/evaluations?view=all_runs)
- 🏢 **Project Dashboard:** [Manage projects and experiments](https://lastmileai.dev/dashboard)
- 📂 **Dataset Library:** [Browse and manage uploaded datasets](https://lastmileai.dev/datasets)

**Check out the [full cookbook](https://github.com/lastmile-ai/lastmile-docs/blob/main/cookbook/AutoEval_Experiments.ipynb) for expanded details on this functionality.**
