diff --git a/aiconfig-docs/docs/getting-started.md b/aiconfig-docs/docs/getting-started.md
index 5264d85c7..abdba37d8 100644
--- a/aiconfig-docs/docs/getting-started.md
+++ b/aiconfig-docs/docs/getting-started.md
@@ -8,79 +8,42 @@ import constants from '@site/core/tabConstants';
# Getting Started
-AIConfig saves prompts, models and model parameters as source control friendly configs. This allows you to iterate on prompts and model parameters _separately from your application code_.
+AIConfig saves prompts, models, and model parameters as source-control-friendly configs. This allows you to iterate on prompts and model parameters _separately from your application code_. Please read [AIConfig Basics](/docs/basics) to understand the motivation behind storing prompts and model parameters as configs.
-:::tip
-Please read [AIConfig Basics](/docs/basics) to understand the motivation behind storing prompts and model parameters as configs.
-:::
-
-## Installation
+**In this tutorial, we will create a customizable NYC travel itinerary using AIConfig.**
-The [`aiconfig` file format](/docs/overview/ai-config-format) is best used with the AIConfig SDK. To install the SDK, use your favorite package manager:
+Resources: [Tutorial Source Code](https://github.com/lastmile-ai/aiconfig/tree/main/cookbooks/Getting-Started) | [Video Tutorial (shown below)](https://www.youtube.com/watch?v=X_Z-M2ZcpjA)
-#### Python
+
-
-
+### 1. Install
-```bash
-$ pip install --user python-aiconfig
-```
+The [`aiconfig` file format](/docs/overview/ai-config-format) is best used with the AIConfig SDK. Install the SDK with any of these package managers:
-
-
+
+
```bash
-$ poetry add python-aiconfig
+$ pip3 install python-aiconfig
+# or using poetry: poetry add python-aiconfig
```
-
-
-
-#### Node.js (TypeScript)
-
-
-
+
```bash
$ npm install aiconfig
+# or using yarn: yarn add aiconfig
```
-
-
-```bash
-$ yarn add aiconfig
-```
-
-
-
-:::caution
-Make sure to specify the API keys (such as `OPENAI_API_KEY`) in your environment before proceeding.
-:::
-
-## Quickstart
+### 2. Create an AIConfig
-In this quickstart, you will create a customizable NYC travel itinerary using `aiconfig`. We will start with a pre-built AIConfig that we generated from this [AI Workbook](https://lastmileai.dev/workbooks/clooqs3p200kkpe53u6n2rhr9).
+This tutorial starts with a pre-built AIConfig, `travel.aiconfig.json`. Download it [here](https://github.com/lastmile-ai/aiconfig/blob/main/cookbooks/Getting-Started/travel.aiconfig.json).
-:::tip
-Clone the tutorial code used below from [**here**](https://github.com/lastmile-ai/aiconfig/tree/main/cookbooks/Getting-Started)
-:::
-
-
-
-
-
-### 1. Download the AIConfig.
-
-This AIConfig `travel.aiconfig.json` contains a prompt chain to get a list of travel activities from an LLM and then customize the activities based on user preferences (defined as parameters of the prompt). It also contains the specific models and model parameters for the LLMs.
-
-Download AIConfig [here](https://github.com/lastmile-ai/aiconfig/blob/main/cookbooks/Getting-Started/travel.aiconfig.json).
+This AIConfig was generated from an [AI Workbook](https://lastmileai.dev/workbooks/clooqs3p200kkpe53u6n2rhr9), a notebook editor for prototyping prompt chains. It contains a prompt chain that gets a list of travel activities from an LLM and then customizes the activities based on user preferences (defined as parameters of the prompt). It also contains the specific models and model parameters for the LLMs.
`travel.aiconfig.json`
@@ -126,29 +89,26 @@ Download AIConfig [here](https://github.com/lastmile-ai/aiconfig/blob/main/cookb
-:::tip
-Don't worry if you don't understand all parts of this yet, we'll go over it in steps. We will also cover a prompt editor ([AI Workbooks](#ai-workbook-playground)) to help you create AIConfigs visually.
+You can create AIConfigs visually using AI Workbooks; we are also releasing a local editor soon for creating and editing AIConfigs!
+
+:::caution
+You need to set up your [OpenAI API key](https://platform.openai.com/api-keys) in your environment before proceeding. In your terminal, set the environment variable: `export OPENAI_API_KEY=my_key`.
:::
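The SDK reads the key from the environment. If you prefer to fail fast with a clear message, you can add a small check at the top of your script (a hypothetical helper, not part of the SDK):

```python
import os

def require_api_key(name: str = "OPENAI_API_KEY") -> str:
    # Read the key from the environment, failing early with a helpful message.
    key = os.getenv(name)
    if not key:
        raise RuntimeError(f"Set {name} before proceeding, e.g. `export {name}=my_key`.")
    return key
```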
-### 2. Run the `get_activities` prompt.
+### 3. Load the AIConfig
+
+Create an `app.py` or `app.ts` file.
+
+We will load the AIConfig in our app file. Make sure you have downloaded [`travel.aiconfig.json`](https://github.com/lastmile-ai/aiconfig/blob/main/cookbooks/Getting-Started/travel.aiconfig.json) and it is in the same directory as your app file.
-You don't need to worry about how to run inference for the model; it's all handled by AIConfig. The prompt runs with gpt-3.5-turbo since that is the `default_model` for this AIConfig.
```python title="app.py"
-import asyncio
from aiconfig import AIConfigRuntime, InferenceOptions
-async def main():
- # Load the aiconfig
- aiconfig = AIConfigRuntime.load('travel.aiconfig.json')
-
- # Run a single prompt
- result = await aiconfig.run("get_activities")
- print(result)
-
-asyncio.run(main())
+# Load the AIConfig
+config = AIConfigRuntime.load('travel.aiconfig.json')
```
@@ -164,45 +124,33 @@ async function travelWithGPT() {
const aiConfig = AIConfigRuntime.load(
path.join(__dirname, "travel.aiconfig.json")
);
-
- // Run a single prompt
- const result = await aiConfig.run("get_activities");
- console.log(result);
}
 
 travelWithGPT();
```
-### 3. Enable streaming for your prompt.
+### 4. Run a prompt
-You can enable streaming for your prompt responses by passing in a streaming callback.
+Run the prompt `get_activities` from the AIConfig.
+This prompt generates a list of 10 activities to do in NYC and runs with **gpt-3.5-turbo**, the `default_model` for this AIConfig. You don't need to worry about how to run inference for the model; AIConfig handles it all. We will also enable streaming for the outputs.
```python title="app.py"
-import asyncio
-from aiconfig import AIConfigRuntime, InferenceOptions
+from aiconfig import AIConfigRuntime, InferenceOptions
-async def travelWithGPT():
- aiconfig = AIConfigRuntime.load("travel.aiconfig.json")
+# Load the AIConfig
+config = AIConfigRuntime.load('travel.aiconfig.json')
- # Run a single prompt (with streaming)
- options = InferenceOptions(
- stream=True,
- # Write stream data to stdout
- stream_callback=lambda data, _acc, _idx: print(data.get("content", ""), end=""),
- )
- await aiconfig.run("get_activities", options=options)
-
-if __name__ == "__main__":
- asyncio.run(travelWithGPT())
+# Run a single prompt (with streaming)
+inference_options = InferenceOptions(stream=True)
+await config.run("get_activities", options=inference_options)
```
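A note on the Python snippet above: top-level `await` only works in async-aware environments such as Jupyter/IPython. To run it as a plain `app.py` script, wrap the calls in a coroutine and drive it with `asyncio.run`, the pattern an earlier version of this guide used (minimal sketch; the commented lines stand in for the SDK calls shown above):

```python
import asyncio

async def main() -> str:
    # In your app, replace the placeholder below with the SDK calls shown above:
    #   config = AIConfigRuntime.load("travel.aiconfig.json")
    #   inference_options = InferenceOptions(stream=True)
    #   await config.run("get_activities", options=inference_options)
    await asyncio.sleep(0)  # placeholder awaitable
    return "ok"

result = asyncio.run(main())
```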
+
```typescript title="app.ts"
@@ -223,29 +171,29 @@ async function travelWithGPT() {
},
};
- // Run a single prompt
+ // Run a single prompt (with streaming)
await aiConfig.run("get_activities", /*params*/ undefined, options);
}
 
 travelWithGPT();
```
-### 4. Run the `gen_itinerary` prompt.
+Run `app.py` or `app.ts`. You will see the output of the `get_activities` prompt: 10 activities to do in NYC.
+
+### 5. Run a prompt with dependencies
-This prompt depends on the output of `get_activities`. It also takes in parameters (user input) to determine the customized itinerary.
+Run the prompt `gen_itinerary` from the AIConfig. This prompt depends on the output of `get_activities`. It also takes in parameters (user input) to determine the customized itinerary.
Let's take a closer look:
-**`gen_itinerary` prompt:**
+**`gen_itinerary` prompt**
```
"Generate an itinerary ordered by {{order_by}} for these activities: {{get_activities.output}}."
```
-**prompt metadata:**
+**`gen_itinerary` prompt metadata**
```json
{
@@ -264,37 +212,64 @@ Observe the following:
2. It also depends on an `order_by` parameter
3. It uses **gpt-4**, whereas the `get_activities` prompt it depends on uses **gpt-3.5-turbo**.
-Effectively, this is a prompt chain between `gen_itinerary` and `get_activities` prompts, _as well as_ as a model chain between **gpt-3.5-turbo** and **gpt-4**.
+This is a prompt chain between the `gen_itinerary` and `get_activities` prompts, _as well as_ a model chain between **gpt-3.5-turbo** and **gpt-4**.
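Conceptually, the `{{...}}` placeholders are resolved before inference: `{{order_by}}` comes from the params you pass in, and `{{get_activities.output}}` comes from the upstream prompt's output. A simplified stand-in for that substitution (illustrative only, not the SDK's actual resolver):

```python
import re

def resolve_template(template: str, values: dict) -> str:
    # Replace each {{key}} with its value, leaving unknown keys untouched.
    return re.sub(
        r"\{\{\s*([\w.]+)\s*\}\}",
        lambda m: values.get(m.group(1), m.group(0)),
        template,
    )

prompt = ("Generate an itinerary ordered by {{order_by}} "
          "for these activities: {{get_activities.output}}.")
resolved = resolve_template(prompt, {
    "order_by": "duration",
    "get_activities.output": "1. Walk through Central Park ...",
})
```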
-Let's run this with AIConfig:
+Run the prompt `gen_itinerary`.
-Replace `config.run` above with this:
+```python title="app.py"
+from aiconfig import AIConfigRuntime, InferenceOptions
+
+# Load the AIConfig
+config = AIConfigRuntime.load('travel.aiconfig.json')
+
+# Run a single prompt (with streaming)
+inference_options = InferenceOptions(stream=True)
+# await config.run("get_activities", options=inference_options)
-```python
-await aiconfig.run(
+# Run a prompt with dependencies (with streaming)
+await config.run(
"gen_itinerary",
params={"order_by": "duration"},
- options=options,
+ options=inference_options,
run_with_dependencies=True)
```
-Replace the `aiconfig.run` call above with this:
-
-```typescript
-// Run a prompt chain, with data passed in as params
-// This will first run get_activities with GPT-3.5, and
-// then use its output to run the gen_itinerary using GPT-4
-await aiConfig.runWithDependencies(
- "gen_itinerary",
- /*params*/ { order_by: "duration" },
- options
-);
+```typescript title="app.ts"
+import * as path from "path";
+import { AIConfigRuntime, InferenceOptions } from "aiconfig";
+
+async function travelWithGPT() {
+ const aiConfig = AIConfigRuntime.load(
+ path.join(__dirname, "travel.aiconfig.json")
+ );
+
+ const options: InferenceOptions = {
+ callbacks: {
+ streamCallback: (data: any, _acc: any, _idx: any) => {
+ // Write streamed content to console
+ process.stdout.write(data?.content || "\n");
+ },
+ },
+ };
+
+ // Run a single prompt (with streaming)
+ // await aiConfig.run("get_activities", /*params*/ undefined, options);
+
+ // Run a prompt chain, with data passed in as params
+ // This will first run get_activities with GPT-3.5, and
+ // then use its output to run the gen_itinerary using GPT-4
+ await aiConfig.runWithDependencies(
+ "gen_itinerary",
+ /*params*/ { order_by: "duration" },
+ options
+ );
+}
+
+travelWithGPT();
```
@@ -304,32 +279,82 @@ await aiConfig.runWithDependencies(
Notice how simple the syntax is to perform a fairly complex task - running 2 different prompts across 2 different models and chaining one's output as part of the input of another.
:::
-### 5. Save the AIConfig.
+Run `app.py` or `app.ts`. You will see the output of the `gen_itinerary` prompt: a customized NYC itinerary.
-Let's save the AIConfig back to disk, and serialize the outputs from the latest inference run as well:
+### 6. Save the AIConfig
+
+Save the AIConfig back to disk and serialize the outputs from the latest inference run.
-```python
-# Save the aiconfig to disk. and serialize outputs from the model run
-aiconfig.save('updated.aiconfig.json', include_outputs=True)
+```python title="app.py"
+from aiconfig import AIConfigRuntime, InferenceOptions
+
+# Load the AIConfig
+config = AIConfigRuntime.load('travel.aiconfig.json')
+
+# Run a single prompt (with streaming)
+inference_options = InferenceOptions(stream=True)
+# await config.run("get_activities", options=inference_options)
+
+# Run a prompt with dependencies (with streaming)
+await config.run(
+    "gen_itinerary",
+    params={"order_by": "duration"},
+    options=inference_options,
+    run_with_dependencies=True)
+
+# Save the aiconfig to disk and serialize outputs from the model run
+config.save('updated.aiconfig.json', include_outputs=True)
```
-```typescript
-// Save the AIConfig to disk, and serialize outputs from the model run
-aiConfig.save(
- "updated.aiconfig.json",
- /*saveOptions*/ { serializeOutputs: true }
-);
+```typescript title="app.ts"
+import * as path from "path";
+import { AIConfigRuntime, InferenceOptions } from "aiconfig";
+
+async function travelWithGPT() {
+ const aiConfig = AIConfigRuntime.load(
+ path.join(__dirname, "travel.aiconfig.json")
+ );
+
+ const options: InferenceOptions = {
+ callbacks: {
+ streamCallback: (data: any, _acc: any, _idx: any) => {
+ // Write streamed content to console
+ process.stdout.write(data?.content || "\n");
+ },
+ },
+ };
+
+ // Run a single prompt (with streaming)
+ // await aiConfig.run("get_activities", /*params*/ undefined, options);
+
+ // Run a prompt chain, with data passed in as params
+ // This will first run get_activities with GPT-3.5, and
+ // then use its output to run the gen_itinerary using GPT-4
+ await aiConfig.runWithDependencies(
+ "gen_itinerary",
+ /*params*/ { order_by: "duration" },
+ options
+ );
+
+ // Save the AIConfig to disk, and serialize outputs from the model run
+ aiConfig.save(
+ "updated.aiconfig.json",
+ /*saveOptions*/ { serializeOutputs: true }
+ );
+}
+
+travelWithGPT();
```
+Run `app.py` or `app.ts`. You will see `updated.aiconfig.json` in your current directory with the outputs of your latest model runs.
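To sanity-check the saved file, you can reload it with only the standard library and confirm the outputs were serialized (a minimal sketch; the key names follow the [`aiconfig` file format](/docs/overview/ai-config-format)):

```python
import json

def prompts_with_outputs(path: str) -> list:
    # List the names of prompts that carry serialized outputs in a saved AIConfig.
    with open(path) as f:
        saved = json.load(f)
    return [p["name"] for p in saved.get("prompts", []) if p.get("outputs")]

# Usage after running the app:
# print(prompts_with_outputs("updated.aiconfig.json"))
```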
+
:::note
The AIConfig SDK supports [CRUD](https://en.wikipedia.org/wiki/Create,_read,_update_and_delete) operations for prompts, models, parameters, and arbitrary metadata in the `aiconfig`.
@@ -337,34 +362,28 @@ The AIConfig SDK supports [CRUD](https://en.wikipedia.org/wiki/Create,_read,_upd
:::
-### 6. Open the AIConfig in AI Workbook editor.
+### 7. Edit the AIConfig in a Notebook Editor
-We can iterate on an `aiconfig` using a notebook-like editor called an **AI Workbook**. Now that we have an `aiconfig` file artifact that encapsulates the generative AI part of our application, the application code doesn't need to change even as the `aiconfig` is updated.
+We can iterate on an `aiconfig` using an **AI Workbook**, a notebook editor for prototyping prompt chains. Now that we have an `aiconfig` file artifact that encapsulates the generative AI part of our application, the application code doesn't need to change even as the `aiconfig` is updated.
1. Go to https://lastmileai.dev.
2. Go to Workbooks page: https://lastmileai.dev/workbooks
3. Click dropdown from '+ New Workbook' and select 'Create from AIConfig'
4. Upload `travel.aiconfig.json`
-Try out the workbook playground here: **[NYC Travel Workbook](https://lastmileai.dev/workbooks/clooqs3p200kkpe53u6n2rhr9)**
+Try out the AI Workbook for this tutorial: **[NYC Travel Workbook](https://lastmileai.dev/workbooks/clooqs3p200kkpe53u6n2rhr9)**
+
+
:::note
We are working on a local editor that you can run yourself. For now, please use the hosted version on https://lastmileai.dev.
:::
-
+## Next Steps
-```
-
-```
+We have several other tutorials and examples to help get you started:
-### Code for Getting Started
-
-Tutorial code can be found [here](https://github.com/lastmile-ai/aiconfig/tree/ad38040ec3d9f0273e006464e01e02b06f2809e9/cookbook/Getting-Started).
-
-## Additional Guides
-
-There is a lot you can do with `aiconfig`. We have several other tutorials to help get you started:
+**How-to Guides**
- [Create an AIConfig from scratch](/docs/overview/create-an-aiconfig)
- [Run a prompt](/docs/overview/run-aiconfig)
@@ -372,10 +391,18 @@ There is a lot you can do with `aiconfig`. We have several other tutorials to he
- [Prompt chains](/docs/overview/define-prompt-chain)
- [Callbacks and monitoring](/docs/overview/monitoring-aiconfig)
-### Cookbooks
+**Cookbook**
+Start with these recipes and find more [here](https://github.com/lastmile-ai/aiconfig/tree/main/cookbooks):
+
+- [OpenAI Function Calling](https://github.com/lastmile-ai/aiconfig/tree/main/cookbooks/Function-Calling-OpenAI)
- [CLI Chatbot](https://github.com/lastmile-ai/aiconfig/tree/main/cookbooks/Wizard-GPT)
- [RAG with AIConfig](https://github.com/lastmile-ai/aiconfig/tree/main/cookbooks/RAG-with-AIConfig)
-- [Prompt routing](https://github.com/lastmile-ai/aiconfig/tree/main/cookbooks/Basic-Prompt-Routing)
-- [OpenAI function calling](https://github.com/lastmile-ai/aiconfig/tree/main/cookbooks/Function-Calling-OpenAI)
-- [Chain of thought](https://github.com/lastmile-ai/aiconfig/tree/main/cookbooks/Chain-of-Verification)
+- [Prompt Routing](https://github.com/lastmile-ai/aiconfig/tree/main/cookbooks/Basic-Prompt-Routing)
+
+## Coming Soon
+
+- Local editor to iterate on your AIConfigs
+- Evaluation interfaces: allow AIConfig artifacts to be evaluated with user-defined eval functions
+- OpenAI Assistants API support
+- Multi-modal ModelParsers: GPT-4V support, Whisper, HuggingFace image generation