
[MLOB-2058] Add Node.js Quickstart Sections for LLM Observability #27211

Status: Open · wants to merge 6 commits into `master`
90 changes: 76 additions & 14 deletions content/en/llm_observability/quickstart.md
@@ -13,7 +13,7 @@

## Overview

This guide uses the [LLM Observability SDK for Python][1]. If your application is written in another language, you can create traces by calling the [API][8] instead.
This guide uses the LLM Observability SDKs for [Python][1] and [Node.js][2]. If your application is written in another language, you can create traces by calling the [API][8] instead.

## Setup

@@ -23,7 +23,7 @@

## Command line

To generate an LLM Observability trace, you can run a Python script.
To generate an LLM Observability trace, you can run a Python or Node.js script.

### Prerequisites

@@ -32,16 +32,33 @@
- An OpenAI API key stored in your environment as `OPENAI_API_KEY`. To create one, see [Account Setup][4] and [Set up your API key][6] in the official OpenAI documentation.
- The OpenAI Python library installed. See [Setting up Python][5] in the official OpenAI documentation for instructions.

1. Install the SDK by adding the `ddtrace` and `openai` packages:
1. Install the SDK and OpenAI packages:

{{< code-block lang="shell" >}}
{{< tabs >}}
{{% tab "Python" %}}

```shell
pip install ddtrace
pip install openai
{{< /code-block >}}
```

{{% /tab %}}
{{% tab "Node.js" %}}

```shell
npm install dd-trace
npm install openai
```

1. Create a Python script and save it as `quickstart.py`. This Python script makes a single OpenAI call.
{{% /tab %}}
{{< /tabs >}}

2. Create a script that makes a single OpenAI call. Save it as `quickstart.py` (Python) or `quickstart.js` (Node.js) to match the run command in the next step.

> vale warning (GitHub Actions) on line 56 in content/en/llm_observability/quickstart.md: `Datadog.tense`: Avoid temporal words like 'will'.

{{< code-block lang="python" filename="quickstart.py" >}}
{{< tabs >}}
{{% tab "Python" %}}

```python
import os
from openai import OpenAI

@@ -54,40 +71,85 @@
{"role": "user", "content": "I'd like to buy a chair for my living room."},
],
)
{{< /code-block >}}
```

{{% /tab %}}
{{% tab "Node.js" %}}

```javascript
const { OpenAI } = require('openai');

const oaiClient = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

async function main () {
  const completion = await oaiClient.chat.completions.create({
    model: 'gpt-3.5-turbo',
    messages: [
      { role: 'system', content: 'You are a helpful customer assistant for a furniture store.' },
      { role: 'user', content: 'I\'d like to buy a chair for my living room.' },
    ]
  });
}

main();
```

{{% /tab %}}
{{< /tabs >}}

1. Run the Python script with the following shell command. This sends a trace of the OpenAI call to Datadog.
3. Run the script with the following shell command. This sends a trace of the OpenAI call to Datadog.

{{< tabs >}}
> **Contributor Author:** adding tabs here seems to make the page jump when clicking between tabs, but just for this section.
{{% tab "Python" %}}

```shell
DD_LLMOBS_ENABLED=1 DD_LLMOBS_ML_APP=onboarding-quickstart \
DD_API_KEY=<YOUR_DATADOG_API_KEY> DD_SITE={{< region-param key="dd_site" >}} \
DD_LLMOBS_AGENTLESS_ENABLED=1 ddtrace-run python quickstart.py
```
> **Contributor Author** (on lines 105 to 109): this now seems to render the DD_SITE portion as `DD_SITE=<span class="js-region-param region-param" data-region-param="dd_site"></span>` without any changes to the actual block, similar for the Node.js tab. Do these region shortcodes not work inside of tab shortcodes?


For more information about required environment variables, see [the SDK documentation][9].
For more information about required environment variables, see [the SDK documentation][1].

[1]: /llm_observability/setup/sdk/python/#command-line-setup

{{% /tab %}}
{{% tab "Node.js" %}}

```shell
DD_LLMOBS_ENABLED=1 DD_LLMOBS_ML_APP=onboarding-quickstart \
DD_API_KEY=<YOUR_DATADOG_API_KEY> DD_SITE={{< region-param key="dd_site" >}} \
DD_LLMOBS_AGENTLESS_ENABLED=1 NODE_OPTIONS="--import dd-trace/initialize.mjs" node quickstart.js
```

For more information about required environment variables, see [the SDK documentation][1].

[1]: /llm_observability/setup/sdk/nodejs/#command-line-setup

{{% /tab %}}
{{< /tabs >}}

**Note**: `DD_LLMOBS_AGENTLESS_ENABLED` is only required if you do not have the Datadog Agent running. If the Agent is running in your production environment, make sure this environment variable is unset.
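
One way to check which mode applies is to probe the Agent's trace intake port. The sketch below is an assumption-laden convenience (8126 is the Agent's default APM port, and `localhost` may not match a containerized setup), not part of the official setup:

```python
# Sketch: probe the local Datadog Agent trace port to decide whether
# agentless mode is needed. Host and port are assumptions; adjust as needed.
import socket

def agent_reachable(host="localhost", port=8126, timeout=1.0):
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

agentless_needed = not agent_reachable()
print("set DD_LLMOBS_AGENTLESS_ENABLED=1" if agentless_needed
      else "leave DD_LLMOBS_AGENTLESS_ENABLED unset")
```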

1. View the trace of your LLM call on the **Traces** tab [of the **LLM Observability** page][3] in Datadog.
4. View the trace of your LLM call on the **Traces** tab [of the **LLM Observability** page][3] in Datadog.

{{< img src="llm_observability/quickstart_trace_1.png" alt="An LLM Observability trace displaying a single LLM request" style="width:100%;" >}}

The trace you see is composed of a single LLM span. The `ddtrace-run` command automatically traces your LLM calls from [Datadog's list of supported integrations][10].
The trace you see is composed of a single LLM span. The `ddtrace-run` command (Python) or the `NODE_OPTIONS="--import dd-trace/initialize.mjs"` option (Node.js) automatically traces your LLM calls from [Datadog's list of supported integrations][10].

If your application involves more elaborate prompting, or chains and workflows built around LLM calls, you can trace it using the [Setup documentation][11] and the [SDK documentation][1].
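
For such multi-step applications, it can help to structure the code so that each step maps to a would-be span. The sketch below is plain Python: the `@workflow`/`@task` decorator names mentioned in the comments come from the Python SDK documentation linked above, but the snippet itself does not depend on `ddtrace`:

```python
# Sketch: structure a multi-step LLM workflow so each step could later be
# traced (e.g. with @workflow / @task decorators from the Python SDK).
def retrieve_context(question):
    # candidate @task span: deterministic retrieval/preprocessing
    return f"catalog results for: {question}"

def build_prompt(question, context):
    # candidate @task span: prompt assembly
    return [
        {"role": "system", "content": "You are a helpful customer assistant for a furniture store."},
        {"role": "user", "content": f"{question}\n\nContext: {context}"},
    ]

def answer(question):
    # candidate @workflow span: the end-to-end unit that would appear as
    # one trace; the actual LLM call from the quickstart script would go here
    context = retrieve_context(question)
    return build_prompt(question, context)

messages = answer("I'd like to buy a chair for my living room.")
```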

## Further Reading

{{< partial name="whats-next/whats-next.html" >}}

[1]: /llm_observability/setup/sdk/
[1]: /llm_observability/setup/sdk/python
[2]: /llm_observability/setup/sdk/nodejs
[3]: https://app.datadoghq.com/llm/traces
[4]: https://platform.openai.com/docs/quickstart/account-setup
[5]: https://platform.openai.com/docs/quickstart/step-1-setting-up-python
[6]: https://platform.openai.com/docs/quickstart/step-2-set-up-your-api-key
[7]: /account_management/api-app-keys/#add-an-api-key-or-client-token
[8]: /llm_observability/setup/api
[9]: /llm_observability/setup/sdk/#command-line-setup
[10]: /llm_observability/setup/auto_instrumentation/
[11]: /llm_observability/setup/
[12]: https://github.com/DataDog/llm-observability