Typos and prose fixes in Konnect Advanced Analytics section #8394

Merged 1 commit on Feb 12, 2025
6 changes: 3 additions & 3 deletions app/konnect/analytics/explorer.md
@@ -121,7 +121,7 @@ Traffic metrics provide insight into which of your services are being used and h
| Completion Tokens | Count | Completion tokens are any tokens that the model generates in response to an input. |
| Prompt Tokens | Count | Prompt tokens are the tokens input into the model. |
| Total Tokens | Count | Sum of all tokens used in a single request to the model. It includes both the tokens in the input (prompt) and the tokens generated by the model (completion). |
-| Time per Tokens | Number | Average time in milliseconds to generate a token. Calculated as LLM latency divided by the of tokens. |
+| Time per Tokens | Number | Average time in milliseconds to generate a token. Calculated as LLM latency divided by the number of tokens. |
| Costs | Cost | Represents the resulting costs for a request. Final costs = (total number of prompt tokens × input cost per token) + (total number of completion tokens × output cost per token) + (total number of prompt tokens × embedding cost per token). |
| Response Model | String | Represents which AI model was used to process the prompt by the AI provider. |
| Request Model | String | Represents which AI model was used to process the prompt. |
@@ -193,14 +193,14 @@ One way you can build custom reports is by navigating to {% konnect_icon analyti
* **With**: Kong Latency (p95), Upstream Latency (p95)
* **Per**: Minute

-Then, you can add a filter to filter by the control plane:
+Then, you can add a filter for the control plane:

* **Filter By**: Control Plane
* **Choose Operator**: In
* **Filter Value**: prod

![Production - Kong vs Upstream Latency (last hour)](/assets/images/products/konnect/analytics/custom-reports/kong-vs-upstream-latency.png){:.image-border}
->_**Figure 1:** Line chart showing average upstream and Kong latency over the last hour. ._
+>_**Figure 1:** Line chart showing average upstream and Kong latency over the last hour._

## More information

4 changes: 2 additions & 2 deletions app/konnect/analytics/use-cases/latency.md
@@ -22,14 +22,14 @@ You can build custom reports by navigating to {% konnect_icon analytics %} **Ana
{:.note}
> You can select more than one metric by clicking on **Select Multiple** next to the Metrics dropdown list.

-Then, they add a filter to filter by the control plane
+Then, you can add a filter for the control plane:

* **Filter by**: Control Plane
* **Operator**: In
* **Value**: prod

![Production - Kong vs Upstream Latency (last hour)](/assets/images/products/konnect/analytics/custom-reports/kong-vs-upstream-latency.png){:.image-border}
->_**Figure 1:** Line chart showing average upstream and Kong latency over the last hour. ._
+>_**Figure 1:** Line chart showing average upstream and Kong latency over the last hour._

## Conclusion
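
The two computed metrics touched by this diff (Costs and Time per Tokens in the explorer.md table) can be sketched in code. This is an illustrative reading of the documented formulas only, not Konnect's actual implementation; the function and parameter names are invented for the example.

```python
def request_cost(prompt_tokens: int, completion_tokens: int,
                 input_cost_per_token: float, output_cost_per_token: float,
                 embedding_cost_per_token: float = 0.0) -> float:
    """Costs formula from the explorer.md table:
    final costs = (prompt tokens x input cost per token)
                + (completion tokens x output cost per token)
                + (prompt tokens x embedding cost per token).
    """
    return (prompt_tokens * input_cost_per_token
            + completion_tokens * output_cost_per_token
            + prompt_tokens * embedding_cost_per_token)


def time_per_token(llm_latency_ms: float, total_tokens: int) -> float:
    """Time per Tokens metric: LLM latency (ms) divided by the
    number of tokens, i.e. average milliseconds per generated token."""
    return llm_latency_ms / total_tokens
```

For example, a request with 1,000 prompt tokens and 500 completion tokens at hypothetical per-token prices of $0.00001 (input) and $0.00003 (output) would cost `request_cost(1000, 500, 0.00001, 0.00003)`, and a 2,000 ms LLM latency over 100 total tokens gives `time_per_token(2000, 100)` = 20 ms per token.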