Commit
Merge pull request #16658 from lchockalingam/whats-new-03-26-aimonitoringga

MERGE 03-28 AM: AI monitoring what's new
Showing 1 changed file with 28 additions and 0 deletions.
src/content/whats-new/2024/03/whats-new-03-26-aimonitoringga.md (28 additions, 0 deletions)
@@ -0,0 +1,28 @@
---
title: 'New Relic AI monitoring is now generally available'
summary: 'Gain in-depth insights across your AI application stack to improve performance, quality and cost'
releaseDate: '2024-03-26'
learnMoreLink: 'https://newrelic.com/blog/nerdlog/ai-monitoring-ga'
getStartedLink: 'https://docs.newrelic.com/docs/ai-monitoring/intro-to-ai-monitoring/'
---

We’re happy to announce that the industry’s first APM for AI, New Relic AI monitoring, is now available to all our customers.

New Relic AI monitoring gives you deep insights and unprecedented visibility across your entire AI stack, so you can build and run AI applications with confidence. You can now take advantage of:

* **Auto-instrumentation:** New Relic agents come equipped with all AI monitoring capabilities, including full AI stack visibility, response tracing, model comparison, and simplified setup for popular AI frameworks such as OpenAI, Bedrock, and LangChain in Python, Node.js, Ruby, and Go (a minimal setup sketch follows this list).
* **Full AI stack visibility:** Get a holistic view across the application, infrastructure, and AI layers, including AI metrics like response quality and token counts, all displayed alongside APM golden signals.
* **LLM response overview with end-user feedback:** Quickly identify trends and outliers in LLM responses with a consolidated view. Sentiment analysis and actual user feedback are now displayed alongside AI responses, empowering you to prioritize areas for improvement, ensure unbiased outputs, and maintain user trust.
* **Deep trace insights for every response:** Trace the lifecycle of complex LLM responses built with tools like LangChain to fix performance issues and quality problems such as bias, toxicity, and hallucination.
* **Enhanced data security:** Safeguard sensitive data (PII) sent to your AI application with the new drop filter functionality, which lets you selectively exclude specific data types from monitoring to maintain compliance and protect user privacy.
* **Optimized model performance and cost:** Compare performance and cost across models or services in a single view to choose the model that best fits your needs.
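
To make the "simplified setup" bullet above concrete, here is a minimal sketch of what instrumenting an OpenAI call with the Python agent might look like. The `ai_monitoring.*` setting names in the comments and the model name are illustrative assumptions rather than details taken from this announcement; check the get-started docs linked above for the exact configuration keys.

```python
# Minimal sketch (not the official setup guide): run an OpenAI call under the
# New Relic Python agent so it is traced as part of a transaction.
#
# newrelic.ini (assumed setting names, for illustration only):
#   ai_monitoring.enabled = true                  # collect LLM events
#   ai_monitoring.record_content.enabled = true   # include prompt/response text

import newrelic.agent

# Load the agent configuration before importing the frameworks you want
# auto-instrumented (OpenAI, Bedrock, LangChain, ...).
newrelic.agent.initialize("newrelic.ini")

from openai import OpenAI  # instrumented once the agent is initialized

client = OpenAI()  # reads OPENAI_API_KEY from the environment


@newrelic.agent.background_task(name="ai-monitoring-demo")
def ask(question: str) -> str:
    # LLM calls made inside a transaction show up in AI monitoring with
    # token counts, latency, and the full response trace.
    completion = client.chat.completions.create(
        model="gpt-4",  # assumed model name for the example
        messages=[{"role": "user", "content": question}],
    )
    return completion.choices[0].message.content


if __name__ == "__main__":
    print(ask("Summarize what New Relic AI monitoring captures."))
```

In production you would more commonly launch the process with the agent's `newrelic-admin run-program` wrapper rather than calling `initialize()` by hand.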

<iframe src="https://drive.google.com/file/d/1zAVYAbPPQ7NMj19GUXKjfpW9Ybq44vlt/preview" width="640" height="480" allow="autoplay"></iframe>