Description
We need to create an API endpoint for querying and retrieving detailed traces for a specific run of an agent. A run represents a single iteration through the workflow and logs structured data such as the state before execution, interpolated prompts, LLM output, actions taken, and associated outputs. This endpoint will support developers and prompt engineers in debugging, analyzing, and improving the agent's behavior, and its Swagger UI integration should make it easy for non-technical users to review run data as well.
Objectives
Develop an API endpoint to retrieve run traces.
Ensure the data returned includes all relevant categories of information:
State before execution.
Interpolated prompts.
Output of the LLM.
Actions taken and their results.
Support optional query parameters for efficient filtering and pagination.
Optimize the endpoint for both technical and non-technical users (UI integration-ready).
Ensure efficient database querying with proper indexing.
Define API Route
Create a GET /runs/{run_id}/traces endpoint.
Accept optional query parameters such as:
agent_name: Filter by the agent associated with the run.
date_range: Start and end dates to filter runs.
events: Filter by specific categories of logged events (e.g., "LLM output," "actions").
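The route and its optional parameters could be parsed and validated as below. This is a minimal sketch, not the project's actual implementation; the parameter names follow the list above, but the event-type values, defaults, and the 200-item page cap are assumptions.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import Optional

# Assumed event categories, mirroring the logged data listed in this ticket.
VALID_EVENTS = {"state_before", "interpolated_prompt", "llm_output", "actions"}

@dataclass
class TraceQuery:
    agent_name: Optional[str] = None
    date_from: Optional[date] = None
    date_to: Optional[date] = None
    events: list = field(default_factory=list)
    page: int = 1
    limit: int = 50

def parse_trace_query(params: dict) -> TraceQuery:
    """Validate raw query-string parameters for GET /runs/{run_id}/traces."""
    q = TraceQuery()
    q.agent_name = params.get("agent_name")
    if "date_range" in params:
        # Assumed wire format: "YYYY-MM-DD,YYYY-MM-DD" (end date optional).
        start, _, end = params["date_range"].partition(",")
        q.date_from = date.fromisoformat(start)
        q.date_to = date.fromisoformat(end) if end else None
    if "events" in params:
        # Silently drop unknown event types rather than erroring.
        q.events = [e for e in params["events"].split(",") if e in VALID_EVENTS]
    q.page = max(1, int(params.get("page", 1)))
    q.limit = min(200, max(1, int(params.get("limit", 50))))  # cap page size
    return q
```

In a FastAPI app these fields would map directly to typed query parameters, which is also what gives Swagger UI its form inputs for free.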
Database Query Design
Design a query to fetch all traces for a run, ensuring:
Efficient use of indexes for run_id, agent_name, and timestamp.
Support for optional filters, such as date range and event types.
Add database indexing where necessary to ensure high performance.
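A possible shape for the trace table and its indexes is sketched below with stdlib sqlite3; the table and column names are assumptions, not the project's real schema. The composite (run_id, timestamp) index serves the main "all traces for a run, in execution order" lookup in one index scan, and the (agent_name, timestamp) index supports the optional agent filter.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE traces (
    id INTEGER PRIMARY KEY,
    run_id TEXT NOT NULL,
    agent_name TEXT NOT NULL,
    timestamp TEXT NOT NULL,    -- ISO-8601 UTC, sortable as text
    event_type TEXT NOT NULL,   -- e.g. 'llm_output', 'actions'
    payload TEXT NOT NULL       -- JSON-encoded event data
);
-- Composite index covers the run_id lookup plus the timestamp ordering.
CREATE INDEX idx_traces_run_ts ON traces (run_id, timestamp);
-- Secondary index for the optional agent_name filter.
CREATE INDEX idx_traces_agent_ts ON traces (agent_name, timestamp);
""")
```

The same index layout translates to Postgres or MySQL; in production a TIMESTAMPTZ column would replace the text timestamp.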
Implement API Endpoint
Write the API logic to:
Retrieve data based on run_id.
Apply optional filters dynamically (agent name, date range, etc.).
Return data in execution order (sorted by timestamp).
Add support for pagination using page and limit query parameters.
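The dynamic-filter logic above can be sketched as a small parameterized query builder; table and column names are the same illustrative assumptions as elsewhere in this ticket, and LIMIT/OFFSET is one straightforward pagination choice (keyset pagination would scale better for very deep pages).

```python
def build_traces_query(run_id, agent_name=None, date_from=None, date_to=None,
                       events=None, page=1, limit=50):
    """Build a parameterized SQL query that applies the optional filters,
    orders by timestamp, and paginates with LIMIT/OFFSET."""
    sql = "SELECT * FROM traces WHERE run_id = ?"
    params = [run_id]
    if agent_name:
        sql += " AND agent_name = ?"
        params.append(agent_name)
    if date_from:
        sql += " AND timestamp >= ?"
        params.append(date_from)
    if date_to:
        sql += " AND timestamp <= ?"
        params.append(date_to)
    if events:
        # One placeholder per requested event type.
        sql += " AND event_type IN (%s)" % ",".join("?" * len(events))
        params.extend(events)
    sql += " ORDER BY timestamp LIMIT ? OFFSET ?"
    params.extend([limit, (page - 1) * limit])
    return sql, params
```

Using placeholders throughout keeps the endpoint safe from SQL injection even though the filters are assembled dynamically.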
Test Data Logging and Retrieval
Ensure the database stores all required categories for a run:
State before execution.
Interpolated prompt.
LLM output.
Actions and results.
Validate that all data is retrievable in the expected structured format.
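A completeness check like the following could back that validation step. The payload fields are purely illustrative placeholders; only the four category names come from this ticket.

```python
# The four categories a complete run must log (from this ticket's list).
REQUIRED_EVENT_TYPES = {"state_before", "interpolated_prompt",
                        "llm_output", "actions"}

# Hypothetical example traces; real payloads carry full run data.
EXAMPLE_TRACES = [
    {"event_type": "state_before", "payload": {"state": "example"}},
    {"event_type": "interpolated_prompt", "payload": {"prompt": "example"}},
    {"event_type": "llm_output", "payload": {"text": "example"}},
    {"event_type": "actions", "payload": {"action": "example", "result": "ok"}},
]

def missing_categories(traces):
    """Return the required event categories absent from a run's traces."""
    return REQUIRED_EVENT_TYPES - {t["event_type"] for t in traces}
```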
Output Formatting
Structure the output to make it easy for both technical users (prompt engineers) and non-technical users (UI review) to understand:
Include input/output data in a readable format.
Organize events sequentially for clear traceability.
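One way to shape the response body for both audiences is sketched below: events are sorted into execution order and each gets an explicit step number so the trace reads as a numbered timeline in the UI. The field names are assumptions.

```python
def format_run_traces(run_id, traces):
    """Shape raw trace rows into a sequential, UI-friendly response body."""
    ordered = sorted(traces, key=lambda t: t["timestamp"])
    return {
        "run_id": run_id,
        "event_count": len(ordered),
        "events": [
            {
                "step": i + 1,               # 1-based position in the run
                "timestamp": t["timestamp"],
                "type": t["event_type"],
                "data": t["payload"],
            }
            for i, t in enumerate(ordered)
        ],
    }
```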
Testing and Validation
Write unit tests for the endpoint to validate:
Proper query execution.
Filtering, pagination, and optional parameters.
Conduct load testing to ensure efficient performance with large datasets.
Ensure the data returned matches execution order and is complete.
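The unit tests above might look like the following sketch, written against a stubbed handler core (the `get_run_traces` function and in-memory store here are hypothetical stand-ins, not the real endpoint):

```python
import unittest

def get_run_traces(store, run_id, page=1, limit=50):
    """Stubbed handler core for testing: returns (status_code, body)."""
    if run_id not in store:
        return 404, {"error": "run not found"}
    rows = sorted(store[run_id], key=lambda t: t["timestamp"])
    start = (page - 1) * limit
    return 200, {"events": rows[start:start + limit]}

class TraceEndpointTests(unittest.TestCase):
    STORE = {"r1": [
        {"timestamp": "2024-01-01T00:00:01Z", "event_type": "llm_output"},
        {"timestamp": "2024-01-01T00:00:00Z", "event_type": "state_before"},
    ]}

    def test_missing_run_returns_404(self):
        status, _ = get_run_traces(self.STORE, "nope")
        self.assertEqual(status, 404)

    def test_events_in_execution_order(self):
        _, body = get_run_traces(self.STORE, "r1")
        self.assertEqual([e["event_type"] for e in body["events"]],
                         ["state_before", "llm_output"])

    def test_pagination_past_end_is_empty(self):
        _, body = get_run_traces(self.STORE, "r1", page=99)
        self.assertEqual(body["events"], [])
```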
Acceptance Criteria:
API Functionality:
GET /runs/{run_id}/traces endpoint retrieves all traces for a given run.
Supports optional query parameters: agent_name, date_range, events, page, and limit.
Data is returned in sequential execution order.
Output includes state before execution, interpolated prompt, LLM output, actions taken, and action results.
Performance:
Queries are optimized using database indexing for run_id, agent_name, and timestamp.
Pagination and filtering work efficiently even with large datasets.
Scalability:
The endpoint handles a large number of records without noticeable performance degradation.
Testing:
Unit tests cover various scenarios (e.g., missing run_id, invalid filters, empty results).
Load testing validates performance with large-scale data.
@jzvikart as noted here, we need to sync with @snobbee about the Docker container implementation and about exposing the DB ports so we can either tunnel into the container or deploy this code inside it as part of the build.
@VisionOra you mentioned this is completed (on ticket #196). Please add a comment here and/or associate a PR with the updates so we can review, give feedback, and merge it into the codebase.