
Feature/improved event logging #2214

Merged Jan 9, 2025: 7 commits into main from feature/improved-event-logging

Conversation

@essweine (Collaborator) commented Jan 9, 2025

Log all events generated by the backend to the event stream, with as much process-specific info as possible, and simplify the event format.

Summary by CodeRabbit

  • New Features

    • Added configuration for event stream source with default value.
    • Enhanced logging capabilities with more detailed error tracking.
    • Improved metadata extraction and storage for process instances.
  • Bug Fixes

    • Corrected a typo in an exception message ("happend" to "happened").
  • Refactor

    • Streamlined logging structure and method signatures.
    • Simplified metadata handling in process instance processing.
  • Chores

    • Updated logging and event tracking mechanisms across multiple services.

@coderabbitai bot (Contributor) commented Jan 9, 2025

Warning

Rate limit exceeded

@essweine has exceeded the limit for the number of commits or files that can be reviewed per hour. Please wait 7 minutes and 26 seconds before requesting another review.


📥 Commits

Reviewing files that changed from the base of the PR and between 283db6a and 8f60246.

📒 Files selected for processing (1)
  • spiffworkflow-backend/src/spiffworkflow_backend/services/logging_service.py (3 hunks)
📝 Walkthrough

This pull request enhances logging and metadata handling across multiple services in the SpiffWorkflow backend. A new configuration variable for the event stream source is added, and several services are modified to provide more detailed logging, including changes to log formatting, event tracking, and metadata storage. The goal is more comprehensive and flexible logging alongside simpler code structures.

Changes

  • config/default.py: added new configuration variable SPIFFWORKFLOW_BACKEND_EVENT_STREAM_SOURCE with default value "spiffworkflow.org"
  • routes/process_instances_controller.py: updated logging in process_instance_create to use a log_extras dictionary with additional context
  • services/logging_service.py: renamed _spiff_logger from "spiff" to "spiff.event"; modified SpiffLogHandler to change the JSON output structure; updated the log_event method signature to accept message and log_extras
  • services/process_instance_processor.py: refactored extract_metadata to return a dictionary; added a new store_metadata method for handling metadata storage
  • services/process_instance_tmp_service.py: added a log_event parameter to add_event_to_process_instance with enhanced error logging
  • services/process_model_service.py: added a new extract_metadata class method for metadata extraction
  • services/task_service.py: added a log_event parameter to update_task_model_with_spiff_task
  • services/workflow_execution_service.py: added imports for LoggingService and ProcessModelService; enhanced the did_complete_task method with detailed logging and metadata extraction

Sequence Diagram

sequenceDiagram
    participant WES as Workflow Execution Service
    participant PS as Process Model Service
    participant LS as Logging Service
    participant PIP as Process Instance Processor

    WES->>PS: extract_metadata()
    PS-->>WES: return metadata
    WES->>LS: log_event(message, log_extras)
    WES->>PIP: store_metadata(metadata)
    PIP-->>WES: metadata stored
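
To make the flow concrete, here is a minimal, runnable Python sketch of the sequence above. The function names follow the diagram, but every signature, body, and the "invoice_id" example key are illustrative assumptions, not the actual implementation.

import uuid

def extract_metadata(task_data: dict) -> dict:
    # Stand-in for ProcessModelService.extract_metadata; "invoice_id" is a
    # made-up example key.
    return {k: v for k, v in task_data.items() if k == "invoice_id"}

def log_event(message: str, log_extras: dict) -> None:
    # Stand-in for LoggingService.log_event.
    print({"id": str(uuid.uuid4()), "message": message, **log_extras})

def store_metadata(store: dict, metadata: dict) -> None:
    # Stand-in for ProcessInstanceProcessor.store_metadata.
    store.update(metadata)

def did_complete_task(store: dict, task_data: dict) -> None:
    metadata = extract_metadata(task_data)               # WES -> PS
    log_event("task_completed", {"metadata": metadata})  # WES -> LS
    store_metadata(store, metadata)                      # WES -> PIP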


@coderabbitai bot (Contributor) left a comment

Actionable comments posted: 0

🧹 Nitpick comments (2)
spiffworkflow-backend/src/spiffworkflow_backend/services/logging_service.py (1)

62-100: Consider extracting property lists to class constants.

The property lists for different record types could be moved to class constants to improve maintainability and reusability.

 class SpiffLogHandler(SocketHandler):
+    TASK_PROPERTIES = [
+        "workflow_spec",
+        "task_spec",
+        "task_id",
+        "task_type",
+        "state",
+        "last_state_change",
+        "elapsed",
+        "parent",
+    ]
+    
+    WORKFLOW_PROPERTIES = ["workflow_spec", "completed", "success"]
+    
+    EVENT_PROPERTIES = ["bpmn_name", "milestone", "task_id", "task_spec", "metadata", "error_info"]
+
     def filter(self, record: Any) -> bool:
         if record.name.startswith("spiff"):
             # ... existing code ...
             if record.name == "spiff.task":
-                properties = [
-                    "workflow_spec",
-                    "task_spec",
-                    "task_id",
-                    "task_type",
-                    "state",
-                    "last_state_change",
-                    "elapsed",
-                    "parent",
-                ]
+                properties = self.TASK_PROPERTIES
             elif record.name == "spiff.workflow":
-                properties = ["workflow_spec", "completed", "success"]
+                properties = self.WORKFLOW_PROPERTIES
             elif record.name == "spiff.event":
-                properties = ["bpmn_name", "milestone", "task_id", "task_spec", "metadata", "error_info"]
+                properties = self.EVENT_PROPERTIES
spiffworkflow-backend/src/spiffworkflow_backend/services/process_model_service.py (1)

131-146: Consider tracking these schema suggestions in a GitHub issue.

The comments outline a good suggestion for caching metadata extraction paths in database tables. This would improve performance by reducing git operations.

Would you like me to create a GitHub issue to track this database schema enhancement suggestion?

📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 005de8a and 5f89908.

📒 Files selected for processing (8)
  • spiffworkflow-backend/src/spiffworkflow_backend/config/default.py (1 hunks)
  • spiffworkflow-backend/src/spiffworkflow_backend/routes/process_instances_controller.py (1 hunks)
  • spiffworkflow-backend/src/spiffworkflow_backend/services/logging_service.py (3 hunks)
  • spiffworkflow-backend/src/spiffworkflow_backend/services/process_instance_processor.py (3 hunks)
  • spiffworkflow-backend/src/spiffworkflow_backend/services/process_instance_tmp_service.py (4 hunks)
  • spiffworkflow-backend/src/spiffworkflow_backend/services/process_model_service.py (1 hunks)
  • spiffworkflow-backend/src/spiffworkflow_backend/services/task_service.py (1 hunks)
  • spiffworkflow-backend/src/spiffworkflow_backend/services/workflow_execution_service.py (2 hunks)
⏰ Context from checks skipped due to timeout of 90000ms (13)
  • GitHub Check: typeguard 3.12 / ubuntu-latest sqlite
  • GitHub Check: typeguard 3.11 / ubuntu-latest sqlite
  • GitHub Check: tests 3.11 / macos-latest sqlite
  • GitHub Check: tests 3.10 / ubuntu-latest sqlite
  • GitHub Check: tests 3.12 / ubuntu-latest sqlite
  • GitHub Check: tests 3.11 / ubuntu-latest sqlite
  • GitHub Check: tests 3.12 / ubuntu-latest postgres
  • GitHub Check: tests 3.11 / ubuntu-latest postgres
  • GitHub Check: tests 3.12 / ubuntu-latest mysql
  • GitHub Check: tests 3.11 / ubuntu-latest mysql
  • GitHub Check: check_docker_start_script
  • GitHub Check: build
  • GitHub Check: pixeebot[bot]
🔇 Additional comments (17)
spiffworkflow-backend/src/spiffworkflow_backend/services/process_instance_tmp_service.py (4)

36-36: LGTM! Good addition of the log_event parameter.

The new parameter provides flexibility to control logging behavior, which is particularly useful when logging needs to be handled elsewhere.


52-53: Initialize log_extras with task_id early for consistent logging.

Good practice to initialize the dictionary early with the base information that will be present in all logs.


89-94: Comprehensive error information capture.

The error information capture is thorough and includes all relevant details (trace, line number, offset, content) that would be helpful for debugging.


99-101: Well-structured conditional logging.

The conditional logging with the comment explaining why some events need to be logged elsewhere is clear and maintainable.
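
For reference, a self-contained sketch of the log_event flag pattern being praised here; only the flag name comes from the PR, and the real method on ProcessInstanceTmpService differs in signature and detail.

def add_event_to_process_instance(event_type: str,
                                  task_guid: str | None = None,
                                  log_event: bool = True) -> dict:
    log_extras: dict = {"task_id": task_guid}  # base info shared by all logs
    # (the event itself would be persisted to the database here)
    if log_event:
        # Some events are logged by the execution service instead, so those
        # callers pass log_event=False to avoid duplicate log entries.
        print(event_type, log_extras)  # stand-in for LoggingService.log_event
    return log_extras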

spiffworkflow-backend/src/spiffworkflow_backend/services/logging_service.py (2)

39-46: Well-structured event format with version control.

The new JSON structure follows good practices (see the example after this list):

  • Includes version for future compatibility
  • Uses standardized fields (type, id, source, timestamp)
  • Cleanly separates metadata from data
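
As an illustration, an event in this format might look roughly like the Python dict below; the key names are inferred from the review comments, and the exact schema is an assumption.

# Hypothetical event payload; key names are inferred from this review and
# may not match the implementation exactly.
event = {
    "version": "1.0",                              # version for compatibility
    "type": "spiff.event",                         # standardized type field
    "id": "8c7f0a52-6c5d-4f7e-9b1a-2d3e4f5a6b7c",  # per-event UUID
    "source": "spiffworkflow.org",                 # configurable event source
    "timestamp": "2025-01-09T19:59:00Z",
    "data": {"task_spec": "Activity_example", "milestone": "Started"},
}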

298-302: Clean and simplified logging interface.

The LoggingService class has been simplified with a cleaner interface that accepts a message and optional extras. This makes the service more flexible and easier to use.

spiffworkflow-backend/src/spiffworkflow_backend/config/default.py (1)

171-171: LGTM! Good addition of event stream source configuration.

The new configuration variable follows the existing pattern and provides a sensible default value.
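
Assuming the variable follows the usual environ.get pattern used elsewhere in config/default.py (an assumption; the actual line may differ slightly):

from os import environ

# Hypothetical rendering of the new setting, following the existing pattern.
SPIFFWORKFLOW_BACKEND_EVENT_STREAM_SOURCE = environ.get(
    "SPIFFWORKFLOW_BACKEND_EVENT_STREAM_SOURCE", default="spiffworkflow.org"
)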

spiffworkflow-backend/src/spiffworkflow_backend/services/workflow_execution_service.py (3)

44-44: LGTM! Well-organized imports.

The new imports are correctly placed and maintain the file's organization.

Also applies to: 47-47


332-343: Well-structured logging with comprehensive metadata.

The logging implementation captures all relevant task information and metadata in a clean, organized manner.


349-349: Good integration of milestone tracking with logging.

The milestone handling is properly integrated with the logging system and maintains the process instance state correctly.

Also applies to: 356-357

spiffworkflow-backend/src/spiffworkflow_backend/routes/process_instances_controller.py (1)

73-78: LGTM! Enhanced logging with structured context.

The changes improve logging by adding structured context including milestone, process model identifier, and process instance id. This makes the logs more consistent and easier to query.
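
A rough sketch of what this structured context might look like; the key names are inferred from the comment and the values are made up for illustration.

import logging

logger = logging.getLogger("spiff.event")

# Hypothetical values for illustration only.
log_extras = {
    "milestone": "Started",
    "process_model_identifier": "example-group/example-model",
    "process_instance_id": 42,
}
# Structured context rides along in `extra`, making the log easy to query.
logger.info("process_instance_create", extra=log_extras)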

spiffworkflow-backend/src/spiffworkflow_backend/services/process_model_service.py (1)

147-167: LGTM! Robust metadata extraction implementation.

The implementation (see the sketch after this list):

  • Handles missing/empty extraction paths gracefully
  • Uses path segments to traverse nested data structures
  • Returns empty dict if no extraction paths are configured
  • Safely handles missing data paths
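
A self-contained sketch of the traversal behavior just described; the real class method has a different signature, and the {"key", "path"} shape of the extraction entries plus the dotted-path convention are assumptions.

def extract_metadata(data: dict, extraction_paths: list[dict] | None) -> dict:
    metadata: dict = {}
    if not extraction_paths:  # missing/empty paths -> empty dict
        return metadata
    for entry in extraction_paths:
        current = data
        for segment in entry["path"].split("."):  # walk nested structures
            if not isinstance(current, dict) or segment not in current:
                current = None  # missing data path handled safely
                break
            current = current[segment]
        if current is not None:
            metadata[entry["key"]] = current
    return metadata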
spiffworkflow-backend/src/spiffworkflow_backend/services/task_service.py (1)

249-249: LGTM! Shifted logging responsibility to execution service.

The change helps avoid duplicate logging by deferring the logging responsibility to the execution service.

spiffworkflow-backend/src/spiffworkflow_backend/services/process_instance_processor.py (4)

996-1000: LGTM! Clean metadata extraction implementation.

The implementation cleanly delegates to ProcessModelService, maintaining separation of concerns.


Line range hint 1002-1014: LGTM! Robust metadata storage implementation.

The implementation (see the sketch after this list):

  • Handles both new and existing metadata records
  • Properly truncates values to fit database constraints
  • Uses appropriate database operations
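
A minimal, self-contained sketch of that upsert-plus-truncation behavior, using a dict as a stand-in for the database table; the 255-character limit is an assumption about the column constraint.

VALUE_MAX_LENGTH = 255  # assumed database column constraint

def store_metadata(table: dict[tuple[int, str], str],
                   process_instance_id: int, metadata: dict) -> None:
    for key, value in metadata.items():
        # The (instance id, key) pair identifies the record, so an existing
        # entry is updated in place and a missing one is created.
        table[(process_instance_id, key)] = str(value)[:VALUE_MAX_LENGTH]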

1170-1176: LGTM! Enhanced process completion logging.

The implementation adds valuable context to process completion logs including milestone and metadata information.


1821-1829: LGTM! Enhanced task completion logging.

The implementation adds comprehensive context to task completion logs including task details and metadata.

@coderabbitai bot (Contributor) left a comment

Actionable comments posted: 0

🧹 Nitpick comments (1)
spiffworkflow-backend/src/spiffworkflow_backend/services/logging_service.py (1)

96-101: Consider extracting property handling logic

The attribute processing logic could be more maintainable. Consider extracting it into a dedicated method.

-            for attr in properties:
-                if hasattr(record, attr):
-                    data[attr] = getattr(record, attr)
-                    if not (data[attr] is None or isinstance(data[attr], dict)):
-                        data[attr] = str(data[attr])
-                record._spiff_data = data
+            def process_attributes(record: Any, properties: list[str], data: dict) -> dict:
+                for attr in properties:
+                    if hasattr(record, attr):
+                        value = getattr(record, attr)
+                        if not (value is None or isinstance(value, dict)):
+                            value = str(value)
+                        data[attr] = value
+                return data
+            
+            record._spiff_data = process_attributes(record, properties, data)
📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 5f89908 and 283db6a.

📒 Files selected for processing (1)
  • spiffworkflow-backend/src/spiffworkflow_backend/services/logging_service.py (3 hunks)
⏰ Context from checks skipped due to timeout of 90000ms (12)
  • GitHub Check: typeguard 3.11 / ubuntu-latest sqlite
  • GitHub Check: tests 3.11 / macos-latest sqlite
  • GitHub Check: tests 3.10 / ubuntu-latest sqlite
  • GitHub Check: tests 3.12 / ubuntu-latest sqlite
  • GitHub Check: tests 3.11 / ubuntu-latest sqlite
  • GitHub Check: tests 3.12 / ubuntu-latest postgres
  • GitHub Check: tests 3.11 / ubuntu-latest postgres
  • GitHub Check: tests 3.11 / ubuntu-latest mysql
  • GitHub Check: mypy 3.12 / ubuntu-latest
  • GitHub Check: safety 3.12 / ubuntu-latest
  • GitHub Check: check_docker_start_script
  • GitHub Check: build
🔇 Additional comments (3)
spiffworkflow-backend/src/spiffworkflow_backend/services/logging_service.py (3)

42-46: Well-structured event format with improved traceability!

The new event format follows standard event streaming patterns and improves traceability with:

  • Flatter structure for easier consumption
  • UUID for event correlation
  • Configurable source
  • Standard timestamp field

300-304: Verify all callers of log_event are updated

The simplified log_event method signature is a good improvement, but we need to ensure all existing callers are updated to use the new format.

Let's verify the usage:

✅ Verification successful

All log_event calls are correctly using the new format

All calls to LoggingService.log_event consistently use the new signature with an event type string and a properly constructed log_extras dictionary. No instances of outdated usage were found.

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Search for all calls to log_event to ensure they've been updated
echo "Searching for LoggingService.log_event usage..."
rg "LoggingService\.log_event" -B 2 -A 2

# Also check for potential missed imports
echo "Checking for logging service imports..."
rg "from.*logging_service.*import" -B 1 -A 1

Length of output: 6437


Script:

#!/bin/bash
# Get more context around the log_event calls to see the parameters
echo "Checking log_event call parameters..."
rg "LoggingService\.log_event\(" -B 5 -A 1

Length of output: 5030


80-95: Verify event type property mappings

The specialized handling for different event types (task, workflow, event) looks comprehensive. However, we should verify that all necessary properties are being captured for each type.

Let's verify the property coverage:

✅ Verification successful

Property mappings for event types are correctly implemented

The specialized handling for different event types (task, workflow, event) in the logging service is comprehensive and properly implemented. The test coverage confirms that:

  1. All defined properties are actively used in workflow execution
  2. Properties are correctly captured for different event types
  3. The logging system successfully tracks both detailed and simple logging modes

The property mappings align with the actual usage patterns in the codebase and provide complete coverage of necessary workflow state information.

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Search for usage of these event types to ensure we're capturing all relevant properties
echo "Searching for spiff.task usage..."
rg "spiff\.task" -B 2 -A 2

echo "Searching for spiff.workflow usage..."
rg "spiff\.workflow" -B 2 -A 2

echo "Searching for spiff.event usage..."
rg "spiff\.event" -B 2 -A 2

Length of output: 2629


Script:

#!/bin/bash
# Search for logger calls and property usage
echo "Searching for logger calls..."
rg "\.info\(|\.debug\(|\.warning\(|\.error\(" -B 3 -A 1 "src/spiffworkflow_backend"

echo "Searching for test files with spiff logging..."
fd "test.*\.py" | xargs rg "spiff\.(task|workflow|event)"

echo "Searching for property usage..."
rg "workflow_spec|task_spec|task_id|task_type|state|last_state_change|elapsed|parent|completed|success|bpmn_name|milestone|metadata|error_info" -B 2 -A 2 "src/spiffworkflow_backend"

Length of output: 702


Script:

#!/bin/bash
# First, let's find the correct paths
echo "Finding repository structure..."
fd -t d spiffworkflow_backend

echo "Finding all Python files..."
fd -e py

echo "Now search with correct paths..."
rg "\.info\(|\.debug\(|\.warning\(|\.error\(" -B 3 -A 1

echo "Searching for test files with spiff logging..."
fd -e py | xargs rg "spiff\.(task|workflow|event)" -B 2 -A 2

echo "Searching for property usage..."
rg "workflow_spec|task_spec|task_id|task_type|state|last_state_change|elapsed|parent|completed|success|bpmn_name|milestone|metadata|error_info" -B 2 -A 2

Length of output: 66422


Script:

#!/bin/bash
# Search for SpiffWorkflow related code and property assignments
echo "Searching for SpiffWorkflow task/workflow related code..."
rg "SpiffTask|SpiffWorkflow" -B 2 -A 2

echo "Searching for property assignments in logging context..."
rg "collect_log_extras|log_extras" -B 2 -A 2

echo "Searching for test files with logging assertions..."
fd "test_.*\.py" | xargs rg "spiff\.(task|workflow|event).*(workflow_spec|task_spec|task_id|task_type|state|last_state_change|elapsed|parent|completed|success|bpmn_name|milestone|metadata|error_info)"

Length of output: 69774


Script:

#!/bin/bash
# Search specifically in logging_service.py and related test files
echo "Searching logging service implementation..."
rg -B 5 -A 5 "def filter" spiffworkflow-backend/src/spiffworkflow_backend/services/logging_service.py

echo "Searching for test files related to logging..."
rg -l "test.*logging" spiffworkflow-backend/tests/spiffworkflow_backend/

echo "Searching for property assignments in logging tests..."
fd "test.*logging.*\.py" | xargs rg "assert.*spiff\.(task|workflow|event)"

Length of output: 1168


Script:

#!/bin/bash
# Look at the logging service test file content
echo "Examining logging service test file..."
cat spiffworkflow-backend/tests/spiffworkflow_backend/integration/test_logging_service.py

echo "Searching for actual log messages in tests..."
rg "spiff\.(task|workflow|event)" spiffworkflow-backend/tests/spiffworkflow_backend/integration/test_logging_service.py -B 2 -A 2

Length of output: 5787

@burnettk merged commit 092fe39 into main on Jan 9, 2025
21 checks passed
@burnettk deleted the feature/improved-event-logging branch on January 9, 2025 at 19:59