
llamaindex support #1758

Merged: 2 commits merged into main from lawy/llamaindex-support on Feb 12, 2025

Conversation

LawyZheng (Collaborator) commented Feb 11, 2025

Important

Add llama-index support with new local and remote task execution capabilities and refactor task schemas.

  • Dependencies:
    • Add llama-index version ^0.12.16 to pyproject.toml.
  • New Features:
    • Add local.py in skyvern/llama_index to support local task execution using FunctionTool for TaskV1Request and TaskV2Request.
    • Add remote.py in skyvern/llama_index to support remote task execution using FunctionTool for RunTaskV1Schema, RunTaskV2Schema, and GetTaskSchema.
  • Refactoring:
    • Move RunTaskV1Schema, RunTaskV2Schema, and GetTaskSchema from skyvern/langchain/remote.py to skyvern/agent/parameter.py.
    • Update skyvern/langchain/remote.py to use the new schemas for task execution.

This description was created by Ellipsis for d0df598. It will automatically update as commits are pushed.
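
For readers new to the integration, here is a hedged usage sketch of how the new FunctionTool wrappers might be consumed from a llama-index agent. None of this code is in the PR: the exported tool name, the choice of ReActAgent, and the OpenAI model are all assumptions for illustration.

```python
# Hedged usage sketch only -- the exported tool name and the agent/LLM choices are assumptions.
from llama_index.core.agent import ReActAgent
from llama_index.llms.openai import OpenAI

# Hypothetical import: the PR adds skyvern/llama_index/local.py, but the exact
# exported symbol names are not shown on this page.
from skyvern.llama_index.local import run_task_v1_tool

agent = ReActAgent.from_tools(
    tools=[run_task_v1_tool],
    llm=OpenAI(model="gpt-4o"),
    verbose=True,
)
response = agent.chat("Use Skyvern to check the cheapest flight from SFO to JFK next week.")
print(response)
```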

@ellipsis-dev (bot, Contributor) left a comment

👍 Looks good to me! Reviewed everything up to d0df598 in 2 minutes and 31 seconds

More details
  • Looked at 185 lines of code in 6 files
  • Skipped 1 file when reviewing.
  • Skipped posting 10 drafted comments based on config settings.
1. pyproject.toml:57
  • Draft comment:
    Dependency addition: Verify that the version '^0.12.16' of llama-index is compatible with the rest of the project and will not introduce conflicts.
  • Reason this comment was not posted:
    Comment did not seem useful. Confidence is useful = 0% <= threshold 50%
    This comment is asking the PR author to verify compatibility of a dependency version, which is against the rules. It does not provide a specific suggestion or point out a specific issue with the code.
2. skyvern/agent/parameter.py:7
  • Draft comment:
    Ensure consistency in attribute naming; TaskV1Request uses 'max_steps' while TaskV2Request uses 'max_iterations'. Confirm both are intentional.
  • Reason this comment was not posted:
    Comment was not on a location in the diff, so it can't be submitted as a review comment.
3. skyvern/langchain/remote.py:12
  • Draft comment:
    Using TaskV1Request.model_validate(task) is appropriate for pydantic v2. Confirm that all incoming data adheres to the expected structure.
  • Reason this comment was not posted:
    Comment did not seem useful. Confidence is useful = 0% <= threshold 50%
    The comment is informative and asks the author to confirm that all incoming data adheres to the expected structure, which violates the rule against asking for confirmation of intention or behavior. It does not provide a specific code suggestion or ask for a specific test to be written.
4. skyvern/llama_index/local.py:7
  • Draft comment:
    Consider replacing the async lambda with a named async function for better debuggability and clearer error handling.
  • Reason this comment was not posted:
    Decided after close inspection that this draft comment was likely wrong and/or not actionable: usefulness confidence = 20% vs. threshold = 50%
    While named functions can improve stack traces and debugging, these lambdas are extremely simple pass-through functions. They don't contain any real logic or error handling that would benefit from being broken out. The lambdas make the tool definitions more concise and keep related code together. The suggestion feels like over-engineering for this case.
    Named functions could make stack traces clearer if errors occur. The code might grow more complex in the future where having separate functions would be beneficial.
    The current lambdas are so simple that stack traces would still be clear - errors would likely come from the Agent class methods. Future complexity should be handled when/if it actually occurs.
    Delete this comment as it suggests a refactor that would add complexity without providing meaningful benefits given the current simple pass-through nature of the lambdas.
5. skyvern/llama_index/remote.py:7
  • Draft comment:
    As with the local version, using async lambdas can obscure stack traces. Consider using named functions to improve maintainability.
  • Reason this comment was not posted:
    Marked as duplicate.
6. pyproject.toml:55
  • Draft comment:
    Added dependency for llama-index looks good. Ensure its version (^0.12.16) is fully compatible with the imported modules (e.g. FunctionTool from llama_index.core.tools).
  • Reason this comment was not posted:
    Confidence changes required: 0% <= threshold 50%
    None
7. skyvern/agent/parameter.py:7
  • Draft comment:
    Consider standardizing field names between TaskV1Request (max_steps) and TaskV2Request (max_iterations) if they denote similar limits.
  • Reason this comment was not posted:
    Confidence changes required: 33% <= threshold 50%
    None
8. skyvern/langchain/remote.py:16
  • Draft comment:
    Consider adding error handling around TaskV1Request.model_validate(task) (and similarly for TaskV2) to give clearer feedback on input validation failures.
  • Reason this comment was not posted:
    Confidence changes required: 33% <= threshold 50%
    None
9. skyvern/llama_index/local.py:7
  • Draft comment:
    Agent() is instantiated within the lambda for each call. If Agent holds state or is expensive to create, consider reusing an instance.
  • Reason this comment was not posted:
    Confidence changes required: 33% <= threshold 50%
    None
10. skyvern/llama_index/remote.py:7
  • Draft comment:
    Default endpoint value 'https://api.skyvern.com' is repeated in lambdas. Consider centralizing it to avoid duplication and potential mismatches.
  • Reason this comment was not posted:
    Confidence changes required: 33% <= threshold 50%
    None
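
To make the trade-off in draft comment 4 concrete, here is a hypothetical side-by-side. Neither form is the actual Skyvern code; the Agent import path and its run_task method are assumptions.

```python
# Hypothetical comparison for draft comment 4; the Agent import path and method name are assumed.
from llama_index.core.tools import FunctionTool

from skyvern.agent import Agent  # assumed import path
from skyvern.agent.parameter import TaskV1Request

# Pass-through style: a plain lambda returning a coroutine that the tool then awaits.
# Concise, but failures show up in tracebacks as "<lambda>".
tool_inline = FunctionTool.from_defaults(
    async_fn=lambda task: Agent().run_task(TaskV1Request.model_validate(task)),  # assumed method
    name="run_task_v1",
    description="Run a Skyvern v1 task locally.",
)


# Named-coroutine style: one extra definition, but a readable frame in stack traces.
async def run_task_v1(task: dict) -> dict:
    return await Agent().run_task(TaskV1Request.model_validate(task))  # assumed method


tool_named = FunctionTool.from_defaults(
    async_fn=run_task_v1,
    name="run_task_v1",
    description="Run a Skyvern v1 task locally.",
)
```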
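
Draft comments 2 and 7 both point at the same asymmetry: the two request models cap work with differently named fields. A stripped-down sketch, where only the two limit names come from the review and every other detail (fields, defaults) is assumed:

```python
# Stripped-down sketch for draft comments 2 and 7; only max_steps/max_iterations come from the review.
from pydantic import BaseModel


class TaskV1Request(BaseModel):
    url: str             # assumed field
    max_steps: int = 10  # v1 caps the number of agent steps (default assumed)


class TaskV2Request(BaseModel):
    url: str                  # assumed field
    max_iterations: int = 10  # v2 caps the number of iterations (default assumed)
```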
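
The guard suggested in draft comment 8 could be as small as the following, assuming pydantic v2 (which the existing model_validate call implies). The parse_task_v1 wrapper name is hypothetical.

```python
# Minimal sketch of the validation guard from draft comment 8; parse_task_v1 is a hypothetical helper.
from pydantic import ValidationError

from skyvern.agent.parameter import TaskV1Request


def parse_task_v1(task: dict) -> TaskV1Request:
    try:
        return TaskV1Request.model_validate(task)
    except ValidationError as exc:
        # Report which fields failed instead of letting a raw pydantic traceback reach the tool caller.
        raise ValueError(f"Invalid TaskV1Request payload: {exc.errors()}") from exc
```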
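
Draft comment 9's suggestion, reusing one Agent rather than constructing it per call, would look roughly like this. Whether Agent is actually expensive to build or safe to share across calls is exactly the open question the comment raises.

```python
# Sketch for draft comment 9; the import path and run_task method are assumed, and sharing Agent is unverified.
from skyvern.agent import Agent
from skyvern.agent.parameter import TaskV1Request

_agent = Agent()  # constructed once at import time instead of inside every tool invocation


async def run_task_v1(task: dict) -> dict:
    return await _agent.run_task(TaskV1Request.model_validate(task))
```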
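
Draft comment 10's de-duplication amounts to hoisting the repeated URL into a single module-level constant. The constant and function names here are hypothetical; only the URL itself appears in the review.

```python
# Sketch for draft comment 10; the names are hypothetical, only the URL comes from the review.
DEFAULT_SKYVERN_API_ENDPOINT = "https://api.skyvern.com"


async def run_task_v1_remote(task: dict, api_key: str, endpoint: str = DEFAULT_SKYVERN_API_ENDPOINT) -> dict:
    """Submit a v1 task to the remote Skyvern API (body elided in this sketch)."""
    ...
```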

Workflow ID: wflow_7jzRmxThgMwDSFQv


You can customize Ellipsis with 👍 / 👎 feedback, review rules, user-specific overrides, quiet mode, and more.

LawyZheng force-pushed the lawy/llamaindex-support branch from d0df598 to 522c5d7 on February 12, 2025 at 03:16
LawyZheng merged commit fbf1473 into main on Feb 12, 2025
6 checks passed
LawyZheng deleted the lawy/llamaindex-support branch on February 12, 2025 at 03:17