
feat(deps): update ghcr.io/streamingfast/firehose-ethereum docker tag to v2.9.3 #453

Open

wants to merge 1 commit into main from renovate/ghcr.io-streamingfast-firehose-ethereum-2.x
Conversation

graphops-renovate bot commented Jan 9, 2025

This PR contains the following updates:

| Package | Update | Change |
| ------- | ------ | ------ |
| ghcr.io/streamingfast/firehose-ethereum | minor | v2.6.7-geth-v1.13.15-fh2.4 -> v2.9.3-geth-v1.13.15-fh2.4 |
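
As a quick, optional sanity check of the new tag (the image path and tag are taken from the table above; the commands assume a local Docker CLI):

```bash
# Pull the updated image and confirm the tag resolves.
docker pull ghcr.io/streamingfast/firehose-ethereum:v2.9.3-geth-v1.13.15-fh2.4

# Optionally record the content digest for pinning.
docker inspect --format '{{index .RepoDigests 0}}' \
  ghcr.io/streamingfast/firehose-ethereum:v2.9.3-geth-v1.13.15-fh2.4
```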

[!WARNING]

Some dependencies could not be looked up. Check the Dependency Dashboard for more information.


Release Notes

streamingfast/firehose-ethereum (ghcr.io/streamingfast/firehose-ethereum)

v2.9.3

Compare Source

  • Fixed fireeth tools geth enforce-peers --once shorthand flag registration, which was colliding with fireeth tools -o (the shorthand for --output).

    This means the fireeth tools geth enforce-peers command no longer accepts -o for --once; if you were using it, replace it with --once (see the sketch below).
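
A minimal before/after sketch of that flag change (any other flags the command requires in your setup are omitted here):

```bash
# Before v2.9.3, -o was (incorrectly) usable as a shorthand for --once:
#   fireeth tools geth enforce-peers -o
# From v2.9.3 on, spell out --once; -o is no longer accepted by this command.
fireeth tools geth enforce-peers --once
```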

v2.9.2

Compare Source

  • Fixed substreams-tier2 not setting itself ready correctly on startup since v2.9.0.

  • Added support for --output=bytes mode, which prints the chain's specific Protobuf block as bytes. The encoding of the printed byte string is determined by --bytes-encoding (hex by default). See the sketch after this list.

  • Added back -o as shorthand for --output in firecore tools ... sub-commands.
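
A hedged example of the new output mode; the sub-command, store path and block number are placeholders, only the two flags come from the notes above:

```bash
# Print a block as the chain-specific Protobuf bytes, hex-encoded (the default).
# Replace the store URL and block number with values from your deployment.
fireeth tools print merged-blocks /data/merged-blocks 17000000 \
  --output=bytes \
  --bytes-encoding=hex
```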

v2.9.1

Compare Source

  • Add back grpc.health.v1.Health service to firehose and substreams-tier1 services (regression in 2.9.0); a quick way to verify it is sketched below
  • Give precedence to the tracing header X-Cloud-Trace-Context over Traceparent to prevent user systems' trace IDs from leaking past a GCP load-balancer
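
A hedged health-check sketch; grpcurl, the port and the reliance on server reflection are assumptions, only the grpc.health.v1.Health service name comes from the note above:

```bash
# Query the standard gRPC health service on the firehose endpoint.
# Adjust host/port to your deployment; -plaintext assumes no TLS,
# and grpcurl needs server reflection (or a protoset) to resolve the method.
grpcurl -plaintext localhost:10015 grpc.health.v1.Health/Check
```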

v2.9.0

Compare Source

Reader
  • Reader Node Manager HTTP API now accepts POST http://localhost:10011/v1/restart<?sync=true> to restart the underlying reader node binary sub-process. This is an alias for /v1/reload (see the curl sketch below).
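
For operators, a small sketch of calling that endpoint (host and port are taken from the example URL above):

```bash
# Restart the underlying reader node binary sub-process.
# ?sync=true is the optional synchronous variant shown in the note above.
curl -X POST "http://localhost:10011/v1/restart?sync=true"
```
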
Tools
  • Enhanced fireeth tools print merged-blocks with various small quality of life improvements:
    • Now accepts a block range instead of a single start block.
    • Passing a single block as the block range will print this single block alone.
    • Block range is now optional, defaulting to run until there are no more files to read.
    • It's possible to pass a merged blocks file directly, with or without an optional range.
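
A few hedged invocations illustrating those improvements; the store path, block numbers, range syntax and file name are placeholders to be checked against fireeth tools print merged-blocks --help:

```bash
# Print a range of blocks from a merged-blocks store.
fireeth tools print merged-blocks /data/merged-blocks 17000000:17000100

# Passing a single block prints that block alone.
fireeth tools print merged-blocks /data/merged-blocks 17000000

# No range: keep reading until there are no more files.
fireeth tools print merged-blocks /data/merged-blocks

# A merged-blocks file can also be passed directly (illustrative file name).
fireeth tools print merged-blocks /data/merged-blocks/0017000000.dbin.zst
```
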
Firehose

[!IMPORTANT]
This release will reject firehose connections from clients that don't support GZIP or ZSTD compression. Use --firehose-enforce-compression=false to keep previous behavior, then check the logs for incoming Substreams Blocks request logs with the value compressed: false to track users who are not using compressed HTTP connections.
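
If you need the previous behavior while tracking down uncompressed clients, a hedged sketch (only the flag and the compressed: false log value come from the note above; the start invocation and log location are illustrative):

```bash
# Keep accepting uncompressed firehose connections for now.
fireeth start firehose --firehose-enforce-compression=false

# Then watch the service logs for requests that are still uncompressed.
grep 'compressed: false' /var/log/fireeth/firehose.log
```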

[!IMPORTANT]
This release removes the old sf.firehose.v1 protocol (replaced by sf.firehose.v2 in 2022, this should not affect any reasonably recent client).

  • Add support for ConnectWeb firehose requests.
  • Always use gzip compression on firehose requests for clients that support it (instead of always answering with the same compression as the request).
Substreams
  • The substreams-tier1 app now has two new configuration flags named respectively substreams-tier1-active-requests-soft-limit and substreams-tier1-active-requests-hard-limit
    helping better load balance active requests across a pool of tier1 instances.

    The substreams-tier1-active-requests-soft-limit limits the number of active client requests that a tier1 accepts before it starts
    reporting itself as 'unready' on the health check endpoint. A limit of 0 or less means no limit.

    This is useful to load balance active requests more easily across a pool of tier1 instances. When an instance reaches the soft
    limit, it becomes unready from the load balancer's standpoint. The load balancer in turn removes it from the list
    of available instances, and new connections are routed to the remaining instances, spreading the load.

    The substreams-tier1-active-requests-hard-limit limits the number of active client requests that a tier1 accepts before
    rejecting incoming gRPC requests with the 'Unavailable' code and setting itself as unready. A limit of 0 or less means no limit.

    This is useful to prevent the tier1 from being overwhelmed by too many requests. Most clients auto-reconnect on the 'Unavailable' code,
    so they should end up on another tier1 instance, assuming you have proper auto-scaling of the number of instances available.
    A combined sketch of these flags follows this list.

  • The substreams-tier1 app now exposes a new Prometheus metric substreams_tier1_rejected_request_counter that tracks rejected
    requests. The counter is labelled by the gRPC/ConnectRPC returned code (ok and canceled are not considered rejected requests).

  • The substreams-tier2 app now exposes a new Prometheus metric substreams_tier2_rejected_request_counter that tracks rejected
    requests. The counter is labelled by the gRPC/ConnectRPC returned code (ok and canceled are not considered rejected requests).

  • Properly accept and compress responses with gzip for browser HTTP clients using ConnectWeb with Accept-Encoding header

  • Allow setting subscription channel max capacity via SOURCE_CHAN_SIZE env var (default: 100)
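
A combined, hedged sketch of the tier1 load-shedding flags, the subscription channel size and the new rejection metric (the app name, limit values, metrics port and log/metrics layout are assumptions; the flag, env var and metric names come from the notes above):

```bash
# Example limits: report unready at 50 active requests, hard-reject at 75.
# A value of 0 or less disables the corresponding limit.
SOURCE_CHAN_SIZE=200 fireeth start substreams-tier1 \
  --substreams-tier1-active-requests-soft-limit=50 \
  --substreams-tier1-active-requests-hard-limit=75

# Inspect the rejection counter on the Prometheus metrics endpoint
# (the metrics address is deployment-specific).
curl -s localhost:9102/metrics | grep substreams_tier1_rejected_request_counter
```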

v2.8.4

Compare Source

Substreams
  • Fix an issue preventing proper detection of gzip compression when multiple headers are set (ex: python grpc client)
  • Fix an issue preventing some tier2 requests on last-stage from correctly generating stores. This could lead to some missing "backfilling" jobs and slower time to first block on reconnection.
  • Fix a thread leak on cursor resolution resulting in bad counter for active connections
  • Add support for zstd encoding on server

v2.8.3

Compare Source

[!NOTE]
This release will reject connections from clients that don't support GZIP compression. Use --substreams-tier1-enforce-compression=false to keep previous behavior, then check the logs for incoming Substreams Blocks request logs with the value compressed: false to track users who are not using compressed HTTP connections.

  • Fix broken tools poller command in v2.8.2

v2.8.2

Compare Source

[!WARNING]
Do NOT use this version with tools poller: a flag issue prevents the poller from starting up. We recommend upgrading to v2.8.3 ASAP.

[!NOTE]
This release will reject connections from clients that don't support GZIP compression. Use --substreams-tier1-enforce-compression=false to keep previous behavior, then check the logs for incoming Substreams Blocks request logs with the value compressed: false to track users who are not using compressed HTTP connections.

  • Bump firehose-core to v1.6.8
  • Substreams: add --substreams-tier1-enforce-compression to reject connections from clients that do not support GZIP compression
  • Substreams performance: reduced the number of mallocs (patching some third-party libraries)
  • Substreams performance: removed heavy tracing (that wasn't exposed to the client)
  • Fixed --reader-node-line-buffer-size flag that was not being respected in reader-node-stdin app
  • poller: add --max-block-fetch-duration

v2.8.1

Compare Source

  • firehose-grpc-listen-addr and substreams-tier1-grpc-listen-addr flags now accept comma-separated addresses (allows listening as plaintext and snakeoil-ssl at the same time, or on specific IP addresses); a sketch follows below
  • rpc-poller: fix fetching the first block on an endpoint (was not following the cursor, failing unnecessarily on non-archive nodes)
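
A hedged example of the comma-separated listen addresses (the addresses and ports are placeholders; the exact marker syntax for snakeoil-ssl should be checked against fireeth start --help):

```bash
# App selection syntax depends on your deployment; the point is that each
# listen-addr flag now takes a comma-separated list of addresses.
fireeth start firehose substreams-tier1 \
  --firehose-grpc-listen-addr="0.0.0.0:10015,127.0.0.1:13015" \
  --substreams-tier1-grpc-listen-addr="0.0.0.0:10016,127.0.0.1:13016"
```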

v2.8.0

Compare Source

  • Adding requests_hash, which was introduced by EIP-7685
  • Adding a nil safety check on the CombinedFilter and when looping over the transaction_trace receipts
  • Bump substreams and dmetering to the latest versions, adding the outputModuleHash to the metering sender.

v2.7.5

Compare Source

Substreams fixes

[!NOTE]
All caches for stores using the updatePolicy set_sum (added in substreams v1.7.0) and modules that depend on them will need to be deleted, since they may contain bad data.

  • Fix bad data in stores using set_sum policy: squashing of store segments incorrectly "summed" some values that should have been "set" if the last event for a key on this segment was a "sum"
  • Fix small bug making some requests in development-mode slow to start (when starting close to the module initialBlock with a store that doesn't start on a boundary)

v2.7.4

Compare Source

Substreams fixes
  • Fixed an(other) issue where multiple stores running on the same stage with different initialBlocks would fail to progress (and hang)

v2.7.3

Compare Source

Substreams fixes
  • Fix bug where some invalid cursors may be sent (with 'LIB' being above the block being sent) and add safeguards/logging if the bug appears again
  • Fix panic in the whole tier2 process when stores go above the size limit while being read from "kvops" cached changes
  • Fix "cannot resolve 'old cursor' from files in passthrough mode" error on some requests with an old cursor
  • Fix handling of 'special case' substreams module with only "params" as its input: should not skip this execution (used in graph-node for head tracking)
    -> empty files in module cache with hash d3b1920483180cbcd2fd10abcabbee431146f4c8 should be deleted for consistency

v2.7.2

Compare Source

Core
  • [Operator] The flag --advertise-block-id-encoding now accepts shorter forms: hex, base64, etc. The older, longer form BLOCK_ID_ENCODING_HEX is still supported, but we suggest using the shorter form from now on.
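
The two accepted spellings side by side (the flag values come from the note above; the containing start invocation is illustrative):

```bash
# Older, still-supported long form:
fireeth start firehose --advertise-block-id-encoding=BLOCK_ID_ENCODING_HEX

# Shorter form suggested from v2.7.2 on:
fireeth start firehose --advertise-block-id-encoding=hex
```
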
Substreams v1.10.2

[!NOTE]
Since a bug that affected substreams with "skipping blocks" was corrected in this release, any previously produced substreams cache should be considered as possibly corrupted and eventually be replaced

  • Substreams: fix bad handling of modules with multiple inputs when only one of them is filtered, resulting in bad outputs in production-mode.
  • Substreams: fix stalling on some substreams with stores and mappers with different start block numbers on the same stage
  • Substreams: fix 'development mode' and LIVE mode executing some modules that should be skipped

v2.7.1

Compare Source

  • Bump substreams to v1.10.0
  • Bump firehose-core to v1.6.1

v2.7.0

Compare Source

  • Add sf.firehose.v2.EndpointInfo/Info service on Firehose and sf.substreams.rpc.v2.EndpointInfo/Info to Substreams endpoints. This involves the following new flags:

    • advertise-chain-name Canonical name of the chain according to https://thegraph.com/docs/en/developing/supported-networks/ (required, unless it is in the "well-known" list)
    • advertise-chain-aliases Alternate names for that chain (optional)
    • advertise-block-features List of features describing the blocks (optional)
    • ignore-advertise-validation Runtime checks of chain name/features/encoding against the genesis block will no longer cause the server to wait or fail.
  • Add a well-known list of chains (hard-coded in wellknown/chains.go to help automatically determine the 'advertise' flag values). Users are encouraged to propose Pull Requests to add more chains to the list.

  • The new info endpoint adds a mandatory fetch of the first streamable block on startup, failing if no block can be fetched after 3 minutes while you are running the firehose or substreams-tier1 service.
    It validates the following on a well-known chain:

    • if the first-streamable-block Num/ID matches the genesis block of a known chain, e.g. matic, it will refuse any value for advertise-chain-name other than matic or one of its aliases (polygon)
    • If the first-streamable-block does not match any known chain, it will require the advertise-chain-name to be non-empty
  • Substreams: add --common-tmp-dir flag to activate local caching of pre-compiled WASM modules through wazero v1.8.0 feature (performance improvement on WASM compilation)

  • Substreams: revert module hash calculation from v2.6.5, when using a non-zero firstStreamableBlock. Hashes will now be the same even if the chain's first streamable block affects the initialBlock of a module.

  • Substreams: add --substreams-block-execution-timeout flag (default 3 minutes) to prevent requests stalling. Timeout errors are returned to the client who can decide to retry.
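
A hedged example tying these v2.7.0 flags together (the chain name, alias, tmp dir, timeout value and duration syntax are illustrative; the flag names are the ones listed above):

```bash
# App selection syntax depends on your deployment; the flags are the new ones.
fireeth start firehose substreams-tier1 \
  --advertise-chain-name=matic \
  --advertise-chain-aliases=polygon \
  --common-tmp-dir=/var/tmp/fireeth \
  --substreams-block-execution-timeout=3m
```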


Configuration

📅 Schedule: Branch creation - At any time (no schedule defined), Automerge - At any time (no schedule defined).

🚦 Automerge: Disabled by config. Please merge this manually once you are satisfied.

Rebasing: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox.

🔕 Ignore: Close this PR and you won't be reminded about this update again.


  • If you want to rebase/retry this PR, check this box

This PR has been generated by Renovate Bot.

coderabbitai bot commented Jan 9, 2025

Important

Review skipped

Ignore keyword(s) in the title.

⛔ Ignored keywords (1)
  • feat(deps)

Please check the settings in the CodeRabbit UI or the .coderabbit.yaml file in this repository. To trigger a single review, invoke the @coderabbitai review command.

You can disable this status message by setting the reviews.review_status to false in the CodeRabbit configuration file.


graphops-renovate bot force-pushed the renovate/ghcr.io-streamingfast-firehose-ethereum-2.x branch from e3f0cb8 to a415862 on January 10, 2025 16:05
graphops-renovate bot changed the title from "feat(deps): update ghcr.io/streamingfast/firehose-ethereum docker tag to v2.8.3" to "feat(deps): update ghcr.io/streamingfast/firehose-ethereum docker tag to v2.8.4" on Jan 10, 2025
graphops-renovate bot force-pushed the renovate/ghcr.io-streamingfast-firehose-ethereum-2.x branch from a415862 to e996ca0 on January 16, 2025 22:05
graphops-renovate bot changed the title from "feat(deps): update ghcr.io/streamingfast/firehose-ethereum docker tag to v2.8.4" to "feat(deps): update ghcr.io/streamingfast/firehose-ethereum docker tag to v2.9.0" on Jan 16, 2025
graphops-renovate bot force-pushed the renovate/ghcr.io-streamingfast-firehose-ethereum-2.x branch from e996ca0 to 48b45a9 on January 20, 2025 19:35
graphops-renovate bot changed the title from "feat(deps): update ghcr.io/streamingfast/firehose-ethereum docker tag to v2.9.0" to "feat(deps): update ghcr.io/streamingfast/firehose-ethereum docker tag to v2.9.1" on Jan 20, 2025
… to v2.9.3

| datasource | package                                 | from   | to     |
| ---------- | --------------------------------------- | ------ | ------ |
| docker     | ghcr.io/streamingfast/firehose-ethereum | v2.6.7 | v2.9.3 |
graphops-renovate bot force-pushed the renovate/ghcr.io-streamingfast-firehose-ethereum-2.x branch from 48b45a9 to 226c196 on January 21, 2025 20:31
graphops-renovate bot changed the title from "feat(deps): update ghcr.io/streamingfast/firehose-ethereum docker tag to v2.9.1" to "feat(deps): update ghcr.io/streamingfast/firehose-ethereum docker tag to v2.9.3" on Jan 21, 2025