telemetry: slog sender with block chunking #10710

Open

wants to merge 13 commits into base: master

Conversation

@usmanmani1122 (Contributor) commented Dec 17, 2024

closes: #10779
refs: #xxxx

Description

This PR adds a slog sender that writes block-level slog files, with built-in compression support. The generated slog files use the following name format:

slogfile_{identifier}_{TIMESTAMP}.gz

where identifier can be one of the following (a naming sketch follows the list):

  • init
  • bootstrap
  • upgrade
  • block_${blockHeight}
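For illustration only, a minimal sketch of how such a file name and its compressed stream could be assembled; the helper names (`makeSlogFileName`, `openCompressedSlogFile`) and the timestamp source are hypothetical, not the PR's actual code:

```js
import { createWriteStream } from 'node:fs';
import { createGzip } from 'node:zlib';

// Hypothetical helpers sketching the naming scheme and built-in compression.
// identifier: 'init' | 'bootstrap' | 'upgrade' | `block_${blockHeight}`
const makeSlogFileName = identifier => `slogfile_${identifier}_${Date.now()}.gz`;

const openCompressedSlogFile = (stateDir, identifier) => {
  const gzip = createGzip();
  // Gzipped slog lines flow straight into the per-chunk file.
  gzip.pipe(createWriteStream(`${stateDir}/${makeSlogFileName(identifier)}`));
  return gzip; // write one JSON slog entry per line into the returned stream
};

// e.g. openCompressedSlogFile(stateDir, `block_${blockHeight}`)
```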

Security Considerations

None

Scaling Considerations

None

Documentation Considerations

None

Testing Considerations

None

Upgrade Considerations

None

@usmanmani1122 usmanmani1122 self-assigned this Dec 17, 2024
@mhofman mhofman self-requested a review January 4, 2025 02:57
cloudflare-workers-and-pages bot commented Jan 21, 2025

Deploying agoric-sdk with Cloudflare Pages

Latest commit: 6e1ebde
Status: ✅ Deploy successful!
Preview URL: https://5bd8a177.agoric-sdk.pages.dev
Branch Preview URL: https://usman-block-slogger.agoric-sdk.pages.dev

@mhofman (Member) left a comment

Let's sync up on how to handle file streams and the intrinsic nature of this operation.


  const stream = handle
-   ? createWriteStream(noPath, { fd: handle.fd })
+   ? createWriteStream(noPath, { autoClose: false, fd: handle.fd })
mhofman (Member):

Why set autoClose: false? AFAIK we don't sufficiently monitor the stream's error/finish events to correctly call close on it.

-  stream.close?.();
+  if (handle) {
+    await new Promise(resolve => stream.end(() => resolve(null)));
mhofman (Member):

Switching from close to end with autoClose: false will result in the stream not getting destroyed. Is that intended? I suppose we explicitly close the handle below.

mhofman (Member):

I'm not sure what motivates the changes here, but if we're gonna change this, I'm wondering if it might not be better to use the "new" filehandle.createWriteStream API that exists since v16.11.

Also, depending on the reasoning for no longer auto-closing the handle, we likely want to use the new flush option.
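For reference, a minimal sketch of what the suggested filehandle.createWriteStream approach could look like, assuming `handle` is a `fs.promises.FileHandle` opened by the caller and `filePath` is the path-based fallback (both hypothetical here); the explicit `sync()`/`close()` stands in for the newer `flush` stream option:

```js
import { createWriteStream } from 'node:fs';
import { finished } from 'node:stream/promises';

// Sketch only: let the FileHandle own the fd instead of passing { fd } with a fake path.
const makeStream = (handle, filePath) =>
  handle
    ? // autoClose: false keeps the handle open so we can flush/close it explicitly.
      handle.createWriteStream({ autoClose: false })
    : createWriteStream(filePath);

const shutdownStream = async (stream, handle) => {
  stream.end();
  await finished(stream); // surfaces any stream 'error' as a rejection
  if (handle) {
    await handle.sync(); // explicit flush; newer Node also offers a `flush` option
    await handle.close();
  }
};
```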

*/
export const makeSlogSender = async options => {
  const { CHAIN_ID, CONTEXTUAL_BLOCK_SLOGS } = options.env || {};
  if (!(options.stateDir || CONTEXTUAL_BLOCK_SLOGS))
mhofman (Member):

If both are needed, this condition isn't correct

Suggested change:
-  if (!(options.stateDir || CONTEXTUAL_BLOCK_SLOGS))
+  if (!options.stateDir || !CONTEXTUAL_BLOCK_SLOGS)

usmanmani1122 (Contributor, Author):

No, at least one of them is needed
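For clarity, the two guards differ by De Morgan's law: `!(a || b)` is `!a && !b`, so the existing check rejects only when both `options.stateDir` and `CONTEXTUAL_BLOCK_SLOGS` are missing ("at least one required"), while the suggested check rejects when either is missing ("both required"). A sketch, with hypothetical error messages:

```js
// Existing guard: throws only when *neither* is provided (at least one required).
if (!(options.stateDir || CONTEXTUAL_BLOCK_SLOGS))
  throw Error('provide stateDir or CONTEXTUAL_BLOCK_SLOGS'); // hypothetical message

// Suggested guard: throws when *either* is missing (both required), a stricter rule.
if (!options.stateDir || !CONTEXTUAL_BLOCK_SLOGS)
  throw Error('provide both stateDir and CONTEXTUAL_BLOCK_SLOGS'); // hypothetical message
```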

  {
    'chain-id': CHAIN_ID,
  },
  persistenceUtils,
mhofman (Member):

Were we not passing the persistence utils for the file version? I totally missed that. Let's extract this fix into a separate PR.

Comment on lines 33 to 36
const contextualSlogProcessor = makeContextualSlogProcessor(
  { 'chain-id': CHAIN_ID },
  persistenceUtils,
);
mhofman (Member):

Why are we using a contextual slog? I'm not sure we care about contextualizing, as the goal of this tool is mostly archiving, not so much querying. Most tools we have currently work against the original slog events.

/**
 * @param {import('./context-aware-slog.js').Slog} slog
 */
const slogSender = async slog => {
mhofman (Member):

Slog senders cannot be async. Please use an internal queue if an async implementation is actually needed (and make sure that forceFlush respects this queue). The flight recorder has a trivial example.

I'm also not a fan of all the console.error logging on errors. I think we need to let errors go back up to the result promise, and for flush to be able to report an aggregation of all errors. An error happening for one event should not prevent us from attempting to write another event.
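A minimal sketch of the queueing pattern being asked for, assuming a hypothetical async `writeSlogEntry` that does the actual write; this is not the PR's code, just an illustration of a synchronous sender with an internal queue, a forceFlush that drains it, and aggregated error reporting:

```js
// `writeSlogEntry` is a hypothetical async writer (file/gzip write).
const makeQueuedSlogSender = writeSlogEntry => {
  let queue = Promise.resolve();
  const errors = [];

  // The sender itself is synchronous: it only extends the internal queue.
  const slogSender = slog => {
    queue = queue
      .then(() => writeSlogEntry(slog))
      // Record the failure instead of throwing, so one bad event does not
      // prevent later events from being written.
      .catch(err => errors.push(err));
  };

  // forceFlush waits for the queue and reports all accumulated errors at once.
  slogSender.forceFlush = async () => {
    await queue;
    if (errors.length) {
      throw new AggregateError(errors.splice(0), 'slog write failures');
    }
  };

  return slogSender;
};
```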

@usmanmani1122 (Contributor, Author) commented Feb 10, 2025

@mhofman I have reverted the other changes. I will fix the absent persistenceUtils in the context-aware-slog-file slogger separately. I have also made the slogSender synchronous, and the file sender now uses createWriteStream. Please give it another look; I will add the compression support in the meantime.

@usmanmani1122 usmanmani1122 changed the title Testing - Block Slogs telemetry: slog sender with block chunking Feb 12, 2025
@usmanmani1122 usmanmani1122 marked this pull request as ready for review February 12, 2025 05:35
@usmanmani1122 usmanmani1122 requested a review from a team as a code owner February 12, 2025 05:35