feat(gossipsub): bound send-queue #4756
Conversation
Great start and looks surprisingly simple :)
Two comments from my end. Would be good to get @mxinden and perhaps @AgeManning's input.
I am also not sure about the size of this queue. The previous 16 was purely an optimisation: the first 16 entries would be stored on the stack. Now it is a hard limit. However, until we have backpressure all the way to the application, no number is going to be correct here.
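(For context, a rough sketch of the distinction being described, assuming the queue was previously a `SmallVec` with inline capacity 16; the names, element type, and cap constant below are illustrative, not the actual gossipsub fields.)

```rust
use smallvec::SmallVec;

// Placeholder for the queued message type; the real queue holds gossipsub
// handler messages. Names here are illustrative only.
type Message = u64;

const MAX_SEND_QUEUE_LEN: usize = 16; // hypothetical hard cap

fn main() {
    // Before: 16 is only an inline-capacity hint. The first 16 entries live on
    // the stack; pushing a 17th simply spills the queue to the heap.
    let mut send_queue: SmallVec<[Message; 16]> = SmallVec::new();
    send_queue.push(1); // never refused, the queue can grow without bound

    // After (roughly what a hard limit means): once full, a new message has to
    // be dropped or an error returned to the caller.
    let mut bounded_queue: Vec<Message> = Vec::with_capacity(MAX_SEND_QUEUE_LEN);
    if bounded_queue.len() < MAX_SEND_QUEUE_LEN {
        bounded_queue.push(1);
    } else {
        // drop the message, or surface the failure to the application
    }
}
```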
Sorry, but this doesn't seem like an appropriate solution. Dropped messages can be simple heartbeats or important publish messages. There is of course an understanding of unreliability in gossipsub, but simply dropping a message is IMO a step too far. At a minimum, a publish message should return an error that the application can handle in order to retry later. A better (though still non-ideal) approach is to at least differentiate drop policies between control and payload messages. Note that in any of these cases, dropping messages will likely have an impact on the node's score as seen by other peers.
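A minimal sketch of the differentiated drop policy suggested above, under the assumption of a simplified control/payload split; the types, names, and behaviour are illustrative only, not the actual gossipsub implementation:

```rust
use std::collections::VecDeque;

// Illustrative classification; real gossipsub RPCs are richer than this.
enum QueuedMessage {
    Control,          // e.g. GRAFT / PRUNE / IHAVE / IWANT
    Publish(Vec<u8>), // application payload
}

// Returned so the application can decide whether and when to retry a publish.
enum PushError {
    QueueFull(QueuedMessage),
}

struct SendQueue {
    queue: VecDeque<QueuedMessage>,
    max_len: usize,
}

impl SendQueue {
    fn push(&mut self, msg: QueuedMessage) -> Result<(), PushError> {
        if self.queue.len() < self.max_len {
            self.queue.push_back(msg);
            return Ok(());
        }
        match msg {
            // Control messages are treated as best-effort: dropping one when
            // the queue is full is tolerated (it may still hurt peer scoring).
            QueuedMessage::Control => Ok(()),
            // Publish messages are never silently dropped; the error hands the
            // message back to the caller for a later retry.
            publish @ QueuedMessage::Publish(_) => Err(PushError::QueueFull(publish)),
        }
    }
}
```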
Sorry to ping you directly, @thomaseizinger, but I would like to get your opinion on the reservation expressed above.
I understand where the author is coming from here, and I wonder if there might be a solution other than dropping messages. For example, here messages from inbound streams are prioritized before processing the send_queue. I could be viewing it wrongly, but I imagine getting stuck in a loop of notifying the behaviour of incoming messages, which in turn generates new response messages. This would grow the send queue while doing little to deplete it. If that inference is correct, then I think we should either always process the send queue first, or stop processing the inbound stream if the send queue is at capacity. Any thoughts on this?
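A rough sketch of the poll ordering being proposed, i.e. drain pending outbound work before accepting new inbound work, and stop reading once the send queue is full. This is a simplified stand-in, not the real rust-libp2p `ConnectionHandler` API:

```rust
use std::collections::VecDeque;
use std::task::{Context, Poll};

struct OutboundMessage;
struct InboundMessage;

enum Event {
    Send(OutboundMessage),
    Received(InboundMessage),
}

// Simplified stand-in for the gossipsub handler; field names are illustrative.
struct Handler {
    send_queue: VecDeque<OutboundMessage>,
    max_send_queue_len: usize,
}

impl Handler {
    fn poll(&mut self, cx: &mut Context<'_>) -> Poll<Event> {
        // 1. Prioritize local work: flush the send queue first so inbound
        //    traffic cannot starve it.
        if let Some(msg) = self.send_queue.pop_front() {
            return Poll::Ready(Event::Send(msg));
        }

        // 2. Only accept new inbound work while there is room to respond.
        //    If the send queue is at capacity, reading more inbound messages
        //    would only generate responses we cannot queue.
        if self.send_queue.len() < self.max_send_queue_len {
            if let Poll::Ready(msg) = self.poll_inbound(cx) {
                return Poll::Ready(Event::Received(msg));
            }
        }

        Poll::Pending
    }

    fn poll_inbound(&mut self, _cx: &mut Context<'_>) -> Poll<InboundMessage> {
        // Placeholder: the real handler reads from the negotiated substream.
        Poll::Pending
    }
}
```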
Good catch, @0xcrust. Thank you! This also violates our guideline of prioritizing local work over new work coming from a remote. I assume prioritizing …
I suggest we move the discussion back to the issue. I wrote up a summary and alternative solution proposals in #4667 (comment). @0xcrust, I am sorry for the missing guidance. My bad. In case you want to get involved in the discussion, great! In case you want to tackle a different …
Sure, I can do that!
This pull request has merge conflicts. Could you please resolve them, @0xcrust? 🙏
Closing here in favor of the latest discussions in #4667.
Description
Resolves #4667.
Notes & open questions
I decided to go with the first suggestion in #4667 as it appears simpler. The other alternative I considered was to communicate limits from the handler to the behaviour; however, I'm skeptical that `Behaviour::send_message` should error when handler limits are reached.

Change checklist