feat: implement search_transactions_before and search_transactions_after #13621
base: main
Conversation
@mattsse I have some questions. In the issue you wrote that we should enforce a page size limit of something like 100 blocks. However, according to the documentation the page size argument is not blocks but the number of transactions, so we can put a limit on that instead. One problem with my current implementation is that it processes blocks one by one until all the traces are fetched, because we don't know in advance in which block we are going to reach the page size. I will also look into your suggestion to use …
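For reference, the one-block-at-a-time approach described above could be sketched roughly like this (a self-contained illustration; trace_block_matches is a hypothetical stub standing in for tracing a single block, not the actual PR code):

// Stub standing in for tracing one block and returning the transactions
// that touch the searched address.
async fn trace_block_matches(block: u64) -> Vec<u64> {
    let _ = block;
    Vec::new()
}

async fn search_after_sequential(from_block: u64, tip: u64, page_size: usize) -> Vec<u64> {
    let mut found = Vec::new();
    let mut block = from_block;
    // We cannot know in advance which block completes the page, so trace one
    // block at a time until the page is full or the tip of the chain is reached.
    while found.len() < page_size && block <= tip {
        found.extend(trace_block_matches(block).await);
        block += 1;
    }
    found
}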
cool, this is a great start
I left a few questions because the page settings are a bit confusing to me
page size argument is not blocks but the number of transactions
I see, then we should perhaps try to perform tracing of multiple blocks in parallel by spawning jobs?
if the user e.g. requests the search beginning from the genesis block and the searched addresses appear far later, the search can take hours.
yeah we can definitely look into processing block tracing in parallel after we have the initial draft
let mut txs_with_receipts = TransactionsWithReceipts {
    txs: Vec::default(),
    receipts: Vec::default(),
    first_page: false,
unclear what first_page means
First page means it's the page with the most recent transactions, meaning we have traced up to the tip of the chain.
    from_block: None,
    to_block: None,
I think this is where we'd need to configure the block_number and perhaps the page_size? Because in the worst case this would trace the entire chain.
So do you think we should put a limit on the number of blocks traced, or on the number of transactions? If we limit the number of transactions, there's still a chance of tracing the entire chain, e.g. if the account has 5 transactions in total and the user requests 10.
unfortunately, we don't have another way to determine this, so this will always be possible. We should do some chunking instead so we limit how many blocks we trace at once, so something like a https://docs.rs/futures/latest/futures/stream/struct.FuturesUnordered.html with limited capacity where we push new block tasks.
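A minimal, self-contained sketch of that bounded-concurrency idea (trace_block here is just a stub standing in for a real block-tracing job, and the cap of 16 is arbitrary):

use futures::stream::{FuturesUnordered, StreamExt};

const MAX_IN_FLIGHT: usize = 16; // arbitrary cap on concurrently traced blocks

// Stub standing in for tracing one block; returns the block number and its matching txs.
async fn trace_block(number: u64) -> (u64, Vec<u64>) {
    (number, Vec::new())
}

async fn search_chunked(mut next_block: u64, tip: u64, page_size: usize) -> Vec<(u64, Vec<u64>)> {
    let mut in_flight = FuturesUnordered::new();
    let mut results = Vec::new();
    let mut tx_count = 0;
    while tx_count < page_size && (next_block <= tip || !in_flight.is_empty()) {
        // Top up the set so that at most MAX_IN_FLIGHT block traces run at once.
        while in_flight.len() < MAX_IN_FLIGHT && next_block <= tip {
            in_flight.push(trace_block(next_block));
            next_block += 1;
        }
        // Await whichever trace finishes first; completions arrive out of block order,
        // so a real implementation would sort by block number before building the page.
        if let Some((block, txs)) = in_flight.next().await {
            tx_count += txs.len();
            results.push((block, txs));
        }
    }
    results
}

The point of FuturesUnordered here is that the pipeline stays full even when individual blocks take very different amounts of time to trace, which is the main difference from processing fixed batches.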
What you mean is processing blocks in batches of something like 100 or 1000, right? E.g. processing 1000 blocks, waiting until they're all complete, and continuing with another 1000 blocks if we didn't reach the page size yet. In that case using try_join_all looks like a better idea, since we have to wait for all 1000 blocks to complete anyway. It's also how it's done in trace_filter.
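For comparison, that batch-and-wait approach with try_join_all could be sketched like this (again self-contained and illustrative, with a stubbed trace_block; it is not the actual trace_filter code):

use futures::future::try_join_all;

const BATCH_SIZE: u64 = 1000; // illustrative batch size

// Stub standing in for tracing one block; a real version would return
// the matching transactions or a tracing error.
async fn trace_block(number: u64) -> Result<Vec<u64>, String> {
    let _ = number;
    Ok(Vec::new())
}

async fn search_batched(mut from: u64, tip: u64, page_size: usize) -> Result<Vec<u64>, String> {
    let mut found = Vec::new();
    // Trace BATCH_SIZE blocks concurrently, wait for the whole batch to finish,
    // then start the next batch only if the page is still not full.
    while found.len() < page_size && from <= tip {
        let end = (from + BATCH_SIZE - 1).min(tip);
        let batch = try_join_all((from..=end).map(trace_block)).await?;
        found.extend(batch.into_iter().flatten());
        from = end + 1;
    }
    Ok(found)
}

The trade-off is that the whole batch has to finish before the next one starts, so a single slow block stalls the remaining 999.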
On second thought, FuturesUnordered will be useful.
Okay, I changed the implementation to use FuturesUnordered and it works much faster now. I used 1000-block batches for now, but we can change that. I think we can also put a limit on page_size to prevent requesting an unreasonable number of transactions. What do you think would be a good number for that?
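Whatever value is chosen, the cap itself could just be a clamp applied before the search starts; the 100 below is only a placeholder, not a proposal:

const MAX_PAGE_SIZE: usize = 100; // placeholder, actual limit to be decided

// Clamp the requested page size into a sane range before starting the search.
fn clamp_page_size(requested: usize) -> usize {
    requested.clamp(1, MAX_PAGE_SIZE)
}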
Will close #13499 later.
For now, this PR has an implementation for search_transactions_after. I will add the implementation for search_transactions_before after getting feedback.