Use batched RPC to fetch mempool entries & transactions #979
Conversation
7 seconds? Fantastic. I'll try this out with my own configuration in the next few days and let you know how it goes. The code looks good to me.
On my setup where
Subsequent updates typically take 1 to 10 seconds:
This is a HUGE improvement over
And subsequent updates take 10 seconds or longer:
entries.len(),
txids_chunk.len()
);
let txs = daemon.get_mempool_transactions(txids_chunk)?;
It might be wise to chunk the raw transaction fetching depending on the size of each TX.
These days, with inscriptions running around, some TXs might be much larger than others, so one batch could end up fetching several dozen megabytes worth of TX data, while another might only fetch a few hundred kilobytes.
At this point in the code, the mempool entries already tell us how big each transaction is, so we can use that to pace the TX fetching more evenly and avoid overly large responses.
The logic might go something like:
all_raw_txs = []
while mempool_entries:
    fetch_bucket = []
    expected_response_size = 0
    while mempool_entries:
        entry = mempool_entries[0]
        # Stop once the next entry would push this batch over the limit,
        # but always take at least one so an oversized tx still gets fetched.
        if fetch_bucket and expected_response_size + entry.size > MAX_RESP_SIZE:
            break
        expected_response_size += entry.size
        fetch_bucket.append(entry.txid)
        mempool_entries.pop(0)
    raw_txs = daemon.fetch_txids_batch(fetch_bucket)
    all_raw_txs.extend(raw_txs)
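In the actual Rust code this could look roughly like the sketch below. To be clear, `Entry` is just a stand-in for whatever struct holds the txid and the size reported by getmempoolentry, and `size_buckets` is a name I made up, not anything in this PR:

/// Sketch only: `Entry` stands in for the real mempool-entry type
/// (txid plus the serialized size reported by getmempoolentry).
struct Entry {
    txid: String,
    size: usize,
}

/// Group txids into buckets whose expected response size stays under
/// `max_resp_size`; an oversized entry still gets a bucket of its own.
fn size_buckets(entries: Vec<Entry>, max_resp_size: usize) -> Vec<Vec<String>> {
    let mut buckets: Vec<Vec<String>> = Vec::new();
    let mut bucket: Vec<String> = Vec::new();
    let mut bucket_size = 0;
    for entry in entries {
        // Close the current bucket before it would overflow, unless it is
        // empty (a single oversized tx is then fetched on its own).
        if !bucket.is_empty() && bucket_size + entry.size > max_resp_size {
            buckets.push(std::mem::take(&mut bucket));
            bucket_size = 0;
        }
        bucket_size += entry.size;
        bucket.push(entry.txid);
    }
    if !bucket.is_empty() {
        buckets.push(bucket);
    }
    buckets
}

Splitting up front like this keeps each get_mempool_transactions() response bounded no matter how the transaction sizes are distributed.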
@romanz I'd be happy to give this a go if you'd like. But this PR is already a huge improvement over the status quo, so I'd say it's safe to save this idea for a future PR.
Thanks, good idea!
Let's implement it as a future PR.
This PR improves initial mempool sync significantly.
Before the change (efa045c):
After the change (ab12ce7):
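For context on what batching buys here: Bitcoin Core's JSON-RPC server accepts a batch, i.e. a JSON array of request objects, and answers all of them in a single HTTP round-trip, so N getmempoolentry calls cost one round-trip instead of N. A rough serde_json sketch of building such a payload (the function and its shape are illustrative, not the code in this PR):

use serde_json::{json, Value};

/// Illustrative only: build one JSON-RPC batch payload asking bitcoind
/// for every mempool entry in a single round-trip.
fn batch_getmempoolentry(txids: &[String]) -> Value {
    let calls: Vec<Value> = txids
        .iter()
        .enumerate()
        .map(|(id, txid)| {
            json!({
                "jsonrpc": "2.0",
                "id": id,
                "method": "getmempoolentry",
                "params": [txid],
            })
        })
        .collect();
    Value::Array(calls)
}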