Filecoin HAMT v3 Improvements #38
I'm opening this issue as the official discussion thread for the HAMT improvement FIP I'm currently drafting. There are several popular outstanding breaking changes, many of which are already implemented in the go filecoin HAMT, that would improve the protocol in terms of performance, simplicity and safety. Since each change is small on its own I am bundling them all into one FIP to reduce overhead. However, each change can be considered separately, and if there is a strong reason to exclude one of the four I plan to do so while the FIP is in draft stage.

- The HAMT node bitfield is not simple and makes canonical block form validation difficult. Issue @rvagg. Golang fix @rvagg. Breaks serialization of HAMT bitfields/nodes and will require a migration of all state tree HAMTs.
- HAMT Set does not provide any indication of what value, or whether any value, existed for the key in question. This functionality is motivated concretely by safety checks in the miner actor. Issue @anorth. No implementation is up yet, but one appealing proposal is to add a SetIfAbsent method to the interface, which only writes the key if it is not already present, returning a boolean indicating set/no set (see the sketch after this list).

Meta note: bundling changes into one FIP like this is an experiment that I don't believe has been tried before. Feel free to critique the bundling as well as the HAMT issues in this thread if you see problems.
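For concreteness, here is a minimal sketch of what SetIfAbsent semantics could look like, written against a hypothetical key-value interface. The names and signatures are illustrative assumptions, not the actual go-hamt-ipld API:

```go
package hamtsketch

import "context"

// kv is a stand-in for the HAMT's existing lookup/write interface.
type kv interface {
	// Find reports whether key is present, decoding its value into out
	// (out may be nil when only existence matters).
	Find(ctx context.Context, key string, out interface{}) (bool, error)
	Set(ctx context.Context, key string, value interface{}) error
}

// SetIfAbsent writes value only when key is not already present and
// reports whether a write happened, giving callers (e.g. miner actor
// safety checks) a direct "did this key already exist?" signal.
func SetIfAbsent(ctx context.Context, m kv, key string, value interface{}) (bool, error) {
	found, err := m.Find(ctx, key, nil)
	if err != nil {
		return false, err
	}
	if found {
		return false, nil
	}
	if err := m.Set(ctx, key, value); err != nil {
		return false, err
	}
	return true, nil
}
```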
Comments

Great, thank you. I support all of these changes.
Specifically, (4) doesn't require a FIP, but using it may (depending on how HAMT caching currently behaves, there may be no behavior changes from the VM's perspective).
I've taken (4) out of the FIP. However, I am planning to add a refinement to the cache flushing behavior specified in (1) that naturally followed from refactoring the internal Go implementation to achieve (4). See comment here
After more consideration I'm also removing (3) from the FIP.
I'm moving forward with the suggestion from @austinabell to add (3): fix AMT caching in the same way (1) fixes HAMT caching.
```go
type node struct {
	// ...
}

type Root struct {
	// ...
}
```
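For illustration, a minimal sketch of the flush-on-demand caching pattern these structs are being refactored toward: a link that keeps a dirty child in memory and only serializes it (computing its CID) on an explicit flush. The node shape and names are assumptions for this sketch, not the actual go-amt-ipld code; the store is go-ipld-cbor's IpldStore, and a real node would also implement CBOR marshaling:

```go
package amtsketch

import (
	"context"

	"github.com/ipfs/go-cid"
	cbor "github.com/ipfs/go-ipld-cbor"
)

// node is a stand-in for the truncated struct above: interior links
// plus leaf values.
type node struct {
	links  []*link
	values [][]byte
}

// link points at a child node. While the child has unflushed changes we
// keep it in memory and treat the stored CID as stale; nothing is
// written to the store until an explicit flush.
type link struct {
	c      cid.Cid // valid only when dirty is false
	cached *node   // in-memory child, retained while dirty
	dirty  bool
}

// flush materializes a dirty child bottom-up: descendants are flushed
// first so their fresh CIDs get serialized into this child, then the
// child itself is written and its CID recorded.
func (l *link) flush(ctx context.Context, store cbor.IpldStore) error {
	if !l.dirty {
		return nil
	}
	for _, child := range l.cached.links {
		if err := child.flush(ctx, store); err != nil {
			return err
		}
	}
	c, err := store.Put(ctx, l.cached)
	if err != nil {
		return err
	}
	l.c = c
	l.dirty = false
	return nil
}
```

With this shape a Set only mutates in-memory nodes and marks the path dirty, and a single top-level flush writes each modified node exactly once instead of re-serializing on every mutation.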
Is there an estimate of how much bandwidth efficiency can be improved, or how much gas can be saved in general?
I am not aware of estimates of the bandwidth saved by the caching fixes or serialization changes. Since they are unambiguous improvements with no risk of regressing performance, no one has yet taken the time to figure this out, but the serialization changes in particular should be straightforward to estimate.

A related effort that I have more information about is the H/AMT branching factor tuning that is also landing in the upcoming network upgrade. We measured the runtime of the relevant H/AMT operations by constructing data structures with representative entry counts, data sizes and sparsity, then measuring the gas of ipld state gets/puts and benchmarking CPU time. By sweeping these measurements over different branching factors we found tunings that reduce expected runtime.

I only have estimates of the runtime reductions in the H/AMT operations themselves, not of the impact these reductions have on the runtime of the calling on-chain message as a whole. To find the total message time reduction you would need to measure the fraction of each message's time spent in the relevant H/AMT operations, then scale that fraction down by the runtime decreases in the tables below.

Below are the H/AMT operation runtime reduction estimates.

HAMT Improvements: [runtime reduction table not preserved in this copy]

AMT Improvements: [runtime reduction table not preserved in this copy]
This information is quite raw and the notation is probably confusing, so please follow up with clarifying questions if you are interested in looking into this further.
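As a rough illustration of the tradeoff such a sweep explores (a toy cost model with assumed numbers, not the measurement harness described above): wider nodes make the tree shallower, so a lookup touches fewer blocks, but each block gets bigger.

```go
package main

import (
	"fmt"
	"math"
)

// lookupCost models the bytes fetched for one lookup in a tree holding
// `entries` items with fan-out 2^bitWidth: one node is read per level,
// and each node holds up to `fanout` pointers/entries.
func lookupCost(entries float64, bitWidth int, entrySize float64) (depth, bytes float64) {
	fanout := math.Pow(2, float64(bitWidth))
	depth = math.Ceil(math.Log(entries) / math.Log(fanout))
	bytes = depth * fanout * entrySize
	return depth, bytes
}

func main() {
	const entries = 1e6    // representative entry count (assumed)
	const entrySize = 64.0 // assumed serialized bytes per node entry

	// Sweep candidate bit widths and print the depth/size tradeoff.
	for _, bw := range []int{3, 4, 5, 6, 8} {
		d, b := lookupCost(entries, bw, entrySize)
		fmt.Printf("bitWidth=%d depth=%.0f approx bytes/lookup=%.0f\n", bw, d, b)
	}
}
```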
@steven004 I've just remembered that I have some graphs which are much easier to interpret and help answer your original question. I generated data using a modified version of this test, which runs an actor state simulation with 10 miners and 9 deal clients; I modified the test to run for 50k epochs. This simulation can't capture all the complexities of mainnet state, but we think the trends on display here are representative of what we will see in the mainnet v3 upgrade. Also note that the following data only covers gas improvements from reductions in state reading, whereas the full picture also includes CPU execution time improvements.

It's hard to see, but Current Mainnet and v3 H/AMT are superimposed on each other, which demonstrates that the HAMT serialization savings are not significantly impacting state size. The tuning changes, however, reduce total state size by ~15% by epoch 50k. Similarly, the number of Puts/1000 epochs is not significantly impacted by the serialization and caching changes, but it is reduced by ~9% with tuning by epoch 50k. The biggest source of Put reductions appears to come from a reduction in Sectors AMT Puts due to tuning in cron-initiated processing.

Note we also see a reduction in Gets, but I haven't remeasured since off-chain Window PoSt verification landed in v3, which removed much of the impact, since SubmitWindowedPoSt was by far the method with the most tuning-based reduction in Gets/1000 epochs.
@ZenGround0, thanks for sharing all these insights; they help a lot in understanding the improvements. Together with the wdPoSt off-chain verification improvement, perhaps we can expect about a 20% reduction in total gas used at the current network throughput, or ~20% higher throughput capacity when the v10 network upgrade takes effect.
Maybe 20-30%? If all miners retain 10-20% of the sealing, the gas will drop to a lower level. What do you think? @steven004
I wouldn't try to speculate too much; the sealing rate could very well increase to take up the additional chain bandwidth. The long-term solution is something like #72.
@ZenGround0 I think we can close this as shipped. |