Precompile Caching MVP #8095
base: main
Conversation
Signed-off-by: garyschulte <[email protected]>
Force-pushed from 49dd4dc to e9155f3
Force-pushed from 0d95b1d to 67f912f
I think it makes sense to have a cache per precompile, as we discussed. Also, you need to change the key to use a hashing function that has no collisions; the hashCode method returns an int, which can have collisions.
PrecompileInputResultTuple res;

if (enableResultCaching) {
  res = bnAddCache.getIfPresent(input.hashCode());
As discussed before, hashCode is not a good key here because you may have collisions. We need to use either the whole input, or a hashing function with a negligible collision probability.
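One way to read this suggestion is to derive a wide digest of the raw input bytes and key the cache on that, rather than on the 32-bit hashCode. The sketch below is illustrative only (the method name `digestKey` is not from the PR); it just shows that a 256-bit digest key is deterministic per input and distinguishes distinct inputs, at the cost of hashing the full input on every lookup.

```java
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.Arrays;

// Hypothetical sketch: key the precompile cache on a SHA-256 digest of the
// input bytes instead of input.hashCode(). Collisions are cryptographically
// negligible, unlike collisions in the 32-bit int space.
public final class DigestKeyExample {
  static byte[] digestKey(byte[] input) throws NoSuchAlgorithmException {
    return MessageDigest.getInstance("SHA-256").digest(input);
  }

  public static void main(String[] args) throws NoSuchAlgorithmException {
    byte[] a = digestKey(new byte[] {1, 2, 3});
    byte[] b = digestKey(new byte[] {1, 2, 3});
    byte[] c = digestKey(new byte[] {1, 2, 4});
    System.out.println(Arrays.equals(a, b)); // same input -> same key
    System.out.println(Arrays.equals(a, c)); // different input -> different key
  }
}
```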
I was expecting and embracing collisions. I think a cache limited to <=1000 items is unlikely to have collisions in the int space unless there is an intentional attempt to create them. In that case, the equals() check will prevent using the cached entry, and it will be overwritten with the newly computed value.
If we use the input Bytes as the key, it seems the cache can be attacked by filling it with hashCode collisions, causing worst-case performance where a byte-by-byte equals() is needed to distinguish between the hashCode collisions for every item in the cache.
For that reason I think leaning into one cache item per hash code is the safer strategy. It will work well with few naturally occurring collisions, and won't allow the cache to be attacked to cause slow blocks.
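The one-entry-per-hash-code argument above can be sketched with a plain HashMap standing in for the Caffeine cache (class and field names here are illustrative, not the PR's actual code): a hash collision simply fails the equals() check and reads as a miss, so a colliding attacker only forces recomputation, never a wrong result.

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.Map;

// Simplified stand-in for the PR's per-precompile Caffeine cache:
// one entry per int hash code, verified with equals() on the stored input.
public final class CollisionTolerantCache {
  record Entry(byte[] input, byte[] result) {}

  private final Map<Integer, Entry> cache = new HashMap<>();

  byte[] get(byte[] input) {
    Entry e = cache.get(Arrays.hashCode(input));
    // On a hash collision the input comparison fails and we treat it as a
    // miss; the caller recomputes and put() overwrites the colliding entry.
    return (e != null && Arrays.equals(e.input(), input)) ? e.result() : null;
  }

  void put(byte[] input, byte[] result) {
    cache.put(Arrays.hashCode(input), new Entry(input, result));
  }

  public static void main(String[] args) {
    CollisionTolerantCache c = new CollisionTolerantCache();
    c.put(new byte[] {1, 2}, new byte[] {9});
    System.out.println(c.get(new byte[] {1, 2}) != null); // cached -> hit
    System.out.println(c.get(new byte[] {3, 4}) != null); // absent -> miss
  }
}
```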
@@ -75,6 +76,13 @@ enum Benchmark {
      negatable = true)
  Boolean nativeCode;

  @Option(
      names = {"--use-precompile-cache"},
Nice that you included enabling the feature with a flag
@@ -40,6 +42,8 @@ public class AltBN128MulPrecompiledContract extends AbstractAltBnPrecompiledCont

  private static final Bytes POINT_AT_INFINITY = Bytes.repeat((byte) 0, 64);
  private final long gasCost;
  private static final Cache<Integer, PrecompileInputResultTuple> bnMulCache =
      Caffeine.newBuilder().maximumSize(1000).build();
Not really urgent or mandatory, but it would be nice to add metrics for it to expose the cache hit ratio. Check line 43 in 8cddcfd:

private final Cache<Bytes, Bytes> accountNodes =
Good idea. I will look at adding an all-precompile 'cache hit' counter.
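A minimal sketch of such a shared counter, using only the standard library (the class and method names are hypothetical, not the PR's code): each precompile cache bumps a hit or miss counter, and the ratio can be exported as a metric. With Caffeine specifically, `recordStats()` plus `cache.stats().hitRate()` would give the same information per cache.

```java
import java.util.concurrent.atomic.LongAdder;

// Hypothetical all-precompile hit/miss counter; thread-safe via LongAdder
// since precompiles may execute on parallel transaction-execution threads.
public final class PrecompileCacheMetrics {
  static final LongAdder hits = new LongAdder();
  static final LongAdder misses = new LongAdder();

  static double hitRatio() {
    long h = hits.sum();
    long m = misses.sum();
    return (h + m) == 0 ? 0.0 : (double) h / (h + m);
  }

  public static void main(String[] args) {
    // Simulate 3 cache hits and 1 miss across precompiles.
    hits.add(3);
    misses.add(1);
    System.out.println(hitRatio()); // 3 / 4
  }
}
```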
PR description
This PR adds precompile result caching for an MVP set of precompiles that are costly enough to benefit from caching. A command-line option is provided to disable caching (for gas-costing reasons); it is enabled by default in besu, and disabled by default in evmtool and the benchmark subcommand.
Changes:
MVP precompiles include:
Feedback welcome on the design choices:
Parallel transaction execution should benefit from precompile caching when state conflicts are detected. Attached are preliminary results from the Nethermind gas-benchmarks suite, which indicate that performance does not take a hit from cache checks and misses, and that caching is effective for repetitive/identical inputs:
blake2.pdf
ecmul.pdf
ecrec.pdf
Fixed Issue(s)
Thanks for sending a pull request! Have you done the following?
Add the doc-change-required label to this PR if documentation updates are required.

Locally, you can run these tests to catch failures early:
./gradlew build
./gradlew acceptanceTest
./gradlew integrationTest
./gradlew ethereum:referenceTests:referenceTests