
Precompile Caching MVP #8095

Open

garyschulte wants to merge 8 commits into main from feature/precompile-caching-part1

Conversation

@garyschulte (Contributor) commented Jan 9, 2025

PR description

This PR adds precompile caching for an MVP set of precompiles that are costly enough to benefit from caching. Caching can be disabled via a command line arg (for gas costing reasons); it is enabled by default in Besu, and disabled by default in evmtool and the benchmark subcommand.

Changes:

  • add a static member and setter to AbstractPrecompiledContract to control whether results are cached
  • add precompile-specific LRU caches with reasonable size limits to each MVP precompile
  • add a CLI arg for precompile caching, defaulting to true

MVP precompiles include:

  • altbn128/bn254 precompiles for add, mul and pairing
  • ecrecover precompile
  • blake2 precompile

Feedback welcome on the design choices:

  • one cache per precompile contract (since each will have different input and output size characteristics)
  • the cache maps <hashCode, input_and_result_tuple> so a hit can verify the input is truly identical rather than merely matching by hashCode (it is trivial to construct requests with different inputs but the same Bytes hashCode); see the sketch below
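
For concreteness, a minimal sketch of that layout in Java (the tuple's exact shape is an assumption based on the description above; the PR's actual fields may differ):

import com.github.benmanes.caffeine.cache.Cache;
import com.github.benmanes.caffeine.cache.Caffeine;
import org.apache.tuweni.bytes.Bytes;

// One bounded cache per precompile, keyed by the input's int hashCode.
// The tuple keeps the original input so a hit can be verified with a
// full equals() check before the cached result is trusted.
record PrecompileInputResultTuple(Bytes cachedInput, Bytes cachedResult) {}

private static final Cache<Integer, PrecompileInputResultTuple> bnAddCache =
    Caffeine.newBuilder().maximumSize(1_000).build();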

Parallel transaction execution should also benefit from precompile caching when state conflicts are detected. Attached are preliminary results from the Nethermind gas-benchmarks suite, which indicate that performance does not seem to take a hit from cache checking and misses, and that the caching itself is effective for repetitive/identical inputs:

blake2.pdf
ecmul.pdf
ecrec.pdf

Fixed Issue(s)

Thanks for sending a pull request! Have you done the following?

  • Checked out our contribution guidelines?
  • Considered documentation and added the doc-change-required label to this PR if updates are required.
  • Considered the changelog and included an update if required.
  • For database changes (e.g. KeyValueSegmentIdentifier) considered compatibility and performed forwards and backwards compatibility tests

Locally, you can run these tests to catch failures early:

  • unit tests: ./gradlew build
  • acceptance tests: ./gradlew acceptanceTest
  • integration tests: ./gradlew integrationTest
  • reference tests: ./gradlew ethereum:referenceTests:referenceTests

@garyschulte garyschulte force-pushed the feature/precompile-caching-part1 branch from 49dd4dc to e9155f3 on January 10, 2025 at 00:27
@garyschulte garyschulte changed the title Precompile caching part1 Precompile caching MVP Jan 10, 2025
@garyschulte garyschulte changed the title Precompile caching MVP Precompile Caching MVP Jan 10, 2025
Signed-off-by: garyschulte <[email protected]>
@garyschulte garyschulte force-pushed the feature/precompile-caching-part1 branch from 0d95b1d to 67f912f on January 10, 2025 at 00:34
@ahamlat (Contributor) left a comment:

I think it makes sense to have a cache per precompile, as we discussed. But you need to change the key to use a hashing function that has no collisions; the hashCode method returns an int and can have collisions.

PrecompileInputResultTuple res;

if (enableResultCaching) {
  res = bnAddCache.getIfPresent(input.hashCode());
  // ... (the hit is then verified against the actual input)
}
@ahamlat (Contributor) commented:

As discussed before, hashCode is not a good key here, as you may have collisions. We need to use either the whole input or a hashing function that is collision-resistant.
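
For illustration, the collision-resistant alternative would key on a digest of the input rather than its int hashCode (a sketch assuming Besu's Hash.keccak256 helper; any collision-resistant hash would serve):

import com.github.benmanes.caffeine.cache.Cache;
import com.github.benmanes.caffeine.cache.Caffeine;
import org.apache.tuweni.bytes.Bytes;
import org.apache.tuweni.bytes.Bytes32;
import org.hyperledger.besu.crypto.Hash;

// With a 256-bit key, collisions are computationally infeasible, so a
// hit needs no equals() verification -- at the cost of hashing every
// input before each lookup.
private static final Cache<Bytes32, Bytes> digestKeyedCache =
    Caffeine.newBuilder().maximumSize(1_000).build();

Bytes cachedResultFor(final Bytes input) {
  return digestKeyedCache.getIfPresent(Hash.keccak256(input));
}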

@garyschulte (Contributor, Author) commented Jan 10, 2025:

I was expecting and embracing collisions. A cache limited to <=1000 items is unlikely to see collisions in the int space unless someone intentionally constructs them. In that case, the equals() check prevents the cached entry from being used, and it is overwritten with the newly computed value.

If we keyed on the input Bytes instead, the cache could be attacked by filling it with hashCode collisions, forcing worst-case performance where the byte-by-byte equals() method must be run against every colliding item in the cache to tell them apart.

For that reason I think leaning into one cache item per hash code is the safer strategy. It works well with the few naturally occurring collisions, and it doesn't open a way to attack the cache and cause slow blocks.
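
In code, the strategy described above reduces to roughly the following (a sketch, not the PR's exact lines; computeBnAdd is a hypothetical stand-in for the precompile's real computation):

// Look up by int hashCode, then verify that the inputs really match.
PrecompileInputResultTuple res = bnAddCache.getIfPresent(input.hashCode());
if (res != null && res.cachedInput().equals(input)) {
  return res.cachedResult(); // genuine hit: inputs are byte-identical
}
// Miss, or a hashCode collision with a different input: recompute and
// overwrite the single slot for this hash code.
Bytes result = computeBnAdd(input); // hypothetical compute method
bnAddCache.put(input.hashCode(), new PrecompileInputResultTuple(input, result));
return result;

Because each int key maps to at most one tuple, a deliberate collision only evicts that one slot, and every lookup pays at most a single equals() comparison.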

@@ -75,6 +76,13 @@ enum Benchmark {
negatable = true)
Boolean nativeCode;

@Option(
names = {"--use-precompile-cache"},
@ahamlat (Contributor) commented Jan 10, 2025:

Nice that you included a flag to enable the feature.
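
For reference, a plausible completion of the truncated @Option hunk above (a sketch; the description text, the default, and the setter name on AbstractPrecompiledContract are assumptions based on the PR description):

import picocli.CommandLine.Option;

@Option(
    names = {"--use-precompile-cache"},
    description = "Cache precompile results (default: ${DEFAULT-VALUE})",
    negatable = true)
Boolean usePrecompileCache = false; // evmtool/benchmark default to disabled

// Applied before execution via the static setter the PR describes:
// AbstractPrecompiledContract.setPrecompileCaching(usePrecompileCache); // assumed name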

@@ -40,6 +42,8 @@ public class AltBN128MulPrecompiledContract extends AbstractAltBnPrecompiledCont

private static final Bytes POINT_AT_INFINITY = Bytes.repeat((byte) 0, 64);
private final long gasCost;
private static final Cache<Integer, PrecompileInputResultTuple> bnMulCache =
Caffeine.newBuilder().maximumSize(1000).build();
@ahamlat (Contributor) commented:

Not really urgent or mandatory, but it would be nice to add metrics for the cache hit ratio.

@garyschulte (Contributor, Author) commented Jan 10, 2025:

Good idea. I will look at adding an all-precompile 'cache hit' counter.
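
As a starting point, Caffeine can report hits directly once stats recording is enabled (a sketch of Caffeine's standard stats API, not of Besu's metrics integration):

private static final Cache<Integer, PrecompileInputResultTuple> bnMulCache =
    Caffeine.newBuilder().maximumSize(1000).recordStats().build();

// e.g. exposed periodically as a metrics gauge:
double hitRate = bnMulCache.stats().hitRate(); // hits / (hits + misses)
long hitCount = bnMulCache.stats().hitCount();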
