This PR adds limits for shrinking (to avoid exhaustion on very heavy-processing call sequences). It does this by introducing a `shrinkLimit` value to our config. The default is 5000, which is probably a bit low given how inaccurate our value shrinking is, but it is safer for now: some projects have complex call sequences where a high limit could translate into many minutes of shrinking.
This only changes the higher-level shrinking loop. I added lots of comments to `fuzzer_worker.go` to explain it. Shrinking happens in two steps: up to `2*len(callSequence)` of your `shrinkLimit` is spent first greedily removing calls, and the remaining budget goes to value minimization. It is 2x because we first use a strategy of removing a call without doing anything else (which lowers the block/time number in the following blocks, because we maintain block/time delays per call), and then we try removing calls while adding the removed call's block/time delay to the previous call (which keeps block/time numbers stable for the following blocks).
TODOs: `shrinkLimit` should always be several multiples of your call sequence limit, so that you get call removal plus some value minimization. Technically, it should be `>=3*maxCallSequenceLen` for all calls to get some removal/value minimization in the worst case, but that could change one day, so I would avoid documenting it with a hard number.
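As a hypothetical illustration of that arithmetic (the names are made up, and nothing like this is enforced today), a config sanity check could look like:

```go
package shrink

import "fmt"

// ValidateShrinkLimit illustrates the TODO's arithmetic: two removal attempts
// per call plus at least one value-minimization attempt per call in the worst
// case requires shrinkLimit >= 3*maxCallSequenceLen. Hypothetical check only.
func ValidateShrinkLimit(shrinkLimit, maxCallSequenceLen int) error {
	if shrinkLimit < 3*maxCallSequenceLen {
		return fmt.Errorf("shrinkLimit (%d) is below 3*maxCallSequenceLen (%d); "+
			"call removal alone could exhaust the shrinking budget",
			shrinkLimit, 3*maxCallSequenceLen)
	}
	return nil
}
```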