[CORE-824] x/ratelimit: Implement UpdateCapacityEndBlocker #941
Conversation
Walkthrough: The codebase has undergone a series of updates to integrate block time tracking into the rate limiting mechanism.
```go
@@ -588,10 +588,22 @@ func New(
	transferModule := transfer.NewAppModule(app.TransferKeeper)
	transferIBCModule := transfer.NewIBCModule(app.TransferKeeper)

	app.BlockTimeKeeper = *blocktimemodulekeeper.NewKeeper(
```
This code block is moved from below to before the `app.RateLimitKeeper` initialization.
```go
@@ -122,17 +128,13 @@ func (k Keeper) ProcessDeposit(
// baseline = max(baseline_minimum, baseline_tvl_ppm * current_tvl)
func (k Keeper) GetBaseline(
	ctx sdk.Context,
	denom string,
	currentSupply *big.Int,
```
Changed the function signature, because wherever `GetBaseline` is called for a limiter, the caller could have already called `bankKeeper.GetSupply()` once for all limiters of a denom.
Two questions:
- For my understanding, is the baseline the same across all denoms? What's the reasoning for this?
- nit: should we either call this `currentTVL` to match the formula in the comment, or update the comment formula?
> For my understanding, is the baseline the same across all denoms? What's the reasoning for this?

The baseline is not the same across denoms. It is a function of `LimitParams` and TVL (`currentSupply`). See definition here.

> nit: should we either call this `currentTVL` to match the formula in the comment or update the comment formula?

Updated terminology to `TVL` for consistency.
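For readers following the thread, here is a minimal sketch of how the comment's formula maps to code, assuming ppm scaling and the utility-style signature the PR later settles on (`GetBaseline(bigTvl, limiter)`); exact types are assumptions:

```go
// Sketch only: baseline = max(baseline_minimum, baseline_tvl_ppm * tvl / 1_000_000),
// where BaselineTvlPpm is parts-per-million (10_000 ppm = 1%).
func GetBaseline(bigTvl *big.Int, limiter types.Limiter) *big.Int {
	return lib.BigMax(
		limiter.BaselineMinimum.BigInt(),
		new(big.Int).Div(
			new(big.Int).Mul(bigTvl, new(big.Int).SetUint64(uint64(limiter.BaselineTvlPpm))),
			big.NewInt(1_000_000), // ppm denominator
		),
	)
}
```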
Review Status
Actionable comments generated: 0
Configuration used: CodeRabbit UI
Files selected for processing (6)
- protocol/app/app.go (2 hunks)
- protocol/lib/metrics/constants.go (1 hunks)
- protocol/x/ratelimit/keeper/keeper.go (5 hunks)
- protocol/x/ratelimit/keeper/keeper_test.go (3 hunks)
- protocol/x/ratelimit/module.go (2 hunks)
- protocol/x/ratelimit/types/expected_keepers.go (1 hunks)
Additional comments: 16
protocol/x/ratelimit/types/expected_keepers.go (1)
- 15-17: The new `BlockTimeKeeper` interface with the `GetPreviousBlockInfo` method is correctly defined and follows Go's interface declaration best practices.
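For reference, a sketch of the interface in question; the `BlockInfo` return type is inferred from how `prevBlockInfo.Timestamp` is used later in this PR:

```go
// BlockTimeKeeper is the interface the x/ratelimit module expects from
// x/blocktime (return type assumed from usage in this PR).
type BlockTimeKeeper interface {
	GetPreviousBlockInfo(ctx sdk.Context) blocktimetypes.BlockInfo
}
```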
protocol/x/ratelimit/module.go (2)
- 8-9: The addition of the `cosmossdk.io/core/appmodule` package to the import section is correct.
- 155-156: The update to the `EndBlock` function to include a call to `UpdateCapacityEndBlocker` is correct, assuming that the `UpdateCapacityEndBlocker` method is implemented correctly and that error handling is managed upstream.
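A minimal sketch of that wiring in `module.go`, assuming the `appmodule.HasEndBlocker` shape suggested by the new `cosmossdk.io/core/appmodule` import; the exact signature is an assumption:

```go
// EndBlock delegates to the keeper to recover rate-limit capacity each block.
func (am AppModule) EndBlock(ctx context.Context) error {
	am.keeper.UpdateCapacityEndBlocker(sdk.UnwrapSDKContext(ctx))
	return nil
}
```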
protocol/x/ratelimit/keeper/keeper.go (5)
- 16-16: The addition of the `github.com/dydxprotocol/v4-chain/protocol/lib/metrics` import is correct.
- 23-26: The addition of the `blockTimeKeeper` field to the `Keeper` struct is correct and follows Go's struct declaration best practices.
- 39-47: The update to the `NewKeeper` function to accept a `BlockTimeKeeper` and initialize the `Keeper` struct with it is correct.
- 224-239: The `UpdateCapacityEndBlocker` function correctly iterates through all limit params and calls `UpdateCapacityForLimitParams` for each. Ensure that the iteration and the subsequent updates are atomic and handle any potential errors appropriately.
- 241-375: The `UpdateCapacityForLimitParams` function contains complex logic for updating capacity. Ensure that the mathematical operations are correct and that the time calculations are accurate and handle edge cases, such as negative time differences, appropriately.

protocol/lib/metrics/constants.go (3)
- 402-402: The addition of the constant `Capacity` with the value "capacity" seems appropriate for the `x/ratelimit` module. Ensure that this constant is used consistently throughout the module and that its introduction does not conflict with any existing constants or variables.
- 403-403: The constant `RateLimitDenom` with the value "rate_limit_denom" has been added. Verify that this constant is used in the context where denomination is required for rate limiting and that it aligns with the system's denomination conventions.
- 404-404: The addition of `LimiterIndex` with the value "limiter_index" is noted. This constant should be used to reference a specific index in rate limiting structures or algorithms. Confirm that the usage of this constant is consistent and clear in its purpose within the rate limiting logic.

protocol/x/ratelimit/keeper/keeper_test.go (3)
- 23-23: The addition of a new constant `testDenom2` is noted. Ensure that this constant is used consistently throughout the test cases where applicable.
- 278-342: The `TestGetBaseline` function has been updated with new test cases. It's important to ensure that the test cases cover a wide range of scenarios and that the expected outcomes are correct based on the logic of the `GetBaseline` function.
- 275-865: > Note: This review was outside the patches, so it was mapped to the patch with the greatest overlap. Original lines [344-913]

  The `TestUpdateCapacityEndBlocker` function has been significantly updated with new test cases. This function is critical as it tests the logic that updates the capacity at the end of each block. It's important to verify that the test cases cover all possible scenarios, including edge cases, and that the assertions correctly validate the expected behavior of the `UpdateCapacityEndBlocker` function.
  - Ensure that the test cases cover scenarios where the previous capacity is less than, equal to, and greater than the baseline.
  - Verify that the calculations for the expected capacity recovery are correct and that they adhere to the logic defined in the `UpdateCapacityEndBlocker` function.
  - Confirm that the test cases handle different denominations correctly, especially with the addition of `testDenom2`.
  - Check that the error cases, such as when the current block time is equal to the previous block time and when the lengths of limiters and capacity lists do not match, are properly tested and that the expected behavior is asserted.
Verification successful
The provided output from the script execution shows the full content of the `TestUpdateCapacityEndBlocker` function. The test cases within this function cover a variety of scenarios:
- "One denom, prev capacity equals baseline": Tests if the capacity remains the same when it is already at the baseline.
- "One denom, prev capacity < baseline": Tests the recovery of capacity when it is below the baseline.
- "One denom, prev capacity < baseline, 18 decimals": Similar to the previous test but with a different denomination precision.
- "One denom, prev capacity = 0": Tests the recovery from zero capacity.
- "One denom, baseline < prev capacity < 2 * baseline": Tests the recovery when the capacity is between the baseline and double the baseline.
- "One denom, prev capacity > 2 * baseline": Tests the recovery when the capacity is greater than double the baseline.
- "Two denoms, mix of values from above cases": Tests the recovery with two different denominations and various capacities.
- "(Error) one denom, current block time = prev block time, no changes applied": Tests the error case where the block times are the same.
- "(Error) one denom, len(limiters) != len(capacityList)": Tests the error case where the lengths of limiters and capacity lists do not match.
The test cases cover scenarios where the previous capacity is less than, equal to, and greater than the baseline. They also handle different denominations correctly, including the addition of `testDenom2`. The error cases are properly tested, with assertions checking for unchanged capacities when the block times are the same and handling mismatches in the lengths of limiters and capacity lists.

Based on the provided output, the review comment is consistent with the codebase, as the test cases and assertions align with the expectations outlined in the comment. The calculations for the expected capacity recovery are present and appear to be correct, adhering to the logic defined in the `UpdateCapacityEndBlocker` function.

Scripts Executed

The following scripts were executed for the analysis:

Script:

```bash
#!/bin/bash
# Verify that the test cases for UpdateCapacityEndBlocker cover all scenarios.
rg --type go "TestUpdateCapacityEndBlocker" protocol/x/ratelimit/keeper/keeper_test.go
# Verify that the calculations for expected capacity recovery are correct.
rg --type go "expectedDenomCapacityList" protocol/x/ratelimit/keeper/keeper_test.go
# Ensure that the error cases are properly tested.
rg --type go "(Error)" protocol/x/ratelimit/keeper/keeper_test.go
```

Length of output: 1031

Script:

```bash
#!/bin/bash
# Extract the content of the TestUpdateCapacityEndBlocker function to verify the test cases and calculations.
rg --type go -A 30 "func TestUpdateCapacityEndBlocker" protocol/x/ratelimit/keeper/keeper_test.go
```

Length of output: 1131

Script:

```bash
#!/bin/bash
# Extract the full content of the TestUpdateCapacityEndBlocker function to verify the test cases and calculations.
rg --type go -A 1000 "func TestUpdateCapacityEndBlocker" protocol/x/ratelimit/keeper/keeper_test.go
```

Length of output: 17802
protocol/app/app.go (2)
- 591-606: The changes to `app.BlockTimeKeeper` and `app.RatelimitKeeper` initialization seem to align with the PR objectives and the AI-generated summary. The `blockTimeModule` initialization has been removed, and instead, `app.BlockTimeKeeper` is being directly used in the `app.RatelimitKeeper` declaration. This change should be carefully tested to ensure that the `BlockTimeKeeper` is correctly integrated into the `RatelimitKeeper` and that the removal of `blockTimeModule` initialization does not have unintended side effects elsewhere in the application.
- 591-606: It's important to ensure that the new `BlockTimeKeeper` field in the `Keeper` struct is properly initialized everywhere it's used. This change could potentially affect various parts of the application, and all usages need to be consistent with the new initialization logic.
Overall lgtm. Will rely on Brendan to double check the math
```go
func (k Keeper) UpdateCapacityForLimitParams(
	ctx sdk.Context,
	limitParams types.LimitParams,
) {
```
nit: this is a 100+ line logic method. Is there room to break this into smaller helper methods? For example:
- `GetCapacity`
- `GetCapacityDiff`
- `CalcNewCapacity`
Agree this function is too large. Broke it down according to @BrendanChou's suggestion below!
```go
// On a high level, `capacity` trends towards `baseline` by `capacity_diff` but does not "cross" it.
func (k Keeper) UpdateCapacityForLimitParams(
	ctx sdk.Context,
	limitParams types.LimitParams,
```
For safety, should we validate that `LimitParams` and `DenomCapacity` values exist instead of assuming? Does it make sense to validate their values too?
In general, I'm not a fan of passing values from function A to function B and then doing validations in function B. Instead, the better pattern is to clearly document assumptions. Similarly, `LimitParams` is an input to `updateCapacityForLimitParams`, and therefore it is the responsibility of the caller function to pass in a correct value.

We actually do not assume `DenomCapacity` to exist in this function. If it does not, `GetDenomCapacity` will return an empty list, causing the code to throw an error in the if condition `if len(capacityList) != len(limitParams.Limiters)`. Updated the comment to reflect this.
Update: checking this in `CalculateNewCapacityList`.
```go
blockTimeMilli := ctx.BlockTime().UnixMilli()
prevBlockInfo := k.blockTimeKeeper.GetPreviousBlockInfo(ctx)
prevBlockTimeMilli := prevBlockInfo.Timestamp.UnixMilli()
timeSinceLastBlockMilli := new(big.Int).Sub(
	big.NewInt(blockTimeMilli),
	big.NewInt(prevBlockTimeMilli),
)
```
Feels like this should be a lib function
Could just be a keeper function on the `blockTimeKeeper` as well.
Added `blockTimeKeeper.GetTimeSinceLastBlock`.
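A plausible sketch of the new helper; the PR only describes it in this thread (a later review notes it uses the Go `time` package), so the body here is an assumption:

```go
// GetTimeSinceLastBlock returns the duration between the current block time
// and the previous block's timestamp.
func (k Keeper) GetTimeSinceLastBlock(ctx sdk.Context) time.Duration {
	prevBlockInfo := k.GetPreviousBlockInfo(ctx)
	return ctx.BlockTime().Sub(prevBlockInfo.Timestamp)
}
```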
```go
// Calculate right operand: `time_since_last_block / period`
periodMilli := new(big.Int).Mul(
	new(big.Int).SetUint64(uint64(limiter.PeriodSec)),
	big.NewInt(1000),
```
nit: instead of using 1000 explicitly, would it make sense to use the `time` lib?

```go
duration := time.Duration(seconds) * time.Second
milliseconds := duration.Milliseconds()
```
I think either could work. Since we eventually need to cast to `big.Int`, I actually like the current way better, which doesn't leave any possibility of int overflow. Let me know if you disagree!
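For context, the two options discussed, side by side in a sketch (`PeriodSec` is the field as it existed at this point in the PR; it is later replaced by a `time.Duration`):

```go
// Option from the comment above: go through time.Duration.
periodMilli := (time.Duration(limiter.PeriodSec) * time.Second).Milliseconds()

// Current approach: stay in big.Int from the start, so there is no
// intermediate int64 multiplication that could overflow.
periodMilliBig := new(big.Int).Mul(
	new(big.Int).SetUint64(uint64(limiter.PeriodSec)),
	big.NewInt(1000),
)
```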
```go
newCapacityList[i] = dtypes.NewIntFromBigInt(
	new(big.Int).Add(
		capacityList[i].BigInt(),
		lib.BigRatRound(capacityDiff, false), // rounds down `capacity_diff`
```
nit: didn't look at the unit tests closely; can you confirm that we have unit tests that verify the round-down behavior for this method?
Yes we do!
```go
		// Emit telemetry for the new capacity.
		telemetry.SetGaugeWithLabels(
			[]string{types.ModuleName, metrics.Capacity},
			metrics.GetMetricValueFromBigInt(newCapacityList[i].BigInt()),
			[]gometrics.Label{
				metrics.GetLabelForStringValue(metrics.RateLimitDenom, limitParams.Denom),
				metrics.GetLabelForIntValue(metrics.LimiterIndex, i),
			},
		)
	}

	k.SetDenomCapacity(ctx, types.DenomCapacity{
		Denom:        limitParams.Denom,
		CapacityList: newCapacityList,
	})
```
Instead of having a function that:
- Determines the new value
- Emits telemetry
- Writes the new value to state

I suggest the following to be more clear, modular, and unit-testable:
- One function to determine the new value, but not update it in state (call the state update outside of this function)
- Move the telemetry to `SetDenomCapacity` such that it automatically does the telemetry on writes (and you don't have to have telemetry before every time you call `SetDenomCapacity`)
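A sketch of the suggested shape, with the gauge emission folded into `SetDenomCapacity` (metric and label names are taken from the diff above; the store write is elided):

```go
func (k Keeper) SetDenomCapacity(ctx sdk.Context, denomCapacity types.DenomCapacity) {
	// Emit telemetry on every write, so callers don't have to.
	for i, capacity := range denomCapacity.CapacityList {
		telemetry.SetGaugeWithLabels(
			[]string{types.ModuleName, metrics.Capacity},
			metrics.GetMetricValueFromBigInt(capacity.BigInt()),
			[]gometrics.Label{
				metrics.GetLabelForStringValue(metrics.RateLimitDenom, denomCapacity.Denom),
				metrics.GetLabelForIntValue(metrics.LimiterIndex, i),
			},
		)
	}
	// ... write denomCapacity to the store as before ...
}
```

This keeps state writes and their telemetry in one place, which matches how the later review describes the telemetry emission living in `SetDenomCapacity`.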
Review Status
Actionable comments generated: 0
Configuration used: CodeRabbit UI
Files selected for processing (11)
- protocol/x/blocktime/keeper/keeper.go (1 hunks)
- protocol/x/blocktime/keeper/keeper_test.go (1 hunks)
- protocol/x/blocktime/module.go (1 hunks)
- protocol/x/ratelimit/keeper/keeper.go (5 hunks)
- protocol/x/ratelimit/keeper/keeper_test.go (3 hunks)
- protocol/x/ratelimit/module.go (2 hunks)
- protocol/x/ratelimit/types/expected_keepers.go (1 hunks)
- protocol/x/ratelimit/util/baseline.go (1 hunks)
- protocol/x/ratelimit/util/baseline_test.go (1 hunks)
- protocol/x/ratelimit/util/capacity.go (1 hunks)
- protocol/x/ratelimit/util/capacity_test.go (1 hunks)
Files skipped from review due to trivial changes (1)
- protocol/x/blocktime/module.go
Files skipped from review as they are similar to previous changes (3)
- protocol/x/ratelimit/keeper/keeper_test.go
- protocol/x/ratelimit/module.go
- protocol/x/ratelimit/types/expected_keepers.go
Additional comments: 13
protocol/x/ratelimit/util/baseline.go (1)
- 10-25: The implementation of the `GetBaseline` function appears to be logically sound and correctly utilizes the `lib.BigMax` function to calculate the baseline. The code is clean, well-documented, and follows best practices.

protocol/x/ratelimit/util/baseline_test.go (1)
- 13-74: The `TestGetBaseline` function is well-implemented with comprehensive test cases covering various scenarios. The assertions are correctly using `require.Equal` to ensure the expected behavior of the `GetBaseline` function.

protocol/x/ratelimit/util/capacity.go (1)
- 12-94: The `CalculateNewCapacityList` function is complex but well-documented and appears to be correctly implemented. The use of `big.Rat` and `big.Int` for mathematical operations is appropriate. Ensure that this logic is covered by comprehensive unit tests to validate the correctness of the calculations.

protocol/x/blocktime/keeper/keeper.go (1)
- 82-86: The addition of the `GetTimeSinceLastBlock` function is correctly implemented, with a clear function signature and appropriate logic to calculate the time difference between blocks using the Go `time` package.

protocol/x/blocktime/keeper/keeper_test.go (1)
- 247-288: The `TestGetTimeSinceLastBlock` function is well-implemented with comprehensive test cases covering various scenarios, including realistic values. The assertions are correctly using `require.Equal` to ensure the expected behavior of the `GetTimeSinceLastBlock` function.

protocol/x/ratelimit/keeper/keeper.go (7)
- 12-19: The addition of new imports `github.com/cosmos/cosmos-sdk/telemetry` and `github.com/dydxprotocol/v4-chain/protocol/lib/metrics` is noted. Ensure these packages are used appropriately in the code and that they do not introduce any security or performance issues.
- 24-27: The `Keeper` struct has been updated with a new field `blockTimeKeeper`. This change aligns with the PR objectives to track block time within the rate limiting logic.
- 40-48: The `NewKeeper` function has been modified to initialize the `Keeper` struct with the new `blockTimeKeeper` field. Ensure that all instances where `NewKeeper` is called have been updated to pass the new argument.

Verification successful
The search results from `protocol/x/ratelimit/keeper/keeper.go` and `protocol/app/app.go` show the declaration of the `NewKeeper` function in the `ratelimit` module and its instantiation in `app.go`. The instantiation line in `app.go`:

```go
app.RatelimitKeeper = *ratelimitmodulekeeper.NewKeeper(
```

indicates that `NewKeeper` is being called, but the output does not show the full list of arguments being passed to the function. To verify that the `blockTimeKeeper` argument is included in the call, we need to see the complete line of code where `NewKeeper` is instantiated in `app.go`.

Since the output is truncated, we need to extract the full line to confirm that the `blockTimeKeeper` argument is being passed correctly. Let's generate a script to display the full line of the `NewKeeper` instantiation in `app.go`.
The extended context from `app.go` shows the instantiation of various module keepers, including the `RatelimitKeeper`. However, the output is still truncated and does not show the full list of arguments being passed to the `NewKeeper` function for the `RatelimitKeeper`. To ensure that the `blockTimeKeeper` argument is included, we need to see the complete instantiation line.

To resolve this, we will generate a script to display the full instantiation line of the `RatelimitKeeper` in `app.go`, ensuring we capture enough context to see all the arguments.
The output from the script provides the full instantiation line of the `RatelimitKeeper` in `app.go`, which includes the `app.BlockTimeKeeper` as an argument:

```go
app.RatelimitKeeper = *ratelimitmodulekeeper.NewKeeper(
	appCodec,
	keys[ratelimitmoduletypes.StoreKey],
	app.BankKeeper,
	app.BlockTimeKeeper,
	// set the governance and delaymsg module accounts as the authority for conducting upgrades
	[]string{
		lib.GovModuleAddress.String(),
		delaymsgmoduletypes.ModuleAddress.String(),
	},
)
```

This confirms that the `NewKeeper` function in the `x/ratelimit` module is being called with the new `blockTimeKeeper` field as required by the updated function signature. Therefore, the review comment has been addressed correctly in the codebase.

- 149-154: The `SetLimitParams` function now uses the `blockTimeKeeper` and `currentTvl` parameters. Ensure that the logic for initializing the capacity list with the current baseline is correct and that the `currentTvl` is being used appropriately.

Scripts Executed

The following scripts were executed for the analysis:

Script:

```bash
#!/bin/bash
# Search for instances where NewKeeper is called to ensure the new argument is being passed.
ast-grep --lang go --pattern $'NewKeeper($_, $_, $_, $_, $_)'
```

Length of output: 62

Script:

```bash
#!/bin/bash
# Search for instances where NewKeeper is called with any number of arguments to ensure the new argument is being passed.
ast-grep --lang go --pattern $'NewKeeper($$$)'
```

Length of output: 47

Script:

```bash
#!/bin/bash
# Use ripgrep to search for the string "NewKeeper" to find where it is called in the codebase.
rg 'NewKeeper'
```

Length of output: 3968

Script:

```bash
#!/bin/bash
# Extract the lines where NewKeeper is called in the protocol/x/ratelimit/keeper/keeper.go file and the app.go file.
rg 'NewKeeper' protocol/x/ratelimit/keeper/keeper.go
rg 'NewKeeper' protocol/app/app.go
```

Length of output: 1574

Script:

```bash
#!/bin/bash
# Display the full line of code where NewKeeper is instantiated in app.go.
rg -C 3 'NewKeeper' protocol/app/app.go
```

Length of output: 6208

Script:

```bash
#!/bin/bash
# Display the full instantiation line of the RatelimitKeeper in app.go.
rg -C 10 'app.RatelimitKeeper = \*ratelimitmodulekeeper.NewKeeper' protocol/app/app.go
```

Length of output: 759
- 219-234: The `UpdateAllCapacitiesEndBlocker` function has been added to update the capacity for all limit params during the `EndBlocker`. Ensure that this function is called appropriately in the `EndBlock` routine and that it handles errors and edge cases correctly.

Verification successful
I will correct the syntax error in the ast-grep pattern and re-run the script to verify if the `UpdateAllCapacitiesEndBlocker` function is called within the `EndBlock` routine.

I will revise the script to use a single pattern that correctly matches the `EndBlock` function and checks for the call to `UpdateAllCapacitiesEndBlocker`.

I will now use the `rg` command to search for the string `UpdateAllCapacitiesEndBlocker` to confirm its presence and usage within the `EndBlock` function.
The search results confirm that the `UpdateAllCapacitiesEndBlocker` function is indeed called within the `EndBlock` routine, as indicated by its presence in the `protocol/x/ratelimit/module.go` file. Additionally, there are test functions in `capacity_test.go` and `keeper_test.go` that reference this function, which suggests that it is also covered by unit tests.

Based on this information, the review comment has been verified, and the `UpdateAllCapacitiesEndBlocker` function is appropriately called in the `EndBlock` routine.

- 236-283: The `updateCapacityForLimitParams` function has been added to calculate the current baseline for a denom and recover some amount of capacity towards the baseline. Ensure that the assumptions made within this function are documented and that the error handling is sufficient.

Scripts Executed

The following scripts were executed for the analysis:

Script:

```bash
#!/bin/bash
# Verify that UpdateAllCapacitiesEndBlocker is called in the EndBlock routine.
ast-grep --lang go --pattern $'EndBlock($_) { $$$ UpdateAllCapacitiesEndBlocker($_) $$$ }'
```

Length of output: 572

Script:

```bash
#!/bin/bash
# Correct the pattern syntax and verify that UpdateAllCapacitiesEndBlocker is called in the EndBlock routine.
ast-grep --lang go --pattern $'func($_) { $$$ UpdateAllCapacitiesEndBlocker($$$) $$$ }' --pattern 'func EndBlock'
```

Length of output: 281

Script:

```bash
#!/bin/bash
# Verify that UpdateAllCapacitiesEndBlocker is called in the EndBlock routine using a single pattern.
ast-grep --lang go --pattern $'func($_) { $$$ UpdateAllCapacitiesEndBlocker($$$) $$$ }'
```

Length of output: 88

Script:

```bash
#!/bin/bash
# Search for the string UpdateAllCapacitiesEndBlocker to confirm its usage in the EndBlock function.
rg "UpdateAllCapacitiesEndBlocker"
```

Length of output: 654
Verification successful
To verify the documentation and error handling within the `updateCapacityForLimitParams` function, I will generate a script to display the comments within the function and to examine the error handling code.
To properly review the documentation and error handling within the `updateCapacityForLimitParams` function, I will generate a corrected script to display the comments and the function's code.
The output from the `ast-grep` and `rg` commands provides the necessary context to verify the documentation and error handling within the `updateCapacityForLimitParams` function.

From the output, we can observe the following:
The function `updateCapacityForLimitParams` includes a comment that states it "calculates current baseline for a denom and recovers some amount of capacity towards baseline" and that it "Assumes that the `LimitParams` exist in state." This comment documents the assumption made within the function.

The error handling within the function is done through logging errors instead of panicking. This is seen in two places:
- If the length of the `capacityList` does not match the length of `limitParams.Limiters`, an error is logged, and the function returns without performing further actions.
- If `timeSinceLastBlock` is less than 0, which violates the invariant that the current block time should be greater than the previous block time, an error is logged, and the function returns.

Based on the provided context, the assumptions within the `updateCapacityForLimitParams` function are documented, and the error handling is done through logging without causing a panic in the `EndBlocker`. This approach is consistent with the intent to avoid disrupting the blockchain's block processing.

- 206-216: Telemetry code has been added to emit metrics for the new capacity list. Ensure that the telemetry is set up correctly and that it does not leak any sensitive information.

Scripts Executed

The following scripts were executed for the analysis:

Script:

```bash
#!/bin/bash
# Verify that the assumptions within updateCapacityForLimitParams are documented and that error handling is sufficient.
echo "Review the documentation and error handling within the updateCapacityForLimitParams function."
```

Length of output: 196

Script:

```bash
#!/bin/bash
# Display comments within the updateCapacityForLimitParams function to check for documentation of assumptions.
ast-grep --lang go --pattern $'func (k Keeper) updateCapacityForLimitParams($_, $_) { $$$ }' --source-file protocol/x/ratelimit/keeper/keeper.go
# Check for error handling within the updateCapacityForLimitParams function.
rg "updateCapacityForLimitParams" protocol/x/ratelimit/keeper/keeper.go
```

Length of output: 640

Script:

```bash
#!/bin/bash
# Display comments within the updateCapacityForLimitParams function to check for documentation of assumptions.
ast-grep --lang go --pattern $'func (k Keeper) updateCapacityForLimitParams($_, $_) { $$$ }'
# Check for error handling within the updateCapacityForLimitParams function.
rg -C 10 "updateCapacityForLimitParams" protocol/x/ratelimit/keeper/keeper.go
```

Length of output: 4502
Verification successful
To verify the correct setup of the telemetry code and ensure it does not leak sensitive information, I will need to:
- Review the implementation of the telemetry code to confirm it adheres to best practices and the expected standards set by the Cosmos SDK, which is commonly used in blockchain protocols.
- Check the data being emitted by the telemetry code to ensure it does not include sensitive information such as private keys, user identities, or any other data that should remain confidential.

I will generate a script to search for the usage of the telemetry code within the `keeper.go` file and review the related code to ensure it is set up correctly and does not leak sensitive information.
To further verify the correct setup of the telemetry code and ensure it does not leak sensitive information, I will generate a script to display the entire context around the usage of the `telemetry.SetGaugeWithLabels` function within the `keeper.go` file. This will allow me to review the data being passed to the telemetry system.
The telemetry code in `keeper.go` is set up to emit metrics for each capacity in the `denomCapacity.CapacityList`. It uses the `telemetry.SetGaugeWithLabels` function to set a gauge with labels that include non-sensitive information such as the rate limit denomination and the limiter index. The metric value is derived from the `BigInt` representation of the capacity, which is a numerical value and does not constitute sensitive information.

Based on the context provided, the telemetry setup appears to be correct and does not leak sensitive information.

Scripts Executed

The following scripts were executed for the analysis:

Script:

```bash
#!/bin/bash
# Verify that telemetry setup is correct and does not leak sensitive information.
echo "Review the telemetry code to ensure correct setup and no sensitive information leakage."
```

Length of output: 184

Script:

```bash
#!/bin/bash
# Search for the usage of telemetry in keeper.go and output the relevant code sections.
rg "telemetry" protocol/x/ratelimit/keeper/keeper.go
```

Length of output: 172

Script:

```bash
#!/bin/bash
# Display the context around the usage of telemetry.SetGaugeWithLabels in keeper.go
rg -C 10 "telemetry.SetGaugeWithLabels" protocol/x/ratelimit/keeper/keeper.go
```

Length of output: 614
protocol/x/ratelimit/util/capacity_test.go (1)
- 15-269: The `TestUpdateAllCapacitiesEndBlocker` function has been added to test the new capacity update logic. Ensure that the test cases cover a wide range of scenarios, including edge cases, and that the assertions are correct.

Verification successful
To verify the test cases in `TestUpdateAllCapacitiesEndBlocker`, we need to analyze the code to ensure that it covers a wide range of scenarios. Since the output from the shell script does not provide this information, we will need to generate a new script to check the test cases programmatically or continue with a manual review based on the code provided.

Let's proceed with generating a script to check the presence of various test scenarios within the `TestUpdateAllCapacitiesEndBlocker` function.
To fully verify the test scenarios in `TestUpdateAllCapacitiesEndBlocker`, we need to see the complete set of test cases. The output provided is truncated, so we will generate a new script to retrieve the entire test function.
The output provided is still truncated and does not show the complete set of test scenarios within the `TestUpdateAllCapacitiesEndBlocker` function. To ensure we have a comprehensive view of the test coverage, we need to retrieve more lines of the test function. Let's generate a new script to do so.
The output provided is still truncated and does not show the complete set of test scenarios within the `TestUpdateAllCapacitiesEndBlocker` function. However, the scenarios that are visible indicate a variety of test conditions, including different relationships between previous capacity and baseline, different decimal precisions, and different time intervals since the last block.

To ensure we have a comprehensive view of the test coverage, we need to retrieve more lines of the test function. Let's generate a new script to do so.
The output now includes the full `TestUpdateAllCapacitiesEndBlocker` function with all its test cases. The test cases cover various scenarios, including:
- "Prev capacity equals baseline"
- "Prev capacity < baseline"
- "prev capacity < baseline, 18 decimals"
- "Prev capacity = 0"
- "Prev capacity = 0, capacity_diff rounds down"
- "Prev capacity = 2 * baseline, capacity_diff rounds down"
- "baseline < prev capacity < 2 * baseline"
- "prev capacity > 2 * baseline + capacity < baseline"
These scenarios test different conditions related to the previous capacity in relation to the baseline, different decimal precisions, and different time intervals since the last block. The variety of test cases suggests comprehensive coverage of the functionality being tested.

Based on the provided test scenarios, it appears that the test function `TestUpdateAllCapacitiesEndBlocker` has been thoroughly implemented to cover a wide range of cases, which is crucial for ensuring the reliability of the capacity update logic in the `x/ratelimit` module.

Scripts Executed

The following scripts were executed for the analysis:

Script:

```bash
#!/bin/bash
# Verify that the test cases in TestUpdateAllCapacitiesEndBlocker cover a wide range of scenarios.
echo "Review the test cases in TestUpdateAllCapacitiesEndBlocker to ensure comprehensive coverage."
```

Length of output: 194

Script:

```bash
#!/bin/bash
# Check for the presence of different test scenarios in TestUpdateAllCapacitiesEndBlocker
rg "func TestUpdateAllCapacitiesEndBlocker" protocol/x/ratelimit/util/capacity_test.go -A 30
```

Length of output: 1276

Script:

```bash
#!/bin/bash
# Retrieve the full test function TestUpdateAllCapacitiesEndBlocker to verify all test scenarios
rg "func TestUpdateAllCapacitiesEndBlocker" protocol/x/ratelimit/util/capacity_test.go -A 100
```

Length of output: 3855

Script:

```bash
#!/bin/bash
# Retrieve the full test function TestUpdateAllCapacitiesEndBlocker to verify all test scenarios, ensuring we capture all test cases
rg "func TestUpdateAllCapacitiesEndBlocker" protocol/x/ratelimit/util/capacity_test.go -A 200
```

Length of output: 7613

Script:

```bash
#!/bin/bash
# Retrieve the full test function TestUpdateAllCapacitiesEndBlocker to verify all test scenarios, ensuring we capture all test cases
rg "func TestUpdateAllCapacitiesEndBlocker" protocol/x/ratelimit/util/capacity_test.go -A 300
```

Length of output: 9361
Review Status
Actionable comments generated: 0
Configuration used: CodeRabbit UI
Files selected for processing (2)
- protocol/x/ratelimit/keeper/keeper.go (5 hunks)
- protocol/x/ratelimit/util/capacity.go (1 hunks)
Files skipped from review as they are similar to previous changes (2)
- protocol/x/ratelimit/keeper/keeper.go
- protocol/x/ratelimit/util/capacity.go
```go
baseline := GetBaseline(bigTvl, limiter)

capacityMinusBaseline := new(big.Int).Sub(
	prevCapapcityList[i].BigInt(), // array access is safe because of input invariant
```
The input invariant is not checked in this function, which I think is a bit of an antipattern.
Would it make more sense to just have the length of `newCapacityList` equal the length of the `limitParams.Limiters`? In the case that the `prevCapacityList` does not match in length, then ignore the old values (i.e. set all new values to the baseline of the limiter).

At second thought, I believe this invariant is expected to hold in `UpdateCapacityEndBlocker`, because `SetLimitParams` also atomically overwrites the `DenomCapacity` list.

So I moved the invariant check from the caller function into this function, and it returns an error if the invariant fails.
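For reference, a sketch of the relocated check; the error type and message format mirror `ErrMismatchedCapacityLimitersLength` as it appears in this PR's tests, while the surrounding function signature is assumed:

```go
if len(limitParams.Limiters) != len(prevCapacityList) {
	return nil, errorsmod.Wrapf(
		types.ErrMismatchedCapacityLimitersLength,
		"denom = %v, len(limiters) = %v, len(prevCapacityList) = %v",
		limitParams.Denom,
		len(limitParams.Limiters),
		len(prevCapacityList),
	)
}
```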
```go
periodMilli := new(big.Int).Mul(
	new(big.Int).SetUint64(uint64(limiter.PeriodSec)),
	big.NewInt(1000),
)
```
Wait, why are we even using `PeriodSec`, which appears to be an integer? Why not just use a type that converts to `time.Duration` like we do elsewhere in protos?
Good point, `period` should be in `time.Duration`. Updated proto and implementation accordingly.
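With that change, the millisecond math from the earlier thread can lean on `time.Duration` directly; a sketch with assumed variable names:

```go
// time_since_last_block / period, built from Duration values directly.
operandR := new(big.Rat).SetFrac(
	big.NewInt(timeSinceLastBlock.Milliseconds()),
	big.NewInt(limiter.Period.Milliseconds()),
)
```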
```go
capacityDiff := new(big.Rat).Mul(
	operandL,
	operandR,
)

bigRatcapacityMinusBaseline := new(big.Rat).SetInt(capacityMinusBaseline)

if new(big.Rat).Abs(bigRatcapacityMinusBaseline).Cmp(capacityDiff) < 0 {
```
```diff
-capacityDiff := new(big.Rat).Mul(
-	operandL,
-	operandR,
-)
-bigRatcapacityMinusBaseline := new(big.Rat).SetInt(capacityMinusBaseline)
-if new(big.Rat).Abs(bigRatcapacityMinusBaseline).Cmp(capacityDiff) < 0 {
+capacityDiffRat := new(big.Rat).Mul(operandL, operandR)
+capacityDiff := lib.BigRatRound(capacityDiffRat, false) // rounds down
+if new(big.Int).Abs(capacityMinusBaseline).Cmp(capacityDiff) <= 0 {
```
Also remove the `lib.BigRatRound`s below.
Done.
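Putting the committed suggestion together, the clamping step might read roughly as follows; variable names are assumed, and the intent matches the earlier comment that `capacity` trends toward `baseline` by `capacity_diff` without crossing it:

```go
capacityDiffRat := new(big.Rat).Mul(operandL, operandR)
capacityDiff := lib.BigRatRound(capacityDiffRat, false) // rounds down

var newCapacity *big.Int
if new(big.Int).Abs(capacityMinusBaseline).Cmp(capacityDiff) <= 0 {
	// Within one recovery step of baseline: snap to baseline.
	newCapacity = new(big.Int).Set(baseline)
} else if capacityMinusBaseline.Sign() > 0 {
	// Above baseline: recover downward.
	newCapacity = new(big.Int).Sub(prevCapacity, capacityDiff)
} else {
	// Below baseline: recover upward.
	newCapacity = new(big.Int).Add(prevCapacity, capacityDiff)
}
```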
```go
@@ -259,26 +259,25 @@ func (k Keeper) updateCapacityForLimitParams(
	tvl := k.bankKeeper.GetSupply(ctx, limitParams.Denom)

	capacityList := k.GetDenomCapacity(ctx, limitParams.Denom).CapacityList
	if len(capacityList) != len(limitParams.Limiters) {
```
Moved this check into `CalculateNewCapacityList`, which throws an error.
Review Status
Actionable comments generated: 2
Configuration used: CodeRabbit UI
Files ignored due to filter (2)
- protocol/x/ratelimit/types/limit_params.pb.go
- protocol/x/ratelimit/types/query.pb.go
Files selected for processing (10)
- indexer/packages/v4-protos/src/codegen/dydxprotocol/ratelimit/limit_params.ts (6 hunks)
- indexer/packages/v4-protos/src/codegen/dydxprotocol/ratelimit/query.ts (5 hunks)
- proto/dydxprotocol/ratelimit/limit_params.proto (2 hunks)
- proto/dydxprotocol/ratelimit/query.proto (2 hunks)
- protocol/x/ratelimit/keeper/keeper.go (5 hunks)
- protocol/x/ratelimit/keeper/keeper_test.go (7 hunks)
- protocol/x/ratelimit/types/errors.go (1 hunks)
- protocol/x/ratelimit/util/baseline_test.go (1 hunks)
- protocol/x/ratelimit/util/capacity.go (1 hunks)
- protocol/x/ratelimit/util/capacity_test.go (1 hunks)
Additional comments: 33
protocol/x/ratelimit/types/errors.go (1)
- 19-23: The addition of `ErrMismatchedCapacityLimitersLength` with code 1003 is correctly implemented and follows the existing pattern for error registration in the module.

proto/dydxprotocol/ratelimit/limit_params.proto (2)
- 5-5: The import of "google/protobuf/duration.proto" is correctly added to support the new `Duration` type for the `period` field.
- 21-26: The change from `period_sec` to `period` of type `google.protobuf.Duration` is correctly implemented and aligns with the PR objectives to use a more appropriate type for time duration.

proto/dydxprotocol/ratelimit/query.proto (2)
- 7-7: The import of "google/protobuf/duration.proto" is correctly added to support the new `Duration` type for the `period` field in the `CapacityResult` message.
- 40-41: The change from `period_sec` to `period` of type `google.protobuf.Duration` in the `CapacityResult` message is correctly implemented and aligns with the PR objectives to use a more appropriate type for time duration.

protocol/x/ratelimit/util/baseline_test.go (1)
- 23-26: The test cases in `TestGetBaseline` have been updated to use the `Duration` type instead of an integer for the `Period` field. This change is consistent with the updates to the `Limiter` struct.

protocol/x/ratelimit/util/capacity.go (2)
- 41-50: The error handling for mismatched lengths between `limitParams.Limiters` and `prevCapacityList` is correctly implemented, using `errorsmod.Wrapf` to provide a detailed error message.
- 52-99: The logic within `CalculateNewCapacityList` for updating the capacity list based on the time since the last block and the limit parameters is complex but appears to be correctly implemented. The use of big integers and rationals is appropriate for the precision required in financial calculations.

indexer/packages/v4-protos/src/codegen/dydxprotocol/ratelimit/limit_params.ts (6)
- 1-1: The import of `Duration` and `DurationSDKType` from the `google/protobuf/duration` module is correctly added to support the new `Duration` type for the `period` field.
- 38-41: The replacement of the `periodSec` property with `period` of type `Duration` in the `Limiter` interface is correctly implemented and aligns with the protobuf changes.
- 138-138: In the `createBaseLimiter` function, initializing the `period` as `undefined` is correct and aligns with the changes made to the `Limiter` interface.
- 146-147: The conditional encoding of the `period` field using `Duration.encode` is correctly implemented and checks for `undefined` before attempting to encode.
- 170-171: The decoding logic for the `period` field using `Duration.decode` is correctly implemented in the `decode` function of the `Limiter` object.
- 193-193: The `fromPartial` function of the `Limiter` object correctly handles the assignment of the `period` field using `Duration.fromPartial`.

indexer/packages/v4-protos/src/codegen/dydxprotocol/ratelimit/query.ts (6)
- 2-2: The import of `Duration` and `DurationSDKType` from the `google/protobuf/duration` module is correctly added to support the new `Duration` type for the `period` field in the `CapacityResult` interface.
- 50-50: The change of the `periodSec` property to `period` in the `CapacityResult` interface is correctly implemented and aligns with the protobuf changes.
- 210-210: In the `createBaseCapacityResult` function, initializing the `period` as `undefined` is correct and aligns with the changes made to the `CapacityResult` interface.
- 217-218: The conditional encoding of the `period` field using `Duration.encode` is correctly implemented and checks for `undefined` before attempting to encode.
- 238-238: The decoding logic for the `period` field using `Duration.decode` is correctly implemented in the `decode` function of the `CapacityResult` object.
- 256-256: The `fromPartial` function of the `CapacityResult` object correctly handles the assignment of the `period` field using `Duration.fromPartial`.

protocol/x/ratelimit/keeper/keeper.go (7)
- 25-28: The addition of the `blockTimeKeeper` field to the `Keeper` struct is correctly implemented and aligns with the PR objectives to track block time within the rate limiting logic.
- 41-49: The `NewKeeper` function has been correctly updated to include the `blockTimeKeeper` parameter and initialize the `Keeper` struct with it.
- 150-155: The use of `ratelimitutil.GetBaseline` in the `SetLimitParams` function is correct and reflects the changes made to the `Limiter` struct and associated utility functions.
- 207-217: The telemetry emission for the new capacity list in the `SetDenomCapacity` function is correctly implemented, using appropriate labels and metrics.
- 220-249: The `UpdateAllCapacitiesEndBlocker` function is correctly implemented to update the capacity for all limit params at the end of each block, aligning with the PR objectives.
- 251-284: The `updateCapacityForLimitParams` function is correctly implemented, assuming that the `LimitParams` exist in state and updating the capacity list based on the time since the last block.
- 256-256: The previous comment chain indicates that the function `updateCapacityForLimitParams` was too large and has been broken down according to a suggestion. Since the function is still quite large, it would be beneficial to verify that the refactoring has been completed as intended.

protocol/x/ratelimit/util/capacity_test.go (3)
- 26-285: The calculations for the expected capacity list in the test cases appear to be correct and align with the described logic for rate limiting. Constants and calculations are used appropriately to simulate the expected behavior of the `UpdateAllCapacitiesEndBlocker` function. The use of `big_testutil` for handling big integers with high precision is noted and seems appropriate for the context of blockchain and token calculations.
- 254-285: The error handling test case correctly anticipates a mismatch in the lengths of the capacity list and the limiters. The use of `errorsmod.Wrapf` to provide detailed error messages is a good practice, as it aids in debugging. Ensure that the error message is consistent with the actual error condition and provides enough context for troubleshooting.
- 290-310: The use of `require` assertions is appropriate for these test cases. It ensures that the test will stop if a condition is not met, which is suitable for conditions that must be true for the test to proceed meaningfully. The assertions are used correctly to compare the expected and actual capacity lists and to check for errors when expected.

protocol/x/ratelimit/keeper/keeper_test.go (3)
- 23-23: The addition of the constant `testDenom2` is noted and seems to be in line with the existing pattern of constants for test denominations.
- 78-83: The `TestSetGetLimitParams_Success` function has been modified to use `time.Duration` instead of seconds for the `Period` field in `types.Limiter`. This is a positive change for clarity and type safety.
- 273-797: > Note: This review was outside the patches, so it was mapped to the patch with the greatest overlap. Original lines [276-845]

  The `TestUpdateAllCapacitiesEndBlocker` function has been significantly modified to handle `time.Time` instead of `int64` for block times and to use `time.Duration` for periods. The test cases have been updated accordingly, and new test cases have been added to cover various scenarios related to capacity updates. It's important to ensure that these changes are consistent with the new logic for updating capacities at the end of each block. The use of `time.Time` and `time.Duration` is a good practice for time-related operations, providing type safety and clarity.

  However, there are a couple of test cases labeled as "(Error)" which seem to simulate error conditions, such as when the current block time is equal to the previous block time, and when the lengths of limiters and capacity lists do not match. It's crucial to ensure that these error conditions are handled gracefully in the actual implementation and that appropriate error handling or logging is in place.

  Additionally, the test cases make use of precise calculations for expected values (e.g., lines 383-384, 447-450). It's important to verify that these calculations are correct and that they accurately reflect the intended logic of the capacity update mechanism.

  Lastly, the test cases should be reviewed to ensure that they cover all relevant edge cases and that the assertions are correctly validating the expected outcomes.
```go
			},
		},
	},
	timeSinceLastBlock: time.Second + 150*time.Millisecond, // 1.15 second
	prevCapapcityList: []dtypes.SerializableInt{
		dtypes.NewInt(329_000_000_000),
		dtypes.NewInt(3_500_000_000_000),
	},
	expectedCapacityList: []dtypes.SerializableInt{
		dtypes.NewInt(328_936_111_112),   // recovered by 1.15/3600 * 200k ~= 63.89
		dtypes.NewInt(3_499_973_379_630), // recovered by 1.15/86400 * 2M ~= 26.62
	},
},
"prev capacity > 2 * baseline + capacity < baseline": {
	bigTvl: big.NewInt(20_000_000_000_000), // 20M token (assuming 6 decimals)
	limitParams: types.LimitParams{
		Denom: testDenom,
		Limiters: []types.Limiter{
			// baseline = 200k tokens
			{
				Period:          3_600 * time.Second,
				BaselineMinimum: dtypes.NewInt(100_000_000_000), // 100k tokens (assuming 6 decimals)
				BaselineTvlPpm:  10_000,                         // 1%
			},
			// baseline = 2M tokens
			{
				Period:          86_400 * time.Second,
				BaselineMinimum: dtypes.NewInt(1_000_000_000_000), // 1M tokens (assuming 6 decimals)
				BaselineTvlPpm:  100_000,                          // 10%
			},
		},
	},
	timeSinceLastBlock: time.Second + 150*time.Millisecond, // 1.15 second
	prevCapapcityList: []dtypes.SerializableInt{
		dtypes.NewInt(629_000_000_000),   // > 2 * baseline
		dtypes.NewInt(1_200_000_000_000), // < baseline
	},
	expectedCapacityList: []dtypes.SerializableInt{
		dtypes.NewInt(628_862_958_334),   // recovered by 1.15/3600 * (629k - 200k) ~= 137.04
		dtypes.NewInt(1_200_026_620_370), // recovered by 1.15/86400 * 2M ~= 26.62
	},
},
"Error: len(capacityList) != len(limiters)": {
	bigTvl: big.NewInt(25_000_000_000_000), // 25M token (assuming 6 decimals)
	limitParams: types.LimitParams{
		Denom: testDenom,
		Limiters: []types.Limiter{
			// baseline = 25M * 1% = 250k tokens
			{
				Period:          3_600 * time.Second,
				BaselineMinimum: dtypes.NewInt(100_000_000_000), // 100k tokens (assuming 6 decimals)
				BaselineTvlPpm:  10_000,                         // 1%
			},
			// baseline = 25M * 10% = 2.5M tokens
			{
				Period:          86_400 * time.Second,
				BaselineMinimum: dtypes.NewInt(1_000_000_000_000), // 1M tokens (assuming 6 decimals)
				BaselineTvlPpm:  100_000,                          // 10%
			},
		},
	},
	timeSinceLastBlock: time.Second + 90*time.Millisecond, // 1.09 second
	prevCapapcityList: []dtypes.SerializableInt{
		dtypes.NewInt(99_000_000_000),
		dtypes.NewInt(990_000_000_000),
		dtypes.NewInt(0),
	},
	expectedErr: errorsmod.Wrapf(
		types.ErrMismatchedCapacityLimitersLength,
		"denom = %v, len(limiters) = %v, len(prevCapacityList) = %v",
		testDenom,
		2,
		3,
	),
},
}

// Run tests.
for name, tc := range tests {
	t.Run(name, func(t *testing.T) {
		newCapacityList, err := util.CalculateNewCapacityList(
			tc.bigTvl,
			tc.limitParams,
			tc.prevCapapcityList,
			tc.timeSinceLastBlock,
		)

		if tc.expectedErr != nil {
			require.Error(t, tc.expectedErr, err)
			return
		}

		require.NoError(t, err)
		require.Equal(t,
			tc.expectedCapacityList,
			newCapacityList,
		)
	})
}
}
```
The test function `TestUpdateAllCapacitiesEndBlocker` is comprehensive and covers a variety of scenarios, including edge cases and error handling. The use of table-driven tests is a good practice for clarity and maintainability. However, there is a typo in the field name `prevCapapcityList`, which should be corrected to `prevCapacityList` to maintain consistency and avoid potential confusion.

```diff
- prevCapapcityList
+ prevCapacityList
```
Committable suggestion (omitted here): the full TestUpdateAllCapacitiesEndBlocker test function, reproduced with every occurrence of prevCapapcityList renamed to prevCapacityList.
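For readers checking the expected values by hand, the numbers above are consistent with a single recovery rule: each limiter's capacity drifts toward its baseline by max(baseline, |capacity - baseline|) * timeSinceLastBlock / period, rounded down and clamped so the capacity never crosses the baseline. Below is a minimal standalone sketch of that rule; the function name and the millisecond-based big.Int arithmetic are assumptions for illustration, not the module's implementation.

package main

import (
	"fmt"
	"math/big"
	"time"
)

// newCapacity drifts capacity toward baseline by
// floor(max(baseline, |capacity - baseline|) * dt / period),
// clamped so the result never crosses the baseline.
func newCapacity(capacity, baseline *big.Int, dt, period time.Duration) *big.Int {
	gap := new(big.Int).Sub(baseline, capacity) // positive when below baseline
	absGap := new(big.Int).Abs(gap)

	rate := new(big.Int).Set(baseline) // rate = max(baseline, |gap|)
	if absGap.Cmp(baseline) > 0 {
		rate.Set(absGap)
	}

	diff := new(big.Int).Mul(rate, big.NewInt(dt.Milliseconds()))
	diff.Quo(diff, big.NewInt(period.Milliseconds())) // rounds down

	if diff.Cmp(absGap) > 0 { // never overshoot the baseline
		diff.Set(absGap)
	}
	if gap.Sign() < 0 { // above baseline: decay downward
		diff.Neg(diff)
	}
	return new(big.Int).Add(capacity, diff)
}

func main() {
	// "Prev capacity < baseline": 99k tokens, 250k baseline, dt = 1.09s, period = 1h.
	fmt.Println(newCapacity(
		big.NewInt(99_000_000_000),
		big.NewInt(250_000_000_000),
		time.Second+90*time.Millisecond,
		3_600*time.Second,
	)) // 99075694444, matching the expected 99_075_694_444
}

Run against the other cases, the same rule reproduces the "Prev capacity = 0", "2 * baseline", and "> 2 * baseline" expectations, including the floor rounding the test names call out.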
				},
			},
			timeSinceLastBlock: 12 * time.Second, // 12 second
			prevCapapcityList: []dtypes.SerializableInt{
				dtypes.NewInt(200_000_000_000),
			},
			expectedCapacityList: []dtypes.SerializableInt{
				dtypes.NewInt(199_666_666_667), // recovered by 12/3600 * 100k ~= 333.333
			},
		},
		"baseline < prev capacity < 2 * baseline": {
			bigTvl: big.NewInt(20_000_000_000_000), // 20M token (assuming 6 decimals)
			limitParams: types.LimitParams{
				Denom: testDenom,
				Limiters: []types.Limiter{
					// baseline = 200k tokens
					{
						Period:          3_600 * time.Second,
						BaselineMinimum: dtypes.NewInt(100_000_000_000), // 100k tokens (assuming 6 decimals)
						BaselineTvlPpm:  10_000, // 1%
					},
					// baseline = 2M tokens
					{
						Period:          86_400 * time.Second,
						BaselineMinimum: dtypes.NewInt(1_000_000_000_000), // 1M tokens (assuming 6 decimals)
						BaselineTvlPpm:  100_000, // 10%
					},
				},
			},
			timeSinceLastBlock: time.Second + 150*time.Millisecond, // 1.15 second
			prevCapapcityList: []dtypes.SerializableInt{
				dtypes.NewInt(329_000_000_000),
				dtypes.NewInt(3_500_000_000_000),
			},
			expectedCapacityList: []dtypes.SerializableInt{
				dtypes.NewInt(328_936_111_112),   // recovered by 1.15/3600 * 200k ~= 63.89
				dtypes.NewInt(3_499_973_379_630), // recovered by 1.15/86400 * 2M ~= 26.62
			},
		},
		"prev capacity > 2 * baseline + capacity < baseline": {
			bigTvl: big.NewInt(20_000_000_000_000), // 20M token (assuming 6 decimals)
			limitParams: types.LimitParams{
				Denom: testDenom,
				Limiters: []types.Limiter{
					// baseline = 200k tokens
					{
						Period:          3_600 * time.Second,
						BaselineMinimum: dtypes.NewInt(100_000_000_000), // 100k tokens (assuming 6 decimals)
						BaselineTvlPpm:  10_000, // 1%
					},
					// baseline = 2M tokens
					{
						Period:          86_400 * time.Second,
						BaselineMinimum: dtypes.NewInt(1_000_000_000_000), // 1M tokens (assuming 6 decimals)
						BaselineTvlPpm:  100_000, // 10%
					},
				},
			},
			timeSinceLastBlock: time.Second + 150*time.Millisecond, // 1.15 second
			prevCapapcityList: []dtypes.SerializableInt{
				dtypes.NewInt(629_000_000_000),   // > 2 * baseline
				dtypes.NewInt(1_200_000_000_000), // < baseline
			},
			expectedCapacityList: []dtypes.SerializableInt{
				dtypes.NewInt(628_862_958_334),   // recovered by 1.15/3600 * (629k - 200k) ~= 137.04
				dtypes.NewInt(1_200_026_620_370), // recovered by 1.15/86400 * 2M ~= 26.62
			},
		},
		"Error: len(capacityList) != len(limiters)": {
			bigTvl: big.NewInt(25_000_000_000_000), // 25M token (assuming 6 decimals)
			limitParams: types.LimitParams{
				Denom: testDenom,
				Limiters: []types.Limiter{
					// baseline = 25M * 1% = 250k tokens
					{
						Period:          3_600 * time.Second,
						BaselineMinimum: dtypes.NewInt(100_000_000_000), // 100k tokens (assuming 6 decimals)
						BaselineTvlPpm:  10_000, // 1%
					},
					// baseline = 25M * 10% = 2.5M tokens
					{
						Period:          86_400 * time.Second,
						BaselineMinimum: dtypes.NewInt(1_000_000_000_000), // 1M tokens (assuming 6 decimals)
						BaselineTvlPpm:  100_000, // 10%
					},
				},
			},
			timeSinceLastBlock: time.Second + 90*time.Millisecond, // 1.09 second
			prevCapapcityList: []dtypes.SerializableInt{
				dtypes.NewInt(99_000_000_000),
				dtypes.NewInt(990_000_000_000),
				dtypes.NewInt(0),
			},
			expectedErr: errorsmod.Wrapf(
				types.ErrMismatchedCapacityLimitersLength,
				"denom = %v, len(limiters) = %v, len(prevCapacityList) = %v",
				testDenom,
				2,
				3,
			),
The test cases are well-structured and appear logically correct. However, the typo in the field name prevCapapcityList is repeated across multiple test cases; each occurrence should be corrected to prevCapacityList.

- prevCapapcityList
+ prevCapacityList
Committable suggestion (omitted here): the full tests table from TestUpdateAllCapacitiesEndBlocker, reproduced with every occurrence of prevCapapcityList renamed to prevCapacityList.
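As a quick sanity check on the suggestion's expected values, each recovery amount can be reproduced with exact rational arithmetic. A standalone sketch for one of them follows; the floor rounding is inferred from the expected values rather than taken from the module's code.

package main

import (
	"fmt"
	"math/big"
)

func main() {
	// "baseline < prev capacity < 2 * baseline", second limiter:
	// baseline = 2M tokens (2_000_000_000_000 at 6 decimals),
	// dt = 1.15s and period = 86_400s, both expressed in milliseconds.
	diff := new(big.Rat).Mul(
		big.NewRat(1_150, 86_400_000),
		new(big.Rat).SetInt64(2_000_000_000_000),
	)
	floorDiff := new(big.Int).Quo(diff.Num(), diff.Denom()) // floor for positive values
	fmt.Println(floorDiff) // 26620370

	prev := big.NewInt(3_500_000_000_000)
	fmt.Println(new(big.Int).Sub(prev, floorDiff)) // 3499973379630, the expected capacity
}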
Review Status
Actionable comments generated: 0
Configuration used: CodeRabbit UI
Files selected for processing (1)
- protocol/x/ratelimit/util/capacity_test.go (1 hunks)
Files skipped from review as they are similar to previous changes (1)
- protocol/x/ratelimit/util/capacity_test.go
Changelist
This PR implements the capacity update logic in the x/ratelimit EndBlocker. See more context in the tech spec.
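Concretely, the capacity update is driven by each limiter's baseline, which the test comments derive as max(baseline minimum, baseline TVL ppm * TVL). A minimal standalone sketch of that calculation (function and parameter names are illustrative, not the keeper's actual API):

package main

import (
	"fmt"
	"math/big"
)

// getBaseline sketches baseline = max(baselineMinimum, baselineTvlPpm * tvl / 1_000_000),
// where ppm is parts-per-million.
func getBaseline(tvl, baselineMinimum *big.Int, baselineTvlPpm uint32) *big.Int {
	scaled := new(big.Int).Mul(tvl, big.NewInt(int64(baselineTvlPpm)))
	scaled.Quo(scaled, big.NewInt(1_000_000))
	if scaled.Cmp(baselineMinimum) < 0 {
		return new(big.Int).Set(baselineMinimum)
	}
	return scaled
}

func main() {
	// 25M TVL at 6 decimals: 1% = 250k tokens, above the 100k minimum.
	fmt.Println(getBaseline(big.NewInt(25_000_000_000_000), big.NewInt(100_000_000_000), 10_000)) // 250000000000
	// 1M TVL: 1% = 10k tokens, so the 100k minimum applies instead.
	fmt.Println(getBaseline(big.NewInt(1_000_000_000_000), big.NewInt(100_000_000_000), 10_000)) // 100000000000
}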
Test Plan
Unit tests.

Author/Reviewer Checklist
- If this PR is state-breaking, add the state-breaking label.
- If this PR is breaking for the Indexer's Postgres database, add the indexer-postgres-breaking label.
- If this PR changes PrepareProposal or ProcessProposal, manually add the label proposal-breaking.
- If this PR is for a specific feature, add the label feature:[feature-name].
- If this PR should be backported, add the label backport/[branch-name].
- Otherwise, label the PR as refactor, chore, or bug.