
[CORE-824] x/ratelimit: Implement UpdateCapacityEndBlocker #941

Merged: 6 commits, Jan 12, 2024

Conversation

@teddyding teddyding (Contributor) commented Jan 9, 2024

Changelist

This PR implements the capacity update logic in the x/ratelimit EndBlocker. See more context in the tech spec.
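
A minimal sketch of the wiring this adds, for context (the final method name is UpdateAllCapacitiesEndBlocker per the review threads below; treat this as illustrative rather than a verbatim copy of module.go):

func (am AppModule) EndBlock(ctx context.Context) error {
	// Recover each denom's rate-limit capacity toward its baseline once per block.
	am.keeper.UpdateAllCapacitiesEndBlocker(sdk.UnwrapSDKContext(ctx))
	return nil
}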

Test Plan

Unit tests

Author/Reviewer Checklist

  • If this PR has changes that result in a different app state given the same prior state and transaction list, manually add the state-breaking label.
  • If the PR has breaking postgres changes to the indexer add the indexer-postgres-breaking label.
  • If this PR isn't state-breaking but has changes that modify behavior in PrepareProposal or ProcessProposal, manually add the label proposal-breaking.
  • If this PR is one of many that implement a specific feature, manually label them all feature:[feature-name].
  • If you wish for mergify-bot to automatically create a PR to backport your change to a release branch, manually add the label backport/[branch-name].
  • Manually add any of the following labels: refactor, chore, bug.

coderabbitai bot (Contributor) commented Jan 9, 2024

Walkthrough

The codebase has undergone a series of updates to integrate block time tracking into the rate limiting mechanism. The BlockTimeKeeper is now directly utilized within the RatelimitKeeper, with new constants added for rate limiting metrics. The Keeper struct and its constructor have been adjusted to include the blockTimeKeeper. Additionally, functions for updating capacity based on block time and limit parameters have been introduced. The end block logic in the module has been updated to reflect these changes, and new interfaces and tests ensure compatibility and correctness.

Changes

  • protocol/app/app.go, protocol/x/ratelimit/keeper/keeper.go, protocol/x/blocktime/keeper/keeper.go: Modified to use the BlockTimeKeeper directly within the RatelimitKeeper. Added new dependencies and adjusted functions.
  • protocol/lib/metrics/constants.go: Added constants for rate limiting.
  • protocol/x/ratelimit/keeper/keeper_test.go: Updated tests to reflect changes in Keeper and capacity management logic.
  • protocol/x/ratelimit/module.go: Updated the EndBlock function to include capacity-updating logic.
  • protocol/x/ratelimit/types/expected_keepers.go: Added the BlockTimeKeeper interface for simulations.
  • protocol/x/ratelimit/util/baseline.go, protocol/x/ratelimit/util/capacity.go: Introduced new files and functions for capacity and baseline calculation.
  • indexer/packages/v4-protos/src/codegen/dydxprotocol/ratelimit/limit_params.ts, indexer/packages/v4-protos/src/codegen/dydxprotocol/ratelimit/query.ts, proto/dydxprotocol/ratelimit/limit_params.proto, proto/dydxprotocol/ratelimit/query.proto: Updated protocol files to handle duration and period changes.
  • protocol/x/ratelimit/types/errors.go: Added a new error, ErrMismatchedCapacityLimitersLength.


@@ -588,10 +588,22 @@ func New(
transferModule := transfer.NewAppModule(app.TransferKeeper)
transferIBCModule := transfer.NewIBCModule(app.TransferKeeper)

app.BlockTimeKeeper = *blocktimemodulekeeper.NewKeeper(
Contributor Author

This code block was moved from below to before the app.RatelimitKeeper initialization.

@@ -122,17 +128,13 @@ func (k Keeper) ProcessDeposit(
// baseline = max(baseline_minimum, baseline_tvl_ppm * current_tvl)
func (k Keeper) GetBaseline(
ctx sdk.Context,
denom string,
currentSupply *big.Int,
Contributor Author

Changed the function signature, because wherever GetBaseline is called for a limiter, the caller could have already called bankKeeper.GetSupply() once for all limiters of a denom.

Contributor

Two questions:

  1. for my understanding, is the baseline the same across all denoms? what's the reasoning for this?
  2. nit: should we either call this currentTVL to match the formula in the comment or update the comment formula?

@teddyding teddyding (Contributor Author) Jan 9, 2024

  1. For my understanding, is the baseline the same across all denoms? what's the reasoning for this?

The baseline is not the same across denoms. It is a function of LimitParams and TVL (currentSupply). See definition here

nit: should we either call this currentTVL to match the formula in the comment or update the comment formula?

Updated terminology to TVL for consistency
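
For reference, a minimal sketch of the baseline formula under discussion, assuming BaselineTvlPpm is parts-per-million and that lib.BigMax returns the larger of two big.Ints (consistent with the reviewed util/baseline.go, though the exact code may differ):

func getBaseline(bigTvl *big.Int, limiter types.Limiter) *big.Int {
	// baseline_tvl_ppm * current_tvl / 1_000_000
	proportional := new(big.Int).Div(
		new(big.Int).Mul(bigTvl, big.NewInt(int64(limiter.BaselineTvlPpm))),
		big.NewInt(1_000_000),
	)
	// baseline = max(baseline_minimum, baseline_tvl_ppm * current_tvl)
	return lib.BigMax(limiter.BaselineMinimum.BigInt(), proportional)
}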

@coderabbitai coderabbitai bot (Contributor) left a comment

Review Status

Actionable comments generated: 0

Configuration used: CodeRabbit UI

Commits: files that changed from the base of the PR and between c829c1d and 5efe377.
Files selected for processing (6)
  • protocol/app/app.go (2 hunks)
  • protocol/lib/metrics/constants.go (1 hunks)
  • protocol/x/ratelimit/keeper/keeper.go (5 hunks)
  • protocol/x/ratelimit/keeper/keeper_test.go (3 hunks)
  • protocol/x/ratelimit/module.go (2 hunks)
  • protocol/x/ratelimit/types/expected_keepers.go (1 hunks)
Additional comments: 16
protocol/x/ratelimit/types/expected_keepers.go (1)
  • 15-17: The new BlockTimeKeeper interface with the GetPreviousBlockInfo method is correctly defined and follows Go's interface declaration best practices.
protocol/x/ratelimit/module.go (2)
  • 8-9: The addition of the cosmossdk.io/core/appmodule package to the import section is correct.

  • 155-156: The update to the EndBlock function to include a call to UpdateCapacityEndBlocker is correct, assuming that the UpdateCapacityEndBlocker method is implemented correctly and that error handling is managed upstream.

protocol/x/ratelimit/keeper/keeper.go (5)
  • 16-16: The addition of the github.com/dydxprotocol/v4-chain/protocol/lib/metrics import is correct.

  • 23-26: The addition of the blockTimeKeeper field to the Keeper struct is correct and follows Go's struct declaration best practices.

  • 39-47: The update to the NewKeeper function to accept a BlockTimeKeeper and initialize the Keeper struct with it is correct.

  • 224-239: The UpdateCapacityEndBlocker function correctly iterates through all limit params and calls UpdateCapacityForLimitParams for each. Ensure that the iteration and the subsequent updates are atomic and handle any potential errors appropriately.

  • 241-375: The UpdateCapacityForLimitParams function contains complex logic for updating capacity. Ensure that the mathematical operations are correct and that the time calculations are accurate and handle edge cases, such as negative time differences, appropriately.

protocol/lib/metrics/constants.go (3)
  • 402-402: The addition of the constant Capacity with the value "capacity" seems appropriate for the x/ratelimit module. Ensure that this constant is used consistently throughout the module and that its introduction does not conflict with any existing constants or variables.

  • 403-403: The constant RateLimitDenom with the value "rate_limit_denom" has been added. Verify that this constant is used in the context where denomination is required for rate limiting and that it aligns with the system's denomination conventions.

  • 404-404: The addition of LimiterIndex with the value "limiter_index" is noted. This constant should be used to reference a specific index in rate limiting structures or algorithms. Confirm that the usage of this constant is consistent and clear in its purpose within the rate limiting logic.

protocol/x/ratelimit/keeper/keeper_test.go (3)
  • 23-23: The addition of a new constant testDenom2 is noted. Ensure that this constant is used consistently throughout the test cases where applicable.

  • 278-342: The TestGetBaseline function has been updated with new test cases. It's important to ensure that the test cases cover a wide range of scenarios and that the expected outcomes are correct based on the logic of the GetBaseline function.

  • 275-865: > Note: This review was outside the patches, so it was mapped to the patch with the greatest overlap. Original lines [344-913]

The TestUpdateCapacityEndBlocker function has been significantly updated with new test cases. This function is critical as it tests the logic that updates the capacity at the end of each block. It's important to verify that the test cases cover all possible scenarios, including edge cases, and that the assertions correctly validate the expected behavior of the UpdateCapacityEndBlocker function.

  • Ensure that the test cases cover scenarios where the previous capacity is less than, equal to, and greater than the baseline.
  • Verify that the calculations for the expected capacity recovery are correct and that they adhere to the logic defined in the UpdateCapacityEndBlocker function.
  • Confirm that the test cases handle different denominations correctly, especially with the addition of testDenom2.
  • Check that the error cases, such as when the current block time is equal to the previous block time and when the lengths of limiters and capacity lists do not match, are properly tested and that the expected behavior is asserted.
Verification successful

The provided output from the script execution shows the full content of the TestUpdateCapacityEndBlocker function. The test cases within this function cover a variety of scenarios:

  1. "One denom, prev capacity equals baseline": Tests if the capacity remains the same when it is already at the baseline.
  2. "One denom, prev capacity < baseline": Tests the recovery of capacity when it is below the baseline.
  3. "One denom, prev capacity < baseline, 18 decimals": Similar to the previous test but with a different denomination precision.
  4. "One denom, prev capacity = 0": Tests the recovery from zero capacity.
  5. "One denom, baseline < prev capacity < 2 * baseline": Tests the recovery when the capacity is between the baseline and double the baseline.
  6. "One denom, prev capacity > 2 * baseline": Tests the recovery when the capacity is greater than double the baseline.
  7. "Two denoms, mix of values from above cases": Tests the recovery with two different denominations and various capacities.
  8. "(Error) one denom, current block time = prev block time, no changes applied": Tests the error case where the block times are the same.
  9. "(Error) one denom, len(limiters) != len(capacityList)": Tests the error case where the lengths of limiters and capacity lists do not match.

The test cases cover scenarios where the previous capacity is less than, equal to, and greater than the baseline. They also handle different denominations correctly, including the addition of testDenom2. The error cases are properly tested, with assertions checking for unchanged capacities when the block times are the same and handling mismatches in the lengths of limiters and capacity lists.

Based on the provided output, the review comment is consistent with the codebase, as the test cases and assertions align with the expectations outlined in the comment. The calculations for the expected capacity recovery are present and appear to be correct, adhering to the logic defined in the UpdateCapacityEndBlocker function.

Scripts Executed

The following scripts were executed for the analysis:


Script:

#!/bin/bash
# Verify that the test cases for UpdateCapacityEndBlocker cover all scenarios.
rg --type go "TestUpdateCapacityEndBlocker" protocol/x/ratelimit/keeper/keeper_test.go
# Verify that the calculations for expected capacity recovery are correct.
rg --type go "expectedDenomCapacityList" protocol/x/ratelimit/keeper/keeper_test.go
# Ensure that the error cases are properly tested.
rg --type go "(Error)" protocol/x/ratelimit/keeper/keeper_test.go

Length of output: 1031



Script:

#!/bin/bash
# Extract the content of the TestUpdateCapacityEndBlocker function to verify the test cases and calculations.
rg --type go -A 30 "func TestUpdateCapacityEndBlocker" protocol/x/ratelimit/keeper/keeper_test.go

Length of output: 1131



Script:

#!/bin/bash
# Extract the full content of the TestUpdateCapacityEndBlocker function to verify the test cases and calculations.
rg --type go -A 1000 "func TestUpdateCapacityEndBlocker" protocol/x/ratelimit/keeper/keeper_test.go

Length of output: 17802

protocol/app/app.go (2)
  • 591-606: The changes to app.BlockTimeKeeper and app.RatelimitKeeper initialization seem to align with the PR objectives and the AI-generated summary. The blockTimeModule initialization has been removed, and instead, app.BlockTimeKeeper is being directly used in the app.RatelimitKeeper declaration. This change should be carefully tested to ensure that the BlockTimeKeeper is correctly integrated into the RatelimitKeeper and that the removal of blockTimeModule initialization does not have unintended side effects elsewhere in the application.

  • 591-606: It's important to ensure that the new BlockTimeKeeper field in the Keeper struct is properly initialized everywhere it's used. This change could potentially affect various parts of the application, and all usages need to be consistent with the new initialization logic.

@ttl33 ttl33 (Contributor) left a comment

Overall LGTM. Will rely on Brendan to double-check the math.


func (k Keeper) UpdateCapacityForLimitParams(
ctx sdk.Context,
limitParams types.LimitParams,
) {
Contributor

nit: this is a 100+ line logic method. Is there room to break this into smaller helper methods? For example:

  1. GetCapacity
  2. GetCapacityDiff
  3. CalcNewCapacity

Contributor Author

Agreed this function is too large. Broke it down according to @BrendanChou's suggestion below!

// On a high level, `capacity` trends towards `baseline` by `capacity_diff` but does not “cross” it.
func (k Keeper) UpdateCapacityForLimitParams(
ctx sdk.Context,
limitParams types.LimitParams,
Contributor

For safety, should we validate that LimitParams and DenomCapacity values exist instead of assuming they do? Does it make sense to validate their values too?

@teddyding teddyding (Contributor Author) Jan 9, 2024

In general, I'm not a fan of passing values from function A to function B and then doing validation in function B. The better pattern is to clearly document assumptions.

Similarly, LimitParams is an input to updateCapacityForLimitParams, so it is the caller's responsibility to pass in a correct value.

We actually do not assume DenomCapacity exists in this function. If it does not, GetDenomCapacity will return an empty list, causing the code to throw an error in the if len(capacityList) != len(limitParams.Limiters) condition. Updated the comment to reflect this.

Contributor Author

Update: checking this in CalculateNewCapacityList

Comment on lines 279 to 285
blockTimeMilli := ctx.BlockTime().UnixMilli()
prevBlockInfo := k.blockTimeKeeper.GetPreviousBlockInfo(ctx)
prevBlockTimeMilli := prevBlockInfo.Timestamp.UnixMilli()
timeSinceLastBlockMilli := new(big.Int).Sub(
big.NewInt(blockTimeMilli),
big.NewInt(prevBlockTimeMilli),
)
Contributor

Feels like this should be a lib function

Contributor

Could just be a keeper function on the blockTimeKeeper as well

Contributor Author

Added blockTimeKeeper.GetTimeSinceLastBlock
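
A plausible shape for that helper, based on the notes in this review that it lives on the blocktime keeper and uses the Go time package (the exact body may differ from the PR):

func (k Keeper) GetTimeSinceLastBlock(ctx sdk.Context) time.Duration {
	// Current block time minus the previously committed block's timestamp.
	prevBlockInfo := k.GetPreviousBlockInfo(ctx)
	return ctx.BlockTime().Sub(prevBlockInfo.Timestamp)
}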

// Calculate right operand: `time_since_last_block / period`
periodMilli := new(big.Int).Mul(
new(big.Int).SetUint64(uint64(limiter.PeriodSec)),
big.NewInt(1000),
Contributor

nit: instead of using 1000 explicitly, would it make sense to use time lib?

duration := time.Duration(seconds) * time.Second
milliseconds := duration.Milliseconds()

Contributor Author

I think either could work. Since we eventually need to cast to big.Int, I actually like the current way better, which leaves no possibility of int overflow. Let me know if you disagree!
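
To make the trade-off concrete, a small hypothetical comparison of the two approaches (variable names are illustrative):

periodSec := uint64(86_400) // e.g. one day

// time package approach: compact, but passes through time.Duration, which
// is an int64 of nanoseconds, so it can overflow for very large second
// counts (on the order of 292 years).
ms := (time.Duration(periodSec) * time.Second).Milliseconds()

// big.Int approach used in the PR: no fixed-width intermediate, so no
// overflow is possible.
periodMilli := new(big.Int).Mul(
	new(big.Int).SetUint64(periodSec),
	big.NewInt(1000),
)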

newCapacityList[i] = dtypes.NewIntFromBigInt(
new(big.Int).Add(
capacityList[i].BigInt(),
lib.BigRatRound(capacityDiff, false), // rounds down `capacity_diff`
Contributor

nit: didn't look at the unit tests closely; can you confirm that we have unit tests that verify the round-down behavior for this method?

Contributor Author

Yes we do!
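
For readers unfamiliar with lib.BigRatRound, a hypothetical stand-in for the round-down case, assuming "rounds down" means toward negative infinity (check the actual lib implementation before relying on this):

// bigRatRoundDown approximates lib.BigRatRound(r, false): round a big.Rat
// toward negative infinity, yielding a big.Int.
func bigRatRoundDown(r *big.Rat) *big.Int {
	out := new(big.Int).Quo(r.Num(), r.Denom()) // truncates toward zero
	// For negative non-integers, truncation toward zero overshoots the
	// floor by one, so step down once more.
	if r.Sign() < 0 && !r.IsInt() {
		out.Sub(out, big.NewInt(1))
	}
	return out
}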

Comment on lines 360 to 374
// Emit telemetry for the new capacity.
telemetry.SetGaugeWithLabels(
[]string{types.ModuleName, metrics.Capacity},
metrics.GetMetricValueFromBigInt(newCapacityList[i].BigInt()),
[]gometrics.Label{
metrics.GetLabelForStringValue(metrics.RateLimitDenom, limitParams.Denom),
metrics.GetLabelForIntValue(metrics.LimiterIndex, i),
},
)
}

k.SetDenomCapacity(ctx, types.DenomCapacity{
Denom: limitParams.Denom,
CapacityList: newCapacityList,
})
Contributor

Instead of having a function that:

  • Determines the new value
  • Emits telemetry
  • Writes the new value to state

I suggest the following to be more clear, modular, and unit-testable:

  • One function to determine the new value, but not update it in state (call the state update outside of this function)
  • Move the telemetry into SetDenomCapacity so that it automatically emits on writes (and you don't have to emit telemetry before every call to SetDenomCapacity)
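
A sketch of that second suggestion, with the label and metric names copied from the snippet above (the surrounding keeper plumbing is assumed, and the state write is elided):

func (k Keeper) SetDenomCapacity(ctx sdk.Context, denomCapacity types.DenomCapacity) {
	// ... write denomCapacity to state as before ...

	// Emit one gauge per limiter so every state write reports capacity.
	for i, capacity := range denomCapacity.CapacityList {
		telemetry.SetGaugeWithLabels(
			[]string{types.ModuleName, metrics.Capacity},
			metrics.GetMetricValueFromBigInt(capacity.BigInt()),
			[]gometrics.Label{
				metrics.GetLabelForStringValue(metrics.RateLimitDenom, denomCapacity.Denom),
				metrics.GetLabelForIntValue(metrics.LimiterIndex, i),
			},
		)
	}
}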


@coderabbitai coderabbitai bot (Contributor) left a comment

Review Status

Actionable comments generated: 0

Configuration used: CodeRabbit UI

Commits: files that changed from the base of the PR and between 5efe377 and 9831552.
Files selected for processing (11)
  • protocol/x/blocktime/keeper/keeper.go (1 hunks)
  • protocol/x/blocktime/keeper/keeper_test.go (1 hunks)
  • protocol/x/blocktime/module.go (1 hunks)
  • protocol/x/ratelimit/keeper/keeper.go (5 hunks)
  • protocol/x/ratelimit/keeper/keeper_test.go (3 hunks)
  • protocol/x/ratelimit/module.go (2 hunks)
  • protocol/x/ratelimit/types/expected_keepers.go (1 hunks)
  • protocol/x/ratelimit/util/baseline.go (1 hunks)
  • protocol/x/ratelimit/util/baseline_test.go (1 hunks)
  • protocol/x/ratelimit/util/capacity.go (1 hunks)
  • protocol/x/ratelimit/util/capacity_test.go (1 hunks)
Files skipped from review due to trivial changes (1)
  • protocol/x/blocktime/module.go
Files skipped from review as they are similar to previous changes (3)
  • protocol/x/ratelimit/keeper/keeper_test.go
  • protocol/x/ratelimit/module.go
  • protocol/x/ratelimit/types/expected_keepers.go
Additional comments: 13
protocol/x/ratelimit/util/baseline.go (1)
  • 10-25: The implementation of the GetBaseline function appears to be logically sound and correctly utilizes the lib.BigMax function to calculate the baseline. The code is clean, well-documented, and follows best practices.
protocol/x/ratelimit/util/baseline_test.go (1)
  • 13-74: The TestGetBaseline function is well-implemented with comprehensive test cases covering various scenarios. The assertions are correctly using require.Equal to ensure the expected behavior of the GetBaseline function.
protocol/x/ratelimit/util/capacity.go (1)
  • 12-94: The CalculateNewCapacityList function is complex but well-documented and appears to be correctly implemented. The use of big.Rat and big.Int for mathematical operations is appropriate. Ensure that this logic is covered by comprehensive unit tests to validate the correctness of the calculations.
protocol/x/blocktime/keeper/keeper.go (1)
  • 82-86: The addition of the GetTimeSinceLastBlock function is correctly implemented, with a clear function signature and appropriate logic to calculate the time difference between blocks using the Go time package.
protocol/x/blocktime/keeper/keeper_test.go (1)
  • 247-288: The TestGetTimeSinceLastBlock function is well-implemented with comprehensive test cases covering various scenarios, including realistic values. The assertions are correctly using require.Equal to ensure the expected behavior of the GetTimeSinceLastBlock function.
protocol/x/ratelimit/keeper/keeper.go (7)
  • 12-19: The addition of new imports github.com/cosmos/cosmos-sdk/telemetry and github.com/dydxprotocol/v4-chain/protocol/lib/metrics is noted. Ensure these packages are used appropriately in the code and that they do not introduce any security or performance issues.

  • 24-27: The Keeper struct has been updated with a new field blockTimeKeeper. This change aligns with the PR objectives to track block time within the rate limiting logic.

  • 40-48: The NewKeeper function has been modified to initialize the Keeper struct with the new blockTimeKeeper field. Ensure that all instances where NewKeeper is called have been updated to pass the new argument.

Verification successful

The search results from protocol/x/ratelimit/keeper/keeper.go and protocol/app/app.go show the declaration of the NewKeeper function in the ratelimit module and its instantiation in app.go. The instantiation line in app.go:

app.RatelimitKeeper = *ratelimitmodulekeeper.NewKeeper(

indicates that NewKeeper is being called, but the output does not show the full list of arguments being passed to the function. To verify that the blockTimeKeeper argument is included in the call, we need to see the complete line of code where NewKeeper is instantiated in app.go.

Since the output is truncated, we need to extract the full line to confirm that the blockTimeKeeper argument is being passed correctly. Let's generate a script to display the full line of the NewKeeper instantiation in app.go.


The extended context from app.go shows the instantiation of various module keepers, including the RatelimitKeeper. However, the output is still truncated and does not show the full list of arguments being passed to the NewKeeper function for the RatelimitKeeper. To ensure that the blockTimeKeeper argument is included, we need to see the complete instantiation line.

To resolve this, we will generate a script to display the full instantiation line of the RatelimitKeeper in app.go, ensuring we capture enough context to see all the arguments.


The output from the script provides the full instantiation line of the RatelimitKeeper in app.go, which includes the app.BlockTimeKeeper as an argument:

app.RatelimitKeeper = *ratelimitmodulekeeper.NewKeeper(
	appCodec,
	keys[ratelimitmoduletypes.StoreKey],
	app.BankKeeper,
	app.BlockTimeKeeper,
	// set the governance and delaymsg module accounts as the authority for conducting upgrades
	[]string{
		lib.GovModuleAddress.String(),
		delaymsgmoduletypes.ModuleAddress.String(),
	},
)

This confirms that the NewKeeper function in the x/ratelimit module is being called with the new blockTimeKeeper field as required by the updated function signature. Therefore, the review comment has been addressed correctly in the codebase.

Scripts Executed

The following scripts were executed for the analysis:


Script:

#!/bin/bash
# Search for instances where NewKeeper is called to ensure the new argument is being passed.
ast-grep --lang go --pattern $'NewKeeper($_, $_, $_, $_, $_)'

Length of output: 62



Script:

#!/bin/bash
# Search for instances where NewKeeper is called with any number of arguments to ensure the new argument is being passed.
ast-grep --lang go --pattern $'NewKeeper($$$)'

Length of output: 47



Script:

#!/bin/bash
# Use ripgrep to search for the string "NewKeeper" to find where it is called in the codebase.
rg 'NewKeeper'

Length of output: 3968



Script:

#!/bin/bash
# Extract the lines where NewKeeper is called in the protocol/x/ratelimit/keeper/keeper.go file and the app.go file.
rg 'NewKeeper' protocol/x/ratelimit/keeper/keeper.go
rg 'NewKeeper' protocol/app/app.go

Length of output: 1574



Script:

#!/bin/bash
# Display the full line of code where NewKeeper is instantiated in app.go.
rg -C 3 'NewKeeper' protocol/app/app.go

Length of output: 6208



Script:

#!/bin/bash
# Display the full instantiation line of the RatelimitKeeper in app.go.
rg -C 10 'app.RatelimitKeeper = \*ratelimitmodulekeeper.NewKeeper' protocol/app/app.go

Length of output: 759

  • 149-154: The SetLimitParams function now uses the blockTimeKeeper and currentTvl parameters. Ensure that the logic for initializing the capacity list with the current baseline is correct and that the currentTvl is being used appropriately.
  • 219-234: The UpdateAllCapacitiesEndBlocker function has been added to update the capacity for all limit params during the EndBlocker. Ensure that this function is called appropriately in the EndBlock routine and that it handles errors and edge cases correctly.
Verification successful

I will correct the syntax error in the ast-grep pattern and re-run the script to verify if the UpdateAllCapacitiesEndBlocker function is called within the EndBlock routine.


I will revise the script to use a single pattern that correctly matches the EndBlock function and checks for the call to UpdateAllCapacitiesEndBlocker.


I will now use the rg command to search for the string UpdateAllCapacitiesEndBlocker to confirm its presence and usage within the EndBlock function.


The search results confirm that the UpdateAllCapacitiesEndBlocker function is indeed called within the EndBlock routine, as indicated by its presence in the protocol/x/ratelimit/module.go file. Additionally, there are test functions in capacity_test.go and keeper_test.go that reference this function, which suggests that it is also covered by unit tests.

Based on this information, the review comment has been verified, and the UpdateAllCapacitiesEndBlocker function is appropriately called in the EndBlock routine.

Scripts Executed

The following scripts were executed for the analysis:


Script:

#!/bin/bash
# Verify that UpdateAllCapacitiesEndBlocker is called in the EndBlock routine.
ast-grep --lang go --pattern $'EndBlock($_) { $$$ UpdateAllCapacitiesEndBlocker($_) $$$ }'

Length of output: 572



Script:

#!/bin/bash
# Correct the pattern syntax and verify that UpdateAllCapacitiesEndBlocker is called in the EndBlock routine.
ast-grep --lang go --pattern $'func($_) { $$$ UpdateAllCapacitiesEndBlocker($$$) $$$ }' --pattern 'func EndBlock'

Length of output: 281



Script:

#!/bin/bash
# Verify that UpdateAllCapacitiesEndBlocker is called in the EndBlock routine using a single pattern.
ast-grep --lang go --pattern $'func($_) { $$$ UpdateAllCapacitiesEndBlocker($$$) $$$ }'

Length of output: 88



Script:

#!/bin/bash
# Search for the string UpdateAllCapacitiesEndBlocker to confirm its usage in the EndBlock function.
rg "UpdateAllCapacitiesEndBlocker"

Length of output: 654

  • 236-283: The updateCapacityForLimitParams function has been added to calculate the current baseline for a denom and recover some amount of capacity towards the baseline. Ensure that the assumptions made within this function are documented and that the error handling is sufficient.
Verification successful

To verify the documentation and error handling within the updateCapacityForLimitParams function, I will generate a script to display the comments within the function and to examine the error handling code.


To properly review the documentation and error handling within the updateCapacityForLimitParams function, I will generate a corrected script to display the comments and the function's code.


The output from the ast-grep and rg commands provides the necessary context to verify the documentation and error handling within the updateCapacityForLimitParams function.

From the output, we can observe the following:

  1. The function updateCapacityForLimitParams includes a comment that states it "calculates current baseline for a denom and recovers some amount of capacity towards baseline" and that it "Assumes that the LimitParams exist in state." This comment documents the assumption made within the function.

  2. The error handling within the function is done through logging errors instead of panicking. This is seen in two places:

    • If the length of the capacityList does not match the length of limitParams.Limiters, an error is logged, and the function returns without performing further actions.
    • If timeSinceLastBlock is less than 0, which violates the invariant that the current block time should be greater than the previous block time, an error is logged, and the function returns.

Based on the provided context, the assumptions within the updateCapacityForLimitParams function are documented, and the error handling is done through logging without causing a panic in the EndBlocker. This approach is consistent with the intent to avoid disrupting the blockchain's block processing.

Scripts Executed

The following scripts were executed for the analysis:


Script:

#!/bin/bash
# Verify that the assumptions within updateCapacityForLimitParams are documented and that error handling is sufficient.
echo "Review the documentation and error handling within the updateCapacityForLimitParams function."

Length of output: 196



Script:

#!/bin/bash
# Display comments within the updateCapacityForLimitParams function to check for documentation of assumptions.
ast-grep --lang go --pattern $'func (k Keeper) updateCapacityForLimitParams($_, $_) {
  $$$
}' --source-file protocol/x/ratelimit/keeper/keeper.go

# Check for error handling within the updateCapacityForLimitParams function.
rg "updateCapacityForLimitParams" protocol/x/ratelimit/keeper/keeper.go

Length of output: 640



Script:

#!/bin/bash
# Display comments within the updateCapacityForLimitParams function to check for documentation of assumptions.
ast-grep --lang go --pattern $'func (k Keeper) updateCapacityForLimitParams($_, $_) {
  $$$
}'

# Check for error handling within the updateCapacityForLimitParams function.
rg -C 10 "updateCapacityForLimitParams" protocol/x/ratelimit/keeper/keeper.go

Length of output: 4502

  • 206-216: Telemetry code has been added to emit metrics for the new capacity list. Ensure that the telemetry is set up correctly and that it does not leak any sensitive information.
Verification successful

To verify the correct setup of the telemetry code and ensure it does not leak sensitive information, I will need to:

  1. Review the implementation of the telemetry code to confirm it adheres to best practices and the expected standards set by the Cosmos SDK, which is commonly used in blockchain protocols.
  2. Check the data being emitted by the telemetry code to ensure it does not include sensitive information such as private keys, user identities, or any other data that should remain confidential.

I will generate a script to search for the usage of the telemetry code within the keeper.go file and review the related code to ensure it is set up correctly and does not leak sensitive information.


To further verify the correct setup of the telemetry code and ensure it does not leak sensitive information, I will generate a script to display the entire context around the usage of the telemetry.SetGaugeWithLabels function within the keeper.go file. This will allow me to review the data being passed to the telemetry system.


The telemetry code in keeper.go is set up to emit metrics for each capacity in the denomCapacity.CapacityList. It uses the telemetry.SetGaugeWithLabels function to set a gauge with labels that include non-sensitive information such as the rate limit denomination and the limiter index. The metric value is derived from the BigInt representation of the capacity, which is a numerical value and does not constitute sensitive information.

Based on the context provided, the telemetry setup appears to be correct and does not leak sensitive information.

Scripts Executed

The following scripts were executed for the analysis:


Script:

#!/bin/bash
# Verify that telemetry setup is correct and does not leak sensitive information.
echo "Review the telemetry code to ensure correct setup and no sensitive information leakage."

Length of output: 184



Script:

#!/bin/bash
# Search for the usage of telemetry in keeper.go and output the relevant code sections.
rg "telemetry" protocol/x/ratelimit/keeper/keeper.go

Length of output: 172



Script:

#!/bin/bash
# Display the context around the usage of telemetry.SetGaugeWithLabels in keeper.go
rg -C 10 "telemetry.SetGaugeWithLabels" protocol/x/ratelimit/keeper/keeper.go

Length of output: 614

protocol/x/ratelimit/util/capacity_test.go (1)
  • 15-269: The TestUpdateAllCapacitiesEndBlocker function has been added to test the new capacity update logic. Ensure that the test cases cover a wide range of scenarios, including edge cases, and that the assertions are correct.
Verification successful

To verify the test cases in TestUpdateAllCapacitiesEndBlocker, we need to analyze the code to ensure that it covers a wide range of scenarios. Since the output from the shell script does not provide this information, we will need to generate a new script to check the test cases programmatically or continue with a manual review based on the code provided.

Let's proceed with generating a script to check the presence of various test scenarios within the TestUpdateAllCapacitiesEndBlocker function.


To fully verify the test scenarios in TestUpdateAllCapacitiesEndBlocker, we need to see the complete set of test cases. The output provided is truncated, so we will generate a new script to retrieve the entire test function.


The output provided is still truncated and does not show the complete set of test scenarios within the TestUpdateAllCapacitiesEndBlocker function. To ensure we have a comprehensive view of the test coverage, we need to retrieve more lines of the test function. Let's generate a new script to do so.


The output provided is still truncated and does not show the complete set of test scenarios within the TestUpdateAllCapacitiesEndBlocker function. However, the scenarios that are visible indicate a variety of test conditions, including different relationships between previous capacity and baseline, different decimal precisions, and different time intervals since the last block.

To ensure we have a comprehensive view of the test coverage, we need to retrieve more lines of the test function. Let's generate a new script to do so.


The output now includes the full TestUpdateAllCapacitiesEndBlocker function with all its test cases. The test cases cover various scenarios, including:

  • "Prev capacity equals baseline"
  • "Prev capacity < baseline"
  • "prev capacity < baseline, 18 decimals"
  • "Prev capacity = 0"
  • "Prev capacity = 0, capacity_diff rounds down"
  • "Prev capacity = 2 * baseline, capacity_diff rounds down"
  • "baseline < prev capacity < 2 * baseline"
  • "prev capacity > 2 * baseline + capacity < baseline"

These scenarios test different conditions related to the previous capacity in relation to the baseline, different decimal precisions, and different time intervals since the last block. The variety of test cases suggests comprehensive coverage of the functionality being tested.

Based on the provided test scenarios, it appears that the test function TestUpdateAllCapacitiesEndBlocker has been thoroughly implemented to cover a wide range of cases, which is crucial for ensuring the reliability of the capacity update logic in the x/ratelimit module.

Scripts Executed

The following scripts were executed for the analysis:


Script:

#!/bin/bash
# Verify that the test cases in TestUpdateAllCapacitiesEndBlocker cover a wide range of scenarios.
echo "Review the test cases in TestUpdateAllCapacitiesEndBlocker to ensure comprehensive coverage."

Length of output: 194



Script:

#!/bin/bash
# Check for the presence of different test scenarios in TestUpdateAllCapacitiesEndBlocker
rg "func TestUpdateAllCapacitiesEndBlocker" protocol/x/ratelimit/util/capacity_test.go -A 30

Length of output: 1276



Script:

#!/bin/bash
# Retrieve the full test function TestUpdateAllCapacitiesEndBlocker to verify all test scenarios
rg "func TestUpdateAllCapacitiesEndBlocker" protocol/x/ratelimit/util/capacity_test.go -A 100

Length of output: 3855



Script:

#!/bin/bash
# Retrieve the full test function TestUpdateAllCapacitiesEndBlocker to verify all test scenarios, ensuring we capture all test cases
rg "func TestUpdateAllCapacitiesEndBlocker" protocol/x/ratelimit/util/capacity_test.go -A 200

Length of output: 7613



Script:

#!/bin/bash
# Retrieve the full test function TestUpdateAllCapacitiesEndBlocker to verify all test scenarios, ensuring we capture all test cases
rg "func TestUpdateAllCapacitiesEndBlocker" protocol/x/ratelimit/util/capacity_test.go -A 300

Length of output: 9361

@coderabbitai coderabbitai bot (Contributor) left a comment

Review Status

Actionable comments generated: 0

Configuration used: CodeRabbit UI

Commits: files that changed from the base of the PR and between 9831552 and 879141b.
Files selected for processing (2)
  • protocol/x/ratelimit/keeper/keeper.go (5 hunks)
  • protocol/x/ratelimit/util/capacity.go (1 hunks)
Files skipped from review as they are similar to previous changes (2)
  • protocol/x/ratelimit/keeper/keeper.go
  • protocol/x/ratelimit/util/capacity.go

baseline := GetBaseline(bigTvl, limiter)

capacityMinusBaseline := new(big.Int).Sub(
prevCapapcityList[i].BigInt(), // array access is safe because of input invariant
Contributor

the input invariant is not checked in this function, which I think is a bit of an antipattern

Contributor

would it make more sense to just have the length of newCapacityList equal the length of limitParams.Limiters? In the case that the prevCapacityList does not match in length, ignore the old values (i.e. set all new values to the baseline of the limiter)

@teddyding teddyding (Contributor Author) Jan 11, 2024

On second thought, I believe this invariant is expected to hold in UpdateCapacityEndBlocker, because SetLimitParams also atomically overwrites the DenomCapacity list.

So I moved the invariant check from the caller into this function, which returns an error if the invariant fails.

Comment on lines 55 to 58
periodMilli := new(big.Int).Mul(
new(big.Int).SetUint64(uint64(limiter.PeriodSec)),
big.NewInt(1000),
)
Contributor

Wait, why are we even using PeriodSec, which appears to be an integer? Why not just use a type that converts to time.Duration, like we do elsewhere in protos?

Contributor Author

Good point, period should be a time.Duration. Updated the proto and implementation accordingly.
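
A hypothetical before/after of what this change means on the Go side (field names follow the updated tests; values are illustrative):

// Before: Limiter.PeriodSec was an integer number of seconds, converted to
// milliseconds by hand. After: Limiter.Period is a time.Duration (generated
// from google.protobuf.Duration), so the conversion is built in.
limiter := types.Limiter{
	Period:          3_600 * time.Second,
	BaselineMinimum: dtypes.NewInt(100_000_000_000), // 100k tokens (6 decimals)
	BaselineTvlPpm:  10_000,                         // 1%
}
periodMilli := big.NewInt(limiter.Period.Milliseconds())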

Comment on lines 66 to 73
capacityDiff := new(big.Rat).Mul(
operandL,
operandR,
)

bigRatcapacityMinusBaseline := new(big.Rat).SetInt(capacityMinusBaseline)

if new(big.Rat).Abs(bigRatcapacityMinusBaseline).Cmp(capacityDiff) < 0 {
Contributor

Suggested change
capacityDiff := new(big.Rat).Mul(
operandL,
operandR,
)
bigRatcapacityMinusBaseline := new(big.Rat).SetInt(capacityMinusBaseline)
if new(big.Rat).Abs(bigRatcapacityMinusBaseline).Cmp(capacityDiff) < 0 {
capacityDiffRat := new(big.Rat).Mul(operandL, operandR)
capacityDiff := lib.BigRatRound(capacityDiffRat, false) // rounds down
if new(big.Int).Abs(capacityMinusBaseline).Cmp(capacityDiff) <= 0 {

Contributor

Also remove the lib.BigRatRounds below

Contributor Author

Done.

@@ -259,26 +259,25 @@ func (k Keeper) updateCapacityForLimitParams(
tvl := k.bankKeeper.GetSupply(ctx, limitParams.Denom)

capacityList := k.GetDenomCapacity(ctx, limitParams.Denom).CapacityList
if len(capacityList) != len(limitParams.Limiters) {
Contributor Author

Moved this check into CalculateNewCapacityList, which returns an error
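
Putting the threads above together, a rough sketch of the recovery rule as it ends up in util.CalculateNewCapacityList (the signature matches the tests below; the body is reconstructed from the review discussion and test expectations, so treat the details as an approximation of the PR's exact code; assumes a positive limiter period):

func CalculateNewCapacityList(
	bigTvl *big.Int,
	limitParams types.LimitParams,
	prevCapacityList []dtypes.SerializableInt,
	timeSinceLastBlock time.Duration,
) ([]dtypes.SerializableInt, error) {
	// Invariant check moved here from the caller (see thread above).
	if len(limitParams.Limiters) != len(prevCapacityList) {
		return nil, errorsmod.Wrapf(
			types.ErrMismatchedCapacityLimitersLength,
			"denom = %v, len(limiters) = %v, len(prevCapacityList) = %v",
			limitParams.Denom, len(limitParams.Limiters), len(prevCapacityList),
		)
	}

	newCapacityList := make([]dtypes.SerializableInt, len(prevCapacityList))
	for i, limiter := range limitParams.Limiters {
		baseline := GetBaseline(bigTvl, limiter)
		capacityMinusBaseline := new(big.Int).Sub(prevCapacityList[i].BigInt(), baseline)

		// capacity_diff = max(baseline, |capacity - baseline|)
		//                 * (time_since_last_block / period), rounded down.
		capacityDiffRat := new(big.Rat).Mul(
			new(big.Rat).SetInt(lib.BigMax(baseline, new(big.Int).Abs(capacityMinusBaseline))),
			big.NewRat(timeSinceLastBlock.Milliseconds(), limiter.Period.Milliseconds()),
		)
		capacityDiff := lib.BigRatRound(capacityDiffRat, false) // rounds down

		// Capacity trends toward baseline by capacity_diff but never crosses it.
		if new(big.Int).Abs(capacityMinusBaseline).Cmp(capacityDiff) <= 0 {
			newCapacityList[i] = dtypes.NewIntFromBigInt(baseline)
			continue
		}
		if capacityMinusBaseline.Sign() > 0 {
			capacityDiff.Neg(capacityDiff) // above baseline: recover downward
		}
		newCapacityList[i] = dtypes.NewIntFromBigInt(
			new(big.Int).Add(prevCapacityList[i].BigInt(), capacityDiff),
		)
	}
	return newCapacityList, nil
}

As a spot check against the tests below: with baseline 200k tokens, previous capacity 329k, and 1.15s elapsed over a 3,600s period, the step is 200k * 1.15/3600 ≈ 63.89 tokens, matching the expected 328_936_111_112 (6 decimals).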

@coderabbitai coderabbitai bot (Contributor) left a comment

Review Status

Actionable comments generated: 2

Configuration used: CodeRabbit UI

Commits: files that changed from the base of the PR and between 879141b and b13199a.
Files ignored due to filter (2)
  • protocol/x/ratelimit/types/limit_params.pb.go
  • protocol/x/ratelimit/types/query.pb.go
Files selected for processing (10)
  • indexer/packages/v4-protos/src/codegen/dydxprotocol/ratelimit/limit_params.ts (6 hunks)
  • indexer/packages/v4-protos/src/codegen/dydxprotocol/ratelimit/query.ts (5 hunks)
  • proto/dydxprotocol/ratelimit/limit_params.proto (2 hunks)
  • proto/dydxprotocol/ratelimit/query.proto (2 hunks)
  • protocol/x/ratelimit/keeper/keeper.go (5 hunks)
  • protocol/x/ratelimit/keeper/keeper_test.go (7 hunks)
  • protocol/x/ratelimit/types/errors.go (1 hunks)
  • protocol/x/ratelimit/util/baseline_test.go (1 hunks)
  • protocol/x/ratelimit/util/capacity.go (1 hunks)
  • protocol/x/ratelimit/util/capacity_test.go (1 hunks)
Additional comments: 33
protocol/x/ratelimit/types/errors.go (1)
  • 19-23: The addition of ErrMismatchedCapacityLimitersLength with code 1003 is correctly implemented and follows the existing pattern for error registration in the module.
proto/dydxprotocol/ratelimit/limit_params.proto (2)
  • 5-5: The import of "google/protobuf/duration.proto" is correctly added to support the new Duration type for the period field.

  • 21-26: The change from period_sec to period of type google.protobuf.Duration is correctly implemented and aligns with the PR objectives to use a more appropriate type for time duration.

proto/dydxprotocol/ratelimit/query.proto (2)
  • 7-7: The import of "google/protobuf/duration.proto" is correctly added to support the new Duration type for the period field in the CapacityResult message.

  • 40-41: The change from period_sec to period of type google.protobuf.Duration in the CapacityResult message is correctly implemented and aligns with the PR objectives to use a more appropriate type for time duration.

protocol/x/ratelimit/util/baseline_test.go (1)
  • 23-26: The test cases in TestGetBaseline have been updated to use the Duration type instead of an integer for the Period field. This change is consistent with the updates to the Limiter struct.
protocol/x/ratelimit/util/capacity.go (2)
  • 41-50: The error handling for mismatched lengths between limitParams.Limiters and prevCapacityList is correctly implemented, using errorsmod.Wrapf to provide a detailed error message.

  • 52-99: The logic within CalculateNewCapacityList for updating the capacity list based on the time since the last block and the limit parameters is complex but appears to be correctly implemented. The use of big integers and rationals is appropriate for the precision required in financial calculations.

indexer/packages/v4-protos/src/codegen/dydxprotocol/ratelimit/limit_params.ts (6)
  • 1-1: The import of Duration and DurationSDKType from the google/protobuf/duration module is correctly added to support the new Duration type for the period field.

  • 38-41: The replacement of the periodSec property with period of type Duration in the Limiter interface is correctly implemented and aligns with the protobuf changes.

  • 138-138: In the createBaseLimiter function, initializing the period as undefined is correct and aligns with the changes made to the Limiter interface.

  • 146-147: The conditional encoding of the period field using Duration.encode is correctly implemented and checks for undefined before attempting to encode.

  • 170-171: The decoding logic for the period field using Duration.decode is correctly implemented in the decode function of the Limiter object.

  • 193-193: The fromPartial function of the Limiter object correctly handles the assignment of the period field using Duration.fromPartial.

indexer/packages/v4-protos/src/codegen/dydxprotocol/ratelimit/query.ts (6)
  • 2-2: The import of Duration and DurationSDKType from the google/protobuf/duration module is correctly added to support the new Duration type for the period field in the CapacityResult interface.

  • 50-50: The change of the periodSec property to period in the CapacityResult interface is correctly implemented and aligns with the protobuf changes.

  • 210-210: In the createBaseCapacityResult function, initializing the period as undefined is correct and aligns with the changes made to the CapacityResult interface.

  • 217-218: The conditional encoding of the period field using Duration.encode is correctly implemented and checks for undefined before attempting to encode.

  • 238-238: The decoding logic for the period field using Duration.decode is correctly implemented in the decode function of the CapacityResult object.

  • 256-256: The fromPartial function of the CapacityResult object correctly handles the assignment of the period field using Duration.fromPartial.

protocol/x/ratelimit/keeper/keeper.go (7)
  • 25-28: The addition of the blockTimeKeeper field to the Keeper struct is correctly implemented and aligns with the PR objectives to track block time within the rate limiting logic.

  • 41-49: The NewKeeper function has been correctly updated to include the blockTimeKeeper parameter and initialize the Keeper struct with it.

  • 150-155: The use of ratelimitutil.GetBaseline in the SetLimitParams function is correct and reflects the changes made to the Limiter struct and associated utility functions.

  • 207-217: The telemetry emission for the new capacity list in the SetDenomCapacity function is correctly implemented, using appropriate labels and metrics.

  • 220-249: The UpdateAllCapacitiesEndBlocker function is correctly implemented to update the capacity for all limit params at the end of each block, aligning with the PR objectives.

  • 251-284: The updateCapacityForLimitParams function is correctly implemented, assuming that the LimitParams exist in state and updating the capacity list based on the time since the last block.

  • 256-256: The previous comment chain indicates that the function updateCapacityForLimitParams was too large and has been broken down according to a suggestion. Since the function is still quite large, it would be beneficial to verify that the refactoring has been completed as intended.

protocol/x/ratelimit/util/capacity_test.go (3)
  • 26-285: The calculations for the expected capacity list in the test cases appear to be correct and align with the described logic for rate limiting. Constants and calculations are used appropriately to simulate the expected behavior of the UpdateAllCapacitiesEndBlocker function. The use of big_testutil for handling big integers with high precision is noted and seems appropriate for the context of blockchain and token calculations.

  • 254-285: The error handling test case correctly anticipates a mismatch in the lengths of the capacity list and the limiters. The use of errorsmod.Wrapf to provide detailed error messages is a good practice, as it aids in debugging. Ensure that the error message is consistent with the actual error condition and provides enough context for troubleshooting.

  • 290-310: The use of require assertions is appropriate for these test cases. It ensures that the test will stop if a condition is not met, which is suitable for conditions that must be true for the test to proceed meaningfully. The assertions are used correctly to compare the expected and actual capacity lists and to check for errors when expected.

protocol/x/ratelimit/keeper/keeper_test.go (3)
  • 23-23: The addition of the constant testDenom2 is noted and seems to be in line with the existing pattern of constants for test denominations.

  • 78-83: The TestSetGetLimitParams_Success function has been modified to use time.Duration instead of seconds for the Period field in types.Limiter. This is a positive change for clarity and type safety.

  • 273-797: > Note: This review was outside the patches, so it was mapped to the patch with the greatest overlap. Original lines [276-845]

The TestUpdateAllCapacitiesEndBlocker function has been reworked to use time.Time instead of int64 for block times and time.Duration for periods, which brings type safety and clarity to the time-related operations. The existing test cases have been updated accordingly, and new cases cover additional capacity-update scenarios; these should be checked for consistency with the new end-of-block update logic.

Two test cases labeled "(Error)" simulate failure conditions: the current block time equaling the previous block time, and mismatched lengths between the limiters and the capacity list. The implementation should handle both conditions gracefully, with appropriate error handling or logging in place.

The test cases also rely on precise expected values (e.g., lines 383-384 and 447-450); these should be verified against the intended capacity-update arithmetic. A sketch of the recovery rule implied by these values follows below.

Finally, confirm that the cases cover all relevant edge cases and that the assertions validate the expected outcomes.
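
To make the expected values above checkable, here is the recovery rule they imply, reconstructed from the test vectors rather than quoted from CalculateNewCapacityList: each block, capacity moves toward baseline by floor(timeSinceLastBlock/period * max(baseline, |capacity - baseline|)), without overshooting baseline. A self-contained sketch (names are illustrative):

package main

import (
	"fmt"
	"math/big"
	"time"
)

// newCapacity applies the rule inferred from the expected test values.
func newCapacity(capacity, baseline *big.Int, elapsed, period time.Duration) *big.Int {
	gap := new(big.Int).Sub(baseline, capacity) // > 0 when capacity sits below baseline
	rate := new(big.Int).Abs(gap)
	if rate.Cmp(baseline) < 0 {
		rate = baseline // recover or decay by at least one full baseline per period
	}
	// capacity_diff = floor(rate * elapsed / period), computed in nanoseconds.
	diff := new(big.Int).Mul(rate, big.NewInt(elapsed.Nanoseconds()))
	diff.Quo(diff, big.NewInt(period.Nanoseconds()))

	result := new(big.Int)
	switch gap.Sign() {
	case 1: // below baseline: recover upward, capped at baseline
		result.Add(capacity, diff)
		if result.Cmp(baseline) > 0 {
			result.Set(baseline)
		}
	case -1: // above baseline: decay downward, floored at baseline
		result.Sub(capacity, diff)
		if result.Cmp(baseline) < 0 {
			result.Set(baseline)
		}
	default: // exactly at baseline: unchanged
		result.Set(capacity)
	}
	return result
}

func main() {
	// "Prev capacity < baseline" vector: 99k capacity, 250k baseline, 1.09s
	// since the last block, hourly limiter; expect 99_075_694_444.
	fmt.Println(newCapacity(
		big.NewInt(99_000_000_000),
		big.NewInt(250_000_000_000),
		time.Second+90*time.Millisecond,
		3_600*time.Second,
	)) // prints 99075694444
}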

Comment on lines 16 to 311
The test function TestUpdateAllCapacitiesEndBlocker is comprehensive and covers a variety of scenarios, including edge cases and error handling. The use of table-driven tests is a good practice for clarity and maintainability. However, there is a typo in the field name prevCapapcityList which should be corrected to prevCapacityList to maintain consistency and avoid potential confusion.

- prevCapapcityList
+ prevCapacityList

Suggested change
func TestUpdateAllCapacitiesEndBlocker(t *testing.T) {
	testDenom := "testDenom"
	tests := map[string]struct {
		bigTvl               *big.Int
		limitParams          types.LimitParams
		prevCapacityList     []dtypes.SerializableInt
		expectedCapacityList []dtypes.SerializableInt
		timeSinceLastBlock   time.Duration
		expectedErr          error
	}{
		"Prev capacity equals baseline": {
			bigTvl: big.NewInt(25_000_000_000_000), // 25M tokens (assuming 6 decimals)
			limitParams: types.LimitParams{
				Denom: testDenom,
				Limiters: []types.Limiter{
					// baseline = 25M * 1% = 250k tokens
					{
						Period:          3_600 * time.Second,
						BaselineMinimum: dtypes.NewInt(100_000_000_000), // 100k tokens (assuming 6 decimals)
						BaselineTvlPpm:  10_000,                         // 1%
					},
					// baseline = 25M * 10% = 2.5M tokens
					{
						Period:          86_400 * time.Second,
						BaselineMinimum: dtypes.NewInt(1_000_000_000_000), // 1M tokens (assuming 6 decimals)
						BaselineTvlPpm:  100_000,                          // 10%
					},
				},
			},
			timeSinceLastBlock: time.Second,
			prevCapacityList: []dtypes.SerializableInt{
				dtypes.NewInt(250_000_000_000),   // 250k tokens, which equals baseline
				dtypes.NewInt(2_500_000_000_000), // 2.5M tokens, which equals baseline
			},
			expectedCapacityList: []dtypes.SerializableInt{
				dtypes.NewInt(250_000_000_000),   // 250k tokens
				dtypes.NewInt(2_500_000_000_000), // 2.5M tokens
			},
		},
		"Prev capacity < baseline": {
			bigTvl: big.NewInt(25_000_000_000_000), // 25M tokens (assuming 6 decimals)
			limitParams: types.LimitParams{
				Denom: testDenom,
				Limiters: []types.Limiter{
					// baseline = 25M * 1% = 250k tokens
					{
						Period:          3_600 * time.Second,
						BaselineMinimum: dtypes.NewInt(100_000_000_000), // 100k tokens (assuming 6 decimals)
						BaselineTvlPpm:  10_000,                         // 1%
					},
					// baseline = 25M * 10% = 2.5M tokens
					{
						Period:          86_400 * time.Second,
						BaselineMinimum: dtypes.NewInt(1_000_000_000_000), // 1M tokens (assuming 6 decimals)
						BaselineTvlPpm:  100_000,                          // 10%
					},
				},
			},
			timeSinceLastBlock: time.Second + 90*time.Millisecond, // 1.09 seconds
			prevCapacityList: []dtypes.SerializableInt{
				dtypes.NewInt(99_000_000_000),  // 99k tokens, < baseline (250k)
				dtypes.NewInt(990_000_000_000), // 0.99M tokens, < baseline (2.5M)
			},
			expectedCapacityList: []dtypes.SerializableInt{
				dtypes.NewInt(99_075_694_444),  // recovered by 1.09/3600 * 250k = 75.694444 tokens
				dtypes.NewInt(990_031_539_351), // recovered by 1.09/86400 * 2.5M = 31.539 tokens
			},
		},
		"prev capacity < baseline, 18 decimals": {
			bigTvl: big_testutil.Int64MulPow10(25, 24), // 25M tokens
			limitParams: types.LimitParams{
				Denom: testDenom,
				Limiters: []types.Limiter{
					// baseline = 25M * 1% = 250k tokens
					{
						Period: 3_600 * time.Second,
						BaselineMinimum: dtypes.NewIntFromBigInt(
							big_testutil.Int64MulPow10(100_000, 18), // 100k tokens (assuming 18 decimals)
						),
						BaselineTvlPpm: 10_000, // 1%
					},
					// baseline = 25M * 10% = 2.5M tokens
					{
						Period: 86_400 * time.Second,
						BaselineMinimum: dtypes.NewIntFromBigInt(
							big_testutil.Int64MulPow10(1_000_000, 18), // 1M tokens (assuming 18 decimals)
						),
						BaselineTvlPpm: 100_000, // 10%
					},
				},
			},
			timeSinceLastBlock: time.Second,
			prevCapacityList: []dtypes.SerializableInt{
				dtypes.NewIntFromBigInt(
					big_testutil.Int64MulPow10(99_000, 18),
				), // 99k tokens, < baseline (250k)
				dtypes.NewIntFromBigInt(
					big_testutil.Int64MulPow10(990_000, 18),
				), // 0.99M tokens, < baseline (2.5M)
			},
			expectedCapacityList: []dtypes.SerializableInt{
				dtypes.NewIntFromBigInt(
					big_testutil.MustFirst(new(big.Int).SetString("99069444444444444444444", 10)),
				), // recovered by 1/3600 * 250k ~= 69.4444 tokens
				dtypes.NewIntFromBigInt(
					big_testutil.MustFirst(new(big.Int).SetString("990028935185185185185185", 10)),
				), // recovered by 1/86400 * 2.5M ~= 28.9351 tokens
			},
		},
		"Prev capacity = 0": {
			bigTvl: big.NewInt(1_000_000_000_000), // 1M tokens (assuming 6 decimals)
			limitParams: types.LimitParams{
				Denom: testDenom,
				Limiters: []types.Limiter{
					// baseline = baseline minimum = 100k tokens
					{
						Period:          3_600 * time.Second,
						BaselineMinimum: dtypes.NewInt(100_000_000_000), // 100k tokens (assuming 6 decimals)
						BaselineTvlPpm:  10_000,                         // 1%
					},
					// baseline = baseline minimum = 1M tokens
					{
						Period:          86_400 * time.Second,
						BaselineMinimum: dtypes.NewInt(1_000_000_000_000), // 1M tokens (assuming 6 decimals)
						BaselineTvlPpm:  100_000,                          // 10%
					},
				},
			},
			timeSinceLastBlock: time.Second + 150*time.Millisecond, // 1.15 seconds
			prevCapacityList: []dtypes.SerializableInt{
				dtypes.NewInt(0),
				dtypes.NewInt(0),
			},
			expectedCapacityList: []dtypes.SerializableInt{
				dtypes.NewInt(31_944_444), // recovered by 1.15/3600 * 100k ~= 31.94
				dtypes.NewInt(13_310_185), // recovered by 1.15/86400 * 1M ~= 13.31
			},
		},
		"Prev capacity = 0, capacity_diff rounds down": {
			bigTvl: big.NewInt(1_000_000_000_000), // 1M tokens (assuming 6 decimals)
			limitParams: types.LimitParams{
				Denom: testDenom,
				Limiters: []types.Limiter{
					// baseline = baseline minimum = 100k tokens
					{
						Period:          3_600 * time.Second,
						BaselineMinimum: dtypes.NewInt(100_000_000_000), // 100k tokens (assuming 6 decimals)
						BaselineTvlPpm:  10_000,                         // 1%
					},
				},
			},
			timeSinceLastBlock: 12 * time.Second, // 12 seconds
			prevCapacityList: []dtypes.SerializableInt{
				dtypes.NewInt(0),
			},
			expectedCapacityList: []dtypes.SerializableInt{
				dtypes.NewInt(333_333_333), // recovered by 12/3600 * 100k ~= 333.333
			},
		},
		"Prev capacity = 2 * baseline, capacity_diff rounds down": {
			bigTvl: big.NewInt(1_000_000_000_000), // 1M tokens (assuming 6 decimals)
			limitParams: types.LimitParams{
				Denom: testDenom,
				Limiters: []types.Limiter{
					// baseline = baseline minimum = 100k tokens
					{
						Period:          3_600 * time.Second,
						BaselineMinimum: dtypes.NewInt(100_000_000_000), // 100k tokens (assuming 6 decimals)
						BaselineTvlPpm:  10_000,                         // 1%
					},
				},
			},
			timeSinceLastBlock: 12 * time.Second, // 12 seconds
			prevCapacityList: []dtypes.SerializableInt{
				dtypes.NewInt(200_000_000_000),
			},
			expectedCapacityList: []dtypes.SerializableInt{
				dtypes.NewInt(199_666_666_667), // decayed toward baseline by 12/3600 * 100k ~= 333.333
			},
		},
		"baseline < prev capacity < 2 * baseline": {
			bigTvl: big.NewInt(20_000_000_000_000), // 20M tokens (assuming 6 decimals)
			limitParams: types.LimitParams{
				Denom: testDenom,
				Limiters: []types.Limiter{
					// baseline = 200k tokens
					{
						Period:          3_600 * time.Second,
						BaselineMinimum: dtypes.NewInt(100_000_000_000), // 100k tokens (assuming 6 decimals)
						BaselineTvlPpm:  10_000,                         // 1%
					},
					// baseline = 2M tokens
					{
						Period:          86_400 * time.Second,
						BaselineMinimum: dtypes.NewInt(1_000_000_000_000), // 1M tokens (assuming 6 decimals)
						BaselineTvlPpm:  100_000,                          // 10%
					},
				},
			},
			timeSinceLastBlock: time.Second + 150*time.Millisecond, // 1.15 seconds
			prevCapacityList: []dtypes.SerializableInt{
				dtypes.NewInt(329_000_000_000),
				dtypes.NewInt(3_500_000_000_000),
			},
			expectedCapacityList: []dtypes.SerializableInt{
				dtypes.NewInt(328_936_111_112),   // decayed toward baseline by 1.15/3600 * 200k ~= 63.89
				dtypes.NewInt(3_499_973_379_630), // decayed toward baseline by 1.15/86400 * 2M ~= 26.62
			},
		},
		"prev capacity > 2 * baseline + capacity < baseline": {
			bigTvl: big.NewInt(20_000_000_000_000), // 20M tokens (assuming 6 decimals)
			limitParams: types.LimitParams{
				Denom: testDenom,
				Limiters: []types.Limiter{
					// baseline = 200k tokens
					{
						Period:          3_600 * time.Second,
						BaselineMinimum: dtypes.NewInt(100_000_000_000), // 100k tokens (assuming 6 decimals)
						BaselineTvlPpm:  10_000,                         // 1%
					},
					// baseline = 2M tokens
					{
						Period:          86_400 * time.Second,
						BaselineMinimum: dtypes.NewInt(1_000_000_000_000), // 1M tokens (assuming 6 decimals)
						BaselineTvlPpm:  100_000,                          // 10%
					},
				},
			},
			timeSinceLastBlock: time.Second + 150*time.Millisecond, // 1.15 seconds
			prevCapacityList: []dtypes.SerializableInt{
				dtypes.NewInt(629_000_000_000),   // > 2 * baseline
				dtypes.NewInt(1_200_000_000_000), // < baseline
			},
			expectedCapacityList: []dtypes.SerializableInt{
				dtypes.NewInt(628_862_958_334),   // decayed toward baseline by 1.15/3600 * (629k - 200k) ~= 137.04
				dtypes.NewInt(1_200_026_620_370), // recovered by 1.15/86400 * 2M ~= 26.62
			},
		},
		"Error: len(capacityList) != len(limiters)": {
			bigTvl: big.NewInt(25_000_000_000_000), // 25M tokens (assuming 6 decimals)
			limitParams: types.LimitParams{
				Denom: testDenom,
				Limiters: []types.Limiter{
					// baseline = 25M * 1% = 250k tokens
					{
						Period:          3_600 * time.Second,
						BaselineMinimum: dtypes.NewInt(100_000_000_000), // 100k tokens (assuming 6 decimals)
						BaselineTvlPpm:  10_000,                         // 1%
					},
					// baseline = 25M * 10% = 2.5M tokens
					{
						Period:          86_400 * time.Second,
						BaselineMinimum: dtypes.NewInt(1_000_000_000_000), // 1M tokens (assuming 6 decimals)
						BaselineTvlPpm:  100_000,                          // 10%
					},
				},
			},
			timeSinceLastBlock: time.Second + 90*time.Millisecond, // 1.09 seconds
			prevCapacityList: []dtypes.SerializableInt{
				dtypes.NewInt(99_000_000_000),
				dtypes.NewInt(990_000_000_000),
				dtypes.NewInt(0),
			},
			expectedErr: errorsmod.Wrapf(
				types.ErrMismatchedCapacityLimitersLength,
				"denom = %v, len(limiters) = %v, len(prevCapacityList) = %v",
				testDenom,
				2,
				3,
			),
		},
	}
	// Run tests.
	for name, tc := range tests {
		t.Run(name, func(t *testing.T) {
			newCapacityList, err := util.CalculateNewCapacityList(
				tc.bigTvl,
				tc.limitParams,
				tc.prevCapacityList,
				tc.timeSinceLastBlock,
			)
			if tc.expectedErr != nil {
				// Assert on err (the actual error); passing tc.expectedErr in the
				// error position, as the earlier revision did, never checks err.
				require.Error(t, err)
				return
			}
			require.NoError(t, err)
			require.Equal(t,
				tc.expectedCapacityList,
				newCapacityList,
			)
		})
	}
}

Comment on lines 18 to 285
The test cases are well-structured and seem logically correct. However, there is a repeated typo in the field name prevCapapcityList across multiple test cases. This should be corrected to prevCapacityList.

- prevCapapcityList
+ prevCapacityList

@coderabbitai coderabbitai bot left a comment

Review Status

Actionable comments generated: 0

Configuration used: CodeRabbit UI

Commits: files that changed from the base of the PR, between b13199a and 267d1ae.
Files selected for processing (1)
  • protocol/x/ratelimit/util/capacity_test.go (1 hunks)
Files skipped from review as they are similar to previous changes (1)
  • protocol/x/ratelimit/util/capacity_test.go

@teddyding teddyding merged commit c18a149 into main Jan 12, 2024
32 of 33 checks passed
@teddyding teddyding deleted the td/core-824-2 branch January 12, 2024 13:41