[IND-461] handle deleveraging events #730
Conversation
Walkthrough

The changes introduce a new feature to handle deleveraging events in the dYdX trading system. This includes the creation of new interfaces, classes, and functions to process these events, update the relevant data structures, and generate the corresponding Kafka messages. The changes also add new test cases to validate the new feature, and new SQL scripts to handle the database operations related to deleveraging events.
Review Status
Actionable comments generated: 0
Configuration used: CodeRabbit UI
Files ignored due to filter (1)
- protocol/indexer/events/events.pb.go
Files selected for processing (18)
- indexer/packages/v4-protos/src/codegen/dydxprotocol/indexer/events/events.ts (2 hunks)
- indexer/services/ender/__tests__/helpers/constants.ts (2 hunks)
- indexer/services/ender/__tests__/validators/deleveraging-validator.test.ts (1 hunks)
- indexer/services/ender/src/handlers/deleveraging-handler.ts (1 hunks)
- indexer/services/ender/src/lib/block-processor.ts (2 hunks)
- indexer/services/ender/src/lib/types.ts (3 hunks)
- indexer/services/ender/src/validators/deleveraging-validator.ts (1 hunks)
- proto/dydxprotocol/indexer/events/events.proto (1 hunks)
- protocol/indexer/events/constants.go (3 hunks)
- protocol/indexer/events/deleveraging.go (1 hunks)
- protocol/indexer/events/deleveraging_test.go (1 hunks)
- protocol/mocks/MemClob.go (1 hunks)
- protocol/x/clob/keeper/deleveraging.go (4 hunks)
- protocol/x/clob/keeper/liquidations.go (3 hunks)
- protocol/x/clob/keeper/liquidations_test.go (1 hunks)
- protocol/x/clob/memclob/memclob.go (2 hunks)
- protocol/x/clob/types/liquidations_keeper.go (1 hunks)
- protocol/x/clob/types/memclob.go (1 hunks)
Files skipped from review due to trivial changes (5)
- indexer/services/ender/src/handlers/deleveraging-handler.ts
- indexer/services/ender/src/lib/types.ts
- indexer/services/ender/src/validators/deleveraging-validator.ts
- protocol/indexer/events/constants.go
- protocol/mocks/MemClob.go
Additional comments: 23
indexer/services/ender/__tests__/validators/deleveraging-validator.test.ts (1)
- 1-80: The test cases are well written and cover both the positive and negative scenarios. The use of `jest.spyOn` and `jest.clearAllMocks` in `beforeEach` and `afterEach` respectively ensures that the tests are isolated and do not interfere with each other. The use of `it.each` for testing multiple scenarios is a good practice. The `createBlock` function is a good abstraction that reduces code duplication.

indexer/services/ender/src/lib/block-processor.ts (2)
- 6-11: The import of `DeleveragingValidator` is added correctly. Ensure that the file path is correct and the class is exported properly in the `deleveraging-validator` module.
- 36-39: The `DeleveragingValidator` is added to the `TXN_EVENT_SUBTYPE_VERSION_TO_VALIDATOR_MAPPING` object. This is a good practice as it allows the application to dynamically select the correct validator based on the event subtype and version.

proto/dydxprotocol/indexer/events/events.proto (1)
- 165-187: The new `DeleveragingEventV1` message is well defined and includes all necessary fields to describe a deleveraging event. Ensure that all fields are correctly populated when creating a `DeleveragingEventV1` object.

protocol/x/clob/keeper/deleveraging.go (4)
- 27-39: The function `MaybeDeleverageSubaccount` now returns a new slice `fills []types.MatchPerpetualDeleveraging_Fill` in addition to the existing return values. The return statement in the error case has been updated to return `nil` for the new slice `fills`.
- 46-49: The return statement in the case where the subaccount cannot be deleveraged has been updated to return `nil` for the new slice `fills`.
- 66-68: The function `MaybeDeleverageSubaccount` now assigns the return value of `k.MemClob.DeleverageSubaccount` to the new slice `fills`.
- 108-111: The function `MaybeDeleverageSubaccount` now returns the new slice `fills` in addition to the existing return values.

protocol/indexer/events/deleveraging_test.go (1)
- 1-40: The test case `TestNewDeleveragingEvent_Success` seems to be well written and covers the success scenario for the function `NewDeleveragingEvent`. It checks if the function correctly initializes the `DeleveragingEventV1` struct. However, it would be beneficial to add more test cases to cover edge cases and failure scenarios. For example, what happens if invalid or null values are passed to the `NewDeleveragingEvent` function?

protocol/indexer/events/deleveraging.go (1)
- 1-28: The function `NewDeleveragingEvent` seems to be correctly implemented. It takes in the necessary parameters, converts the subaccount IDs to indexer subaccount IDs, and returns a new `DeleveragingEventV1` struct. However, ensure that error handling is properly done in the `SubaccountIdToIndexerSubaccountId` function and that it can handle any invalid inputs.

protocol/x/clob/memclob/memclob.go (2)
- 698-698: Ensure that all calls to `DeleverageSubaccount` throughout the codebase have been updated to handle the new return value `fills []types.MatchPerpetualDeleveraging_Fill`.
- 720-720: The function now returns `fills` along with `quantumsDeleveraged` and a `nil` error. Make sure the returned `fills` are handled properly in the calling function.

protocol/x/clob/types/memclob.go (1)
- 91-97: The new function `DeleverageSubaccount` is introduced in the `MemClob` interface. Ensure that this function is implemented in all structs that implement this interface. Also, make sure to handle the returned error properly in all places where this function is called.

protocol/x/clob/types/liquidations_keeper.go (1)
- 23-29: The new function `MaybeDeleverageSubaccount` is introduced in the `LiquidationsKeeper` interface. Ensure that this function is implemented wherever this interface is used. Also, make sure to handle the returned error properly in the implementation to avoid any runtime errors.

indexer/services/ender/__tests__/helpers/constants.ts (2)
- 26-34: The import of `DeleveragingEventV1` is added to handle the new deleveraging events.
- 280-290: A new constant `defaultDeleveragingEvent` of type `DeleveragingEventV1` is added. This will be used in tests to simulate a deleveraging event.

+export const defaultDeleveragingEvent: DeleveragingEventV1 = {
+  liquidated: defaultSenderSubaccountId,
+  offsetting: defaultRecipientSubaccountId,
+  clobPairId: 0,
+  fillAmount: Long.fromValue(10_000, true),
+  subticks: Long.fromValue(1_000_000_000, true),
+  isBuy: true,
+};

protocol/x/clob/keeper/liquidations.go (3)
- 6-7: Ensure that the newly imported packages are used correctly and do not introduce any breaking changes.
- 147-151: The function `MaybeDeleverageSubaccount` now has a new parameter `isBuy`. Make sure that all calls to this function have been updated to include this new parameter. Also, ensure that the returned `fills` and `clobPairId` are handled correctly.
- 161-178: This new loop sends an on-chain deleveraging event for each fill. Ensure that this does not introduce any performance issues, especially when the `fills` slice is large. Also, verify that the `indexerevents.NewDeleveragingEvent` function is used correctly and that the event is sent correctly.

indexer/packages/v4-protos/src/codegen/dydxprotocol/indexer/events/events.ts (4)
- 447-474: The new `DeleveragingEventV1` interface is well defined and includes all necessary fields for a deleveraging event. Ensure that all fields are being correctly populated when this interface is used.
- 481-502: The `DeleveragingEventV1SDKType` interface is a good addition for SDK users. It follows the same structure as `DeleveragingEventV1` but with snake_case naming convention for properties, which is more common in JavaScript.
- 1778-1787: The `createBaseDeleveragingEventV1` function is a good practice to initialize a `DeleveragingEventV1` object with default values. This can prevent potential issues with undefined properties.
- 1789-1871: The encoding and decoding functions for `DeleveragingEventV1` are correctly implemented. They handle all properties of `DeleveragingEventV1` and use the correct data types for encoding and decoding.
Review Status
Actionable comments generated: 0
Configuration used: CodeRabbit UI
Files selected for processing (1)
- protocol/mocks/ClobKeeper.go (1 hunks)
Files skipped from review due to trivial changes (1)
- protocol/mocks/ClobKeeper.go
Review Status
Actionable comments generated: 1
Configuration used: CodeRabbit UI
Files ignored due to filter (1)
- protocol/indexer/events/events.pb.go
Files selected for processing (3)
- indexer/packages/v4-protos/src/codegen/dydxprotocol/indexer/events/events.ts (2 hunks)
- proto/dydxprotocol/indexer/events/events.proto (1 hunks)
- protocol/x/clob/keeper/liquidations.go (3 hunks)
Files skipped from review due to trivial changes (1)
- proto/dydxprotocol/indexer/events/events.proto
Additional comments: 6
protocol/x/clob/keeper/liquidations.go (2)
6-7: Ensure that the newly added packages
indexerevents
andindexer_manager
are correctly imported and used in the code. Also, verify that these packages are included in the project dependencies.158-178: The loop that sends on-chain deleveraging events for each fill seems fine. However, ensure that the
GetIndexerEventManager().AddTxnEvent
function can handle the potential volume of events generated by this loop, especially in scenarios wherefills
contains a large number of items.indexer/packages/v4-protos/src/codegen/dydxprotocol/indexer/events/events.ts (4)
- 447-477: The new `DeleveragingEventV1` interface is well defined and includes all necessary fields for a deleveraging event. Ensure that all fields are being correctly populated when this interface is used.
- 484-508: The `DeleveragingEventV1SDKType` interface is a good addition for SDK users. It follows the same structure as `DeleveragingEventV1` but with snake_case naming convention for properties, which is more common in JavaScript.
- 1784-1793: The `createBaseDeleveragingEventV1` function is a good practice for creating a base object of `DeleveragingEventV1`. It sets default values for all properties.
- 1795-1877: The `encode`, `decode`, and `fromPartial` functions are necessary for handling `DeleveragingEventV1` objects. They seem to be correctly implemented, but ensure they are thoroughly tested.
fills, _, err := k.MaybeDeleverageSubaccount(ctx, subaccountId, perpetualId)

The error handling here is good, but it would be helpful to include more context in the error message. Consider including the `subaccountId` and `perpetualId` in the error message to make debugging easier.

- "Failed to deleverage subaccount."
+ fmt.Sprintf("Failed to deleverage subaccount with ID: %s and perpetual ID: %s", subaccountId, perpetualId)

Committable suggestion (Beta):

fills, _, err := k.MaybeDeleverageSubaccount(ctx, subaccountId, perpetualId)
if err != nil {
    err = fmt.Errorf("Failed to deleverage subaccount with ID: %s and perpetual ID: %s", subaccountId, perpetualId)
    // Handle the error
}
Review Status
Actionable comments generated: 3
Configuration used: CodeRabbit UI
Files ignored due to filter (1)
- protocol/indexer/events/events.pb.go
Files selected for processing (6)
- indexer/packages/postgres/src/types/fill-types.ts (1 hunks)
- indexer/packages/postgres/src/types/websocket-message-types.ts (1 hunks)
- indexer/packages/v4-protos/src/codegen/dydxprotocol/indexer/events/events.ts (2 hunks)
- indexer/services/ender/src/handlers/deleveraging-handler.ts (1 hunks)
- proto/dydxprotocol/indexer/events/events.proto (1 hunks)
- protocol/x/clob/keeper/liquidations.go (3 hunks)
Files skipped from review due to trivial changes (1)
- indexer/packages/postgres/src/types/fill-types.ts
Additional comments: 14
proto/dydxprotocol/indexer/events/events.proto (1)
- 165-188: The new `DeleveragingEventV1` message is well defined and includes all necessary fields to describe a deleveraging event. Ensure that all fields are correctly populated when creating instances of this message.

indexer/packages/postgres/src/types/websocket-message-types.ts (1)

- 175-180: The `TradeContent` interface has been updated to include two optional properties: `liquidation` and `deleveraging`. Ensure that these properties are being set correctly where this interface is used. Also, consider adding comments to describe what these properties represent.

  createdAt: IsoString,
+ // Indicates if the trade was part of a liquidation event
  liquidation?: boolean,
+ // Indicates if the trade was part of a deleveraging event
  deleveraging?: boolean,
}

indexer/packages/v4-protos/src/codegen/dydxprotocol/indexer/events/events.ts (4)
- 447-477: The new `DeleveragingEventV1` interface is well defined and includes all necessary fields for a deleveraging event. Ensure that all fields are being correctly populated when this interface is used.
- 484-508: The `DeleveragingEventV1SDKType` interface is a good addition for SDK users. It follows the same structure as `DeleveragingEventV1` but with snake_case field names, which is a common convention in JavaScript SDKs.
- 1784-1793: The `createBaseDeleveragingEventV1` function is a good practice for creating a base object with default values. This can help prevent null or undefined errors.
- 1796-1877: The `encode`, `decode`, and `fromPartial` functions are correctly implemented for the `DeleveragingEventV1` interface. They handle all fields and use appropriate methods for encoding/decoding.

protocol/x/clob/keeper/liquidations.go (2)
- 6-7: The new import statements are correctly placed and follow the Go convention of grouping standard library imports and third-party imports separately.
- 147-148: The function `MaybeDeleverageSubaccount` now returns an additional value `fills`. Ensure that all calls to this function throughout the codebase have been updated to handle the new return value.

indexer/services/ender/src/handlers/deleveraging-handler.ts (6)
- 89-123: The `getLatestPerpetualPosition` function retrieves the latest perpetual position for a given event. It throws an error if it cannot find an existing perpetual position. This is a good practice as it ensures that the function always returns a valid perpetual position or fails explicitly.
- 125-133: The `getOrderSide` function determines the order side based on the `event.isBuy` property and whether the event is a deleveraging event. This function is straightforward and correctly implements the logic described.
- 135-194: The `updatePerpetualPosition` function updates the perpetual position based on the event data. It first retrieves the latest perpetual position and then updates either the `sumOpen` and `entryPrice` fields or the `sumClose` and `exitPrice` fields based on the order side. The function throws an error if it cannot update the perpetual position. This is a good practice as it ensures that the function always updates the perpetual position or fails explicitly.
- 196-219: The `generateConsolidatedKafkaEvent` function generates a Kafka event for a given subaccount, position, fill, and perpetual market. The function correctly constructs the Kafka event message and calls the `generateConsolidatedSubaccountKafkaEvent` function to generate the event.
- 221-240: The `generateTradeKafkaEventFromDeleveraging` function generates a Kafka event for a given fill. The function correctly constructs the trade contents and calls the `generateConsolidatedTradeKafkaEvent` function to generate the event.
- 242-290: The `internalHandle` function handles the deleveraging event. It first retrieves the perpetual market for the event's `clobPairId`. If it cannot find the perpetual market, it throws an error. It then creates fills from the event and updates the perpetual positions. Finally, it generates Kafka events for the liquidated and offsetting subaccounts and the trade. The function correctly implements the logic for handling the deleveraging event.
clobPairId := k.mustGetClobPairForPerpetualId(ctx, perpetualId).Id
subticks := liquidationOrder.GetOrderSubticks()
isBuy := liquidationOrder.IsBuy()
// Send on-chain deleveraging event for each fill.
for _, fill := range fills {
    k.GetIndexerEventManager().AddTxnEvent(
        ctx,
        indexerevents.SubtypeDeleveraging,
        indexerevents.DeleveragingEventVersion,
        indexer_manager.GetBytes(
            indexerevents.NewDeleveragingEvent(
                subaccountId,
                fill.OffsettingSubaccountId,
                clobPairId,
                satypes.BaseQuantums(fill.FillAmount),
                uint64(subticks),
                isBuy,
            ),
        ),
    )
}
The loop added to process each fill returned by `MaybeDeleverageSubaccount` is correctly implemented. It creates an on-chain deleveraging event for each fill and adds it to the `IndexerEventManager`. However, error handling is missing for the `AddTxnEvent` function. If this function can throw an error, it should be handled appropriately.
for _, fill := range fills {
err := k.GetIndexerEventManager().AddTxnEvent(
ctx,
indexerevents.SubtypeDeleveraging,
indexerevents.DeleveragingEventVersion,
indexer_manager.GetBytes(
indexerevents.NewDeleveragingEvent(
subaccountId,
fill.OffsettingSubaccountId,
clobPairId,
satypes.BaseQuantums(fill.FillAmount),
uint64(subticks),
isBuy,
),
),
)
if err != nil {
return err
}
}
Committable suggestion (Beta):

for _, fill := range fills {
    err := k.GetIndexerEventManager().AddTxnEvent(
        ctx,
        indexerevents.SubtypeDeleveraging,
        indexerevents.DeleveragingEventVersion,
        indexer_manager.GetBytes(
            indexerevents.NewDeleveragingEvent(
                subaccountId,
                fill.OffsettingSubaccountId,
                clobPairId,
                satypes.BaseQuantums(fill.FillAmount),
                uint64(subticks),
                isBuy,
            ),
        ),
    )
    if err != nil {
        return err
    }
}
protected createFillsFromEvent(
  perpetualMarket: PerpetualMarketFromDatabase,
  event: DeleveragingEventV1,
): Promise<FillFromDatabase>[] {
  const eventId: Buffer = TendermintEventTable.createEventId(
    this.block.height.toString(),
    indexerTendermintEventToTransactionIndex(this.indexerTendermintEvent),
    this.indexerTendermintEvent.eventIndex,
  );
  const size: string = protocolTranslations.quantumsToHumanFixedString(
    event.fillAmount.toString(),
    perpetualMarket.atomicResolution,
  );
  const price: string = protocolTranslations.subticksToPrice(
    event.subticks.toString(10),
    perpetualMarket,
  );
  const transactionIndex: number = indexerTendermintEventToTransactionIndex(
    this.indexerTendermintEvent,
  );

  const liquidatedSubaccountFill: FillCreateObject = {
    subaccountId: SubaccountTable.uuid(event.liquidated!.owner, event.liquidated!.number),
    side: event.isBuy ? OrderSide.BUY : OrderSide.SELL,
    liquidity: Liquidity.TAKER,
    type: FillType.DELEVERAGED,
    clobPairId: event.clobPairId.toString(),
    size,
    price,
    quoteAmount: Big(size).times(price).toFixed(),
    eventId,
    transactionHash: this.block.txHashes[transactionIndex],
    createdAt: this.timestamp.toISO(),
    createdAtHeight: this.block.height.toString(),
    fee: '0',
  };
  const offsettingSubaccountFill: FillCreateObject = {
    ...liquidatedSubaccountFill,
    side: event.isBuy ? OrderSide.SELL : OrderSide.BUY,
    liquidity: Liquidity.MAKER,
    type: FillType.OFFSETTING,
  };

  return [
    FillTable.create(liquidatedSubaccountFill, { txId: this.txId }),
    FillTable.create(offsettingSubaccountFill, { txId: this.txId }),
  ];
}
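The size/price/quoteAmount arithmetic in the code above can be sketched with the protocol translations reduced to plain powers of ten. The helper below is a simplified, hypothetical stand-in for `protocolTranslations.quantumsToHumanFixedString` (which returns fixed-point strings), and the resolution value is an assumption for illustration:

```typescript
// Simplified stand-in: scale base quantums by the market's atomic resolution
// (conventionally a negative exponent, e.g. -10 for a 1e-10 base unit).
function quantumsToHuman(quantums: number, atomicResolution: number): number {
  return quantums * 10 ** atomicResolution;
}

// quoteAmount is size * price, matching the handler's Big(size).times(price).
function computeQuoteAmount(sizeHuman: number, price: number): number {
  return sizeHuman * price;
}
```

The real handler works with decimal strings via Big.js precisely to avoid the floating-point rounding this sketch is exposed to.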
The `createFillsFromEvent` function creates two fill objects: `liquidatedSubaccountFill` and `offsettingSubaccountFill`. The function assumes that the `event.liquidated` and `event.offsetting` properties are not null. Consider adding null checks for these properties to avoid potential runtime errors.
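One way to add the guard this comment asks for, with the event and subaccount types pared down to the fields involved (the field names follow the review; the helper function itself is hypothetical):

```typescript
interface SubaccountIdLike { owner: string; number: number }
interface DeleveragingEventLike {
  liquidated?: SubaccountIdLike;
  offsetting?: SubaccountIdLike;
}

// Fail fast with a descriptive error instead of relying on non-null
// assertions (event.liquidated!) deeper inside createFillsFromEvent.
function requireSubaccounts(
  event: DeleveragingEventLike,
): { liquidated: SubaccountIdLike; offsetting: SubaccountIdLike } {
  const { liquidated, offsetting } = event;
  if (liquidated === undefined || offsetting === undefined) {
    throw new Error('DeleveragingEventV1 is missing liquidated or offsetting subaccount');
  }
  return { liquidated, offsetting };
}
```

Calling this once at the top of the handler also lets TypeScript narrow the types, removing the need for `!` assertions afterwards.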
Review Status
Actionable comments generated: 1
Configuration used: CodeRabbit UI
Files ignored due to filter (1)
- indexer/services/comlink/public/swagger.json
Files selected for processing (16)
- indexer/packages/postgres/src/db/migrations/migration_files/20220727134954_create_fills.ts (1 hunks)
- indexer/packages/postgres/src/db/migrations/migration_files/20231101173444_update_fills_type_with_deleverage.ts (1 hunks)
- indexer/services/comlink/public/api-documentation.md (1 hunks)
- indexer/services/ender/__tests__/handlers/deleveraging-handler.test.ts (1 hunks)
- indexer/services/ender/__tests__/handlers/order-fills/liquidation-handler.test.ts (2 hunks)
- indexer/services/ender/__tests__/handlers/order-fills/order-handler.test.ts (2 hunks)
- indexer/services/ender/__tests__/helpers/constants.ts (2 hunks)
- indexer/services/ender/__tests__/helpers/indexer-proto-helpers.ts (4 hunks)
- indexer/services/ender/__tests__/lib/on-message.test.ts (6 hunks)
- indexer/services/ender/src/constants.ts (1 hunks)
- indexer/services/ender/src/handlers/deleveraging-handler.ts (1 hunks)
- indexer/services/ender/src/handlers/order-fills/abstract-order-fill-handler.ts (2 hunks)
- indexer/services/ender/src/handlers/order-fills/liquidation-handler.ts (3 hunks)
- indexer/services/ender/src/handlers/order-fills/order-handler.ts (2 hunks)
- indexer/services/ender/src/helpers/kafka-helper.ts (1 hunks)
- indexer/services/ender/src/lib/helper.ts (2 hunks)
Files skipped from review due to trivial changes (5)
- indexer/packages/postgres/src/db/migrations/migration_files/20220727134954_create_fills.ts
- indexer/services/comlink/public/api-documentation.md
- indexer/services/ender/__tests__/handlers/order-fills/liquidation-handler.test.ts
- indexer/services/ender/src/helpers/kafka-helper.ts
- indexer/services/ender/src/lib/helper.ts
Additional comments: 32
indexer/packages/postgres/src/db/migrations/migration_files/20231101173444_update_fills_type_with_deleverage.ts (1)
- 1-23: The migration script looks good. It correctly drops the old constraint and adds a new one with the updated values. The down function also correctly reverts the changes. However, please ensure that the new values 'DELEVERAGED' and 'OFFSETTING' are handled correctly in the rest of your application.
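The constraint the migration updates can be modeled in application code as a simple membership check; a sketch under stated assumptions ('DELEVERAGED' and 'OFFSETTING' are the values the review says the migration adds, while the earlier entries are assumed pre-existing fill types, not taken from the migration file):

```typescript
// The check constraint's allowed set, modeled as application-side validation.
// 'DELEVERAGED' and 'OFFSETTING' are the newly added values; the other
// entries are assumed pre-existing fill types for illustration.
const ALLOWED_FILL_TYPES: ReadonlySet<string> = new Set([
  'LIMIT',
  'LIQUIDATED',
  'LIQUIDATION',
  'DELEVERAGED',
  'OFFSETTING',
]);

function isAllowedFillType(fillType: string): boolean {
  return ALLOWED_FILL_TYPES.has(fillType);
}
```

Keeping a check like this next to the enum that feeds the database is one way to catch drift between the TypeScript types and the SQL constraint.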
indexer/services/ender/__tests__/handlers/order-fills/order-handler.test.ts (2)
- 45-48: The new constant `DELEVERAGING_EVENT_TYPE` is imported correctly. Ensure that it is defined in the imported module.
- 200-200: The new event type `DELEVERAGING_EVENT_TYPE` is added to the array of event types. Ensure that the event handling logic is updated to handle this new event type.

indexer/services/ender/src/constants.ts (1)
- 13-19: The new constant `DELEVERAGING_EVENT_TYPE` is introduced correctly and the comment provides clear context about its usage. Ensure that this constant is used correctly in the rest of the codebase where deleveraging events are processed.

indexer/services/ender/src/handlers/order-fills/abstract-order-fill-handler.ts (2)
- 43-48: The import of the `isDeleveraging` function is added. Ensure that this function is correctly implemented and tested.
- 425-429: The `deleveraging` property is added to the `TradeMessageContents` object. Ensure that the `isDeleveraging` function correctly identifies deleveraging events.

+ deleveraging: isDeleveraging(fill),

indexer/services/ender/src/handlers/order-fills/order-handler.ts (2)
indexer/services/ender/src/handlers/order-fills/order-handler.ts (2)
25-35: The new constant
DELEVERAGING_EVENT_TYPE
is imported correctly.61-67: The new event type
${DELEVERAGING_EVENT_TYPE}_${subaccountUuid}
is added to the array of event types in theOrderHandler
class. This ensures that DeleveragingEvents for the same subaccount are not processed in parallel.indexer/services/ender/__tests__/handlers/deleveraging-handler.test.ts (1)
- 1-329: The changes in this hunk are extensive and cover a wide range of updates to the `DeleveragingHandler` tests. The updates include new imports, updates to the `getParallelizationIds` method, the addition of a new test case, and updates to the "creates fills and updates perpetual positions" test case. The changes involve creating fills and updating perpetual positions based on a deleveraging event. Ensure that the new test cases cover all possible scenarios and edge cases. Also, verify that the updated `getParallelizationIds` method returns the correct parallelization IDs.

indexer/services/ender/__tests__/helpers/constants.ts (2)
- 26-34: The import of `DeleveragingEventV1` is added to handle the new deleveraging events. Ensure that this import is used correctly throughout the code.
- 280-290: A new constant `defaultDeleveragingEvent` of type `DeleveragingEventV1` is added. This object is used as a default value for deleveraging events. Ensure that all the properties of this object are correctly set and used.

export const defaultDeleveragingEvent: DeleveragingEventV1 = {
  liquidated: defaultSenderSubaccountId,
  offsetting: defaultRecipientSubaccountId,
  clobPairId: 1,
  fillAmount: Long.fromValue(10_000, true),
  subticks: Long.fromValue(1_000_000_000, true),
  isBuy: true,
};

indexer/services/ender/src/handlers/order-fills/liquidation-handler.ts (3)
- 16-26: The import statement has been updated to include a new constant `DELEVERAGING_EVENT_TYPE` from the `../../constants` module. Ensure that this constant is correctly defined in the `constants` module.
- 54-60: Two new parallelization IDs have been added to the `parallelizationIds` array in the `LiquidationHandler` class: `${DELEVERAGING_EVENT_TYPE}_${subaccountUuid}`. This is to ensure that DeleveragingEvents for the same subaccount are not processed in parallel. This is a good practice to avoid race conditions.
- 68-74: The logic for handling `liquidationOrder` has been modified to include the new parallelization ID `${DELEVERAGING_EVENT_TYPE}_${subaccountUuid}`. This is to ensure that DeleveragingEvents for the same subaccount are not processed in parallel. This is a good practice to avoid race conditions.

indexer/services/ender/src/handlers/deleveraging-handler.ts (8)
- 37-54: The `getParallelizationIds` function generates a list of IDs for parallel processing. Ensure that the logic for generating these IDs is correct and that it doesn't lead to any race conditions or other concurrency issues.
- 56-104: The `createFillsFromEvent` function creates fill objects from the event data. Ensure that the logic for creating these objects is correct and that all necessary fields are included.
- 106-140: The `getLatestPerpetualPosition` function retrieves the latest perpetual position. It throws an error if it can't find an existing perpetual position. Ensure that this error handling is appropriate and that it won't lead to any unexpected behavior.
- 142-150: The `getOrderSide` function determines the order side based on whether the account was deleveraged. Ensure that this logic is correct.
- 152-211: The `updatePerpetualPosition` function updates the perpetual position. It throws an error if it can't update the position. Ensure that this error handling is appropriate and that it won't lead to any unexpected behavior.
- 213-236: The `generateConsolidatedKafkaEvent` function generates a consolidated Kafka event. Ensure that the logic for generating this event is correct and that all necessary fields are included.
- 238-258: The `generateTradeKafkaEventFromDeleveraging` function generates a trade Kafka event from a deleveraging event. Ensure that the logic for generating this event is correct and that all necessary fields are included.
- 260-308: The `internalHandle` function handles the deleveraging event. It throws an error if it can't find the perpetual market. Ensure that this error handling is appropriate and that it won't lead to any unexpected behavior. Also, ensure that the logic for handling the event is correct.

indexer/services/ender/__tests__/helpers/indexer-proto-helpers.ts (4)
- 51-51: The import of `DeleveragingEventV1` is necessary for the new function `createKafkaMessageFromDeleveragingEvent`.
- 60-60: The import of `isDeleveraging` is necessary for the updated function `expectDefaultTradeKafkaMessageFromTakerFillId`.
- 475-508: The function `createKafkaMessageFromDeleveragingEvent` is well structured and correctly uses the imported `DeleveragingEventV1` type. It creates a Kafka message from a deleveraging event, which is a new feature introduced in this PR.
- 810-810: The `deleveraging` property is correctly added to the output of the `expectDefaultTradeKafkaMessageFromTakerFillId` function. It uses the imported `isDeleveraging` function to determine if the taker fill is a deleveraging event.

indexer/services/ender/__tests__/lib/on-message.test.ts (6)
- 21-24: The import of `DeleveragingEventV1` is correct and necessary for the tests.
- 43-46: The addition of `defaultDeleveragingEvent` is correct and necessary for the tests.
- 54-59: The mocking of `DeleveragingHandler` is correct and necessary for the tests.
- 87-91: The mock return value for `DeleveragingHandler` is correct and necessary for the tests.
- 165-168: The creation of `defaultDeleveragingEventBinary` is correct and necessary for the tests.
- 385-440: The test case for processing a block with a deleveraging event is correctly implemented. It creates the necessary database entries, prepares a block with a deleveraging event, calls `onMessage` with the block, and checks the expected outcomes.
export class DeleveragingHandler extends Handler<DeleveragingEventV1> {
  eventType: string = 'DeleveragingEvent';
The `eventType` property is hardcoded as 'DeleveragingEvent'. This is fine if the class is only intended to handle this specific event type. However, if the class is expected to handle multiple event types, consider making `eventType` a parameter in the constructor.
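The alternative this comment suggests can be sketched as follows; the base class here is a minimal, hypothetical stand-in for the real `Handler`, shown only to illustrate moving `eventType` into the constructor:

```typescript
// Minimal stand-in for the abstract handler base class.
abstract class HandlerBase<T> {
  constructor(public readonly eventType: string) {}
  abstract internalHandle(event: T): void;
}

// eventType supplied through the constructor rather than hardcoded as a
// property, so subclasses (or tests) can vary it without redefining the class.
class DeleveragingHandlerSketch extends HandlerBase<unknown> {
  constructor(eventType: string = 'DeleveragingEvent') {
    super(eventType);
  }
  internalHandle(_event: unknown): void {}
}
```

With a default parameter, existing call sites keep working unchanged while tests gain the option to inject a different type string.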
Review Status

Actionable comments generated: 0

Configuration used: CodeRabbit UI

Files selected for processing (4)
- indexer/services/ender/tests/handlers/order-fills/liquidation-handler.test.ts (2 hunks)
- indexer/services/ender/tests/handlers/order-fills/order-handler.test.ts (2 hunks)
- indexer/services/ender/src/handlers/order-fills/order-handler.ts (2 hunks)
- protocol/x/clob/keeper/deleveraging.go (4 hunks)

Files skipped from review due to trivial changes (1)
- indexer/services/ender/src/handlers/order-fills/order-handler.ts

Additional comments: 6

indexer/services/ender/__tests__/handlers/order-fills/order-handler.test.ts (2)

45-45: The new constant `DELEVERAGING_EVENT_TYPE` is imported correctly. Ensure that it is defined in the imported module.

200-200: The `DELEVERAGING_EVENT_TYPE` is used correctly in the test case. Ensure that the test case is updated to handle this new event type.

protocol/x/clob/keeper/deleveraging.go (2)

29-35: The function `MaybeDeleverageSubaccount` now returns a new slice `fills []types.MatchPerpetualDeleveraging_Fill` in addition to the existing return values. Ensure that all calls to this function throughout the codebase have been updated to handle the new return value.

113-113: The function `MaybeDeleverageSubaccount` now returns a new slice `fills []types.MatchPerpetualDeleveraging_Fill` in addition to the existing return values. Ensure that all calls to this function throughout the codebase have been updated to handle the new return value.

indexer/services/ender/__tests__/handlers/order-fills/liquidation-handler.test.ts (2)

42-46: The new constant `DELEVERAGING_EVENT_TYPE` is imported correctly. Ensure that it is defined correctly in the constants file.

199-207: The `DELEVERAGING_EVENT_TYPE` is added to the `parallelizationIds` array. This is consistent with the PR summary stating that a new `parallelizationIds` entry for `DELEVERAGING_EVENT_TYPE` was added.
Review Status

Actionable comments generated: 0

Configuration used: CodeRabbit UI

Files selected for processing (2)
- indexer/services/ender/tests/handlers/deleveraging-handler.test.ts (1 hunks)
- indexer/services/ender/src/handlers/deleveraging-handler.ts (1 hunks)

Additional comments: 8

indexer/services/ender/__tests__/handlers/deleveraging-handler.test.ts (1)

1-340: The test cases for the `DeleveragingHandler` class seem comprehensive and cover various scenarios. However, ensure that all edge cases and error scenarios are covered. Also, consider adding tests for concurrency issues if applicable.

indexer/services/ender/src/handlers/deleveraging-handler.ts (7)

37-65: The `getParallelizationIds` function has been updated to include additional parallelization IDs to ensure that certain events for the same subaccount or order are not processed in parallel. This is a good practice to prevent race conditions. However, ensure that the additional parallelization does not significantly impact the performance of the system.

67-115: The `createFillsFromEvent` function has been introduced to create fill objects from a deleveraging event. It correctly creates the fill objects for both the liquidated and offsetting subaccounts. However, ensure that the `FillTable.create` function is atomic and handles any potential database errors.

117-151: The `getLatestPerpetualPosition` function has been introduced to fetch the latest perpetual position for a given subaccount. It correctly throws an error if no perpetual position is found. However, ensure that the error is handled upstream to prevent the application from crashing.

163-222: The `updatePerpetualPosition` function has been introduced to update the perpetual position of a subaccount based on a deleveraging event. It correctly updates either the sumOpen and entryPrice or the sumClose and exitPrice based on the side of the order. However, ensure that the `PerpetualPositionTable.update` function is atomic and handles any potential database errors.

224-247: The `generateConsolidatedKafkaEvent` function has been introduced to generate a consolidated Kafka event for a given subaccount. It correctly generates the event based on the fill and position of the subaccount. However, ensure that the Kafka event is correctly consumed and processed by the relevant services.

249-269: The `generateTradeKafkaEventFromDeleveraging` function has been introduced to generate a trade Kafka event from a deleveraging event. It correctly generates the event based on the fill of the trade. However, ensure that the Kafka event is correctly consumed and processed by the relevant services.

271-319: The `internalHandle` function has been updated to use the new functionality introduced in this hunk. It correctly creates the fills, updates the perpetual positions, and generates the Kafka events for the deleveraging event. However, ensure that any errors thrown by the new functions are correctly handled in this function to prevent the application from crashing.
Review Status

Actionable comments generated: 10

Configuration used: CodeRabbit UI

Files selected for processing (5)
- indexer/services/ender/tests/handlers/deleveraging-handler.test.ts (1 hunks)
- indexer/services/ender/src/handlers/deleveraging-handler.ts (1 hunks)
- indexer/services/ender/src/helpers/postgres/postgres-functions.ts (2 hunks)
- indexer/services/ender/src/scripts/dydx_deleveraging_handler.sql (1 hunks)
- indexer/services/ender/src/scripts/dydx_update_perpetual_position.sql (1 hunks)

Additional comments: 7

indexer/services/ender/src/helpers/postgres/postgres-functions.ts (2)

31-44: Ensure that the new SQL scripts 'dydx_deleveraging_handler.sql' and 'dydx_from_protocol_is_buy.sql' are correctly implemented and tested. Also, verify that they are correctly loaded and executed in the `createPostgresFunctions` function.

57-63: Ensure that the new stored procedure 'dydx_update_perpetual_position.sql' is correctly implemented and tested. Also, verify that it is correctly loaded and executed in the `createPostgresFunctions` function.

indexer/services/ender/src/scripts/dydx_update_perpetual_position.sql (1)

1-61: The function `dydx_update_perpetual_position` seems to be well-structured and follows good practices for SQL functions. It retrieves the latest position record, updates the position based on the side, and performs the update in the database. The function returns the updated perpetual position record. However, please ensure that the functions `dydx_perpetual_position_and_order_side_matching`, `dydx_trim_scale`, and `dydx_get_weighted_average` used in this function are defined and work as expected. Also, ensure that the table `perpetual_positions` and its columns exist and have the correct data types.

indexer/services/ender/src/scripts/dydx_deleveraging_handler.sql (2)

41-48: Good use of exception handling to ensure data integrity and provide meaningful error messages.

121-132: The returned JSON object is well-structured and provides a comprehensive view of the deleveraging event.

indexer/services/ender/src/handlers/deleveraging-handler.ts (2)

28-56: Ensure that the `getParallelizationIds` function correctly handles the new event types and that the event types are correctly defined in the constants file.

106-159: The `internalHandle` function has been significantly modified. Ensure that the SQL queries are correctly formed and that the error handling is robust. Also, verify that the Kafka events are correctly generated and sent.
liquidated_perpetual_position_record = dydx_update_perpetual_position(
    liquidated_subaccount_uuid,
    perpetual_id,
    liquidated_side,
    size,
    price);
offsetting_perpetual_position_record = dydx_update_perpetual_position(
    offsetting_subaccount_uuid,
    perpetual_id,
    offsetting_side,
    size,
    price);
The `dydx_update_perpetual_position` function is called twice with similar arguments. If the function is idempotent and doesn't rely on the state of the database between calls, consider looping over these operations to reduce code duplication.
    defaultHeight,
    transactionIndex,
    eventIndex,
  );

  // This size should be in fixed-point notation rather than exponential notation.
  const quoteAmount: string = '0.00000000001'; // quote amount is price * fillAmount = 1e3 * 1e-14 = 1e-11
  const totalFilled: string = '0.00000000000001'; // fillAmount in human = 10^4*10^-18=10^-14
  const price: string = '1000'; // 10^9*10^-6=10^3
  const perpetualMarket: PerpetualMarketFromDatabase | undefined = perpetualMarketRefresher
    .getPerpetualMarketFromId(
      defaultDeleveragingEvent.perpetualId.toString(),
    );

  await expectFillInDatabase({
    subaccountId: SubaccountTable.subaccountIdToUuid(defaultDeleveragingEvent.offsetting!),
    clientId: '0',
    liquidity: Liquidity.MAKER,
    size: totalFilled,
    price,
    quoteAmount,
    eventId,
    transactionHash: defaultTxHash,
    createdAt: defaultDateTime.toISO(),
    createdAtHeight: defaultHeight,
    type: FillType.OFFSETTING,
    clobPairId: perpetualMarket!.clobPairId,
    side: OrderSide.BUY,
    orderFlags: '0',
    clientMetadata: null,
    hasOrderId: false,
    fee: '0',
  });
  await expectFillInDatabase({
    subaccountId: SubaccountTable.subaccountIdToUuid(defaultDeleveragingEvent.liquidated!),
    clientId: '0',
    liquidity: Liquidity.TAKER,
    size: totalFilled,
    price,
    quoteAmount,
    eventId,
    transactionHash: defaultTxHash,
    createdAt: defaultDateTime.toISO(),
    createdAtHeight: defaultHeight,
    type: FillType.DELEVERAGED,
    clobPairId: perpetualMarket!.clobPairId,
    side: OrderSide.SELL,
    orderFlags: '0',
    clientMetadata: null,
    hasOrderId: false,
    fee: '0',
  });

  await Promise.all([
    expectFillsAndPositionsSubaccountKafkaMessages(
      producerSendMock,
      eventId,
      true,
    ),
    expectFillsAndPositionsSubaccountKafkaMessages(
      producerSendMock,
      eventId,
      false,
    ),
    expectDefaultTradeKafkaMessageFromTakerFillId(
      producerSendMock,
      eventId,
    ),
  ]);
});

async function expectFillsAndPositionsSubaccountKafkaMessages(
  producerSendMock: jest.SpyInstance,
  eventId: Buffer,
  deleveraged: boolean,
) {
  const subaccountId: IndexerSubaccountId = deleveraged
    ? defaultDeleveragingEvent.liquidated! : defaultDeleveragingEvent.offsetting!;
  const liquidity: Liquidity = deleveraged ? Liquidity.TAKER : Liquidity.MAKER;
  const positionId: string = (
    await PerpetualPositionTable.findOpenPositionForSubaccountPerpetual(
      SubaccountTable.subaccountIdToUuid(subaccountId),
      testConstants.defaultPerpetualMarket2.id,
    )
  )!.id;

  await Promise.all([
    expectFillSubaccountKafkaMessageFromLiquidationEvent(
      producerSendMock,
      subaccountId,
      FillTable.uuid(eventId, liquidity),
      positionId,
      defaultHeight,
      transactionIndex,
      eventIndex,
      testConstants.defaultPerpetualMarket2.ticker,
    ),
  ]);
}
});
The test cases seem to be well-structured and cover different scenarios. However, it would be beneficial to add more comments to explain the logic behind each test case, especially for complex ones. This would improve the maintainability of the code.
const offsettingSubaccountId: IndexerSubaccountId = defaultDeleveragingEvent.offsetting!;
const deleveragedSubaccountId: IndexerSubaccountId = defaultDeleveragingEvent.liquidated!;

const indexerTendermintEvent: IndexerTendermintEvent = createIndexerTendermintEvent(
  DydxIndexerSubtypes.DELEVERAGING,
  DeleveragingEventV1.encode(defaultDeleveragingEvent).finish(),
  transactionIndex,
  eventIndex,
);
const block: IndexerTendermintBlock = createIndexerTendermintBlock(
  0,
  defaultTime,
  [indexerTendermintEvent],
  [defaultTxHash],
);

const handler: DeleveragingHandler = new DeleveragingHandler(
  block,
  indexerTendermintEvent,
  0,
  defaultDeleveragingEvent,
);

const offsettingSubaccountUuid: string = SubaccountTable.subaccountIdToUuid(
  offsettingSubaccountId,
);
const deleveragedSubaccountUuid: string = SubaccountTable.subaccountIdToUuid(
  deleveragedSubaccountId,
);

const perpetualMarket: PerpetualMarketFromDatabase | undefined = perpetualMarketRefresher
  .getPerpetualMarketFromId(
    defaultDeleveragingEvent.perpetualId.toString(),
  );
expect(perpetualMarket).toBeDefined();

expect(handler.getParallelizationIds()).toEqual([
  `${handler.eventType}_${offsettingSubaccountUuid}_${perpetualMarket!.clobPairId}`,
  `${handler.eventType}_${deleveragedSubaccountUuid}_${perpetualMarket!.clobPairId}`,
  // To ensure that SubaccountUpdateEvents and OrderFillEvents for the same subaccount are not
  // processed in parallel
  `${SUBACCOUNT_ORDER_FILL_EVENT_TYPE}_${offsettingSubaccountUuid}`,
  `${SUBACCOUNT_ORDER_FILL_EVENT_TYPE}_${deleveragedSubaccountUuid}`,
  // To ensure that StatefulOrderEvents and OrderFillEvents for the same order are not
  // processed in parallel
  `${DELEVERAGING_EVENT_TYPE}_${offsettingSubaccountUuid}`,
  `${DELEVERAGING_EVENT_TYPE}_${deleveragedSubaccountUuid}`,
]);
});
This test case checks the `getParallelizationIds` function of the `DeleveragingHandler` class. It seems to be correctly set up and the assertions look valid. However, it would be beneficial to add a comment explaining what this test case is checking and why.
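The ID list asserted in that test can be sketched as a small pure function; this is only an approximation of the handler's logic, with the `SUBACCOUNT_ORDER_FILL_EVENT_TYPE` value assumed for illustration:

```typescript
// Sketch: one ID per (event type, subaccount, clobPairId), plus shared
// subaccount-order-fill keys so fills for the same subaccount serialize.
const SUBACCOUNT_ORDER_FILL_EVENT_TYPE = 'SubaccountOrderFillEvent'; // assumed value

function getParallelizationIds(
  eventType: string,
  subaccountUuids: string[],
  clobPairId: string,
): string[] {
  return [
    ...subaccountUuids.map((uuid) => `${eventType}_${uuid}_${clobPairId}`),
    ...subaccountUuids.map((uuid) => `${SUBACCOUNT_ORDER_FILL_EVENT_TYPE}_${uuid}`),
  ];
}
```

Two events that share any ID in their lists will not be processed in parallel.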
it('DeleveragingEvent fails validation', async () => {
  const deleveragingEvent: DeleveragingEventV1 = DeleveragingEventV1
    .fromPartial({ // no liquidated subaccount
      ...defaultDeleveragingEvent,
      liquidated: undefined,
    });
  const kafkaMessage: KafkaMessage = createKafkaMessageFromDeleveragingEvent({
    deleveragingEvent,
    transactionIndex,
    eventIndex,
    height: parseInt(defaultHeight, 10),
    time: defaultTime,
    txHash: defaultTxHash,
  });
  const loggerCrit = jest.spyOn(logger, 'crit');
  await expect(onMessage(kafkaMessage)).rejects.toThrowError();

  expect(loggerCrit).toHaveBeenCalledWith(expect.objectContaining({
    at: 'onMessage#onMessage',
    message: 'Error: Unable to parse message, this must be due to a bug in V4 node',
  }));
});
This test case checks that the `DeleveragingEvent` fails validation when the `liquidated` subaccount is undefined. The setup and assertions look correct. However, it would be beneficial to add a comment explaining what this test case is checking and why.
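A minimal sketch of the check that validation performs, assuming the validator simply requires both subaccount ids to be present (the real validator in deleveraging-validator.ts may check more):

```typescript
// Sketch: a deleveraging event without both subaccounts is rejected.
interface IndexerSubaccountId {
  owner: string;
  number: number;
}

interface DeleveragingEventV1 {
  liquidated?: IndexerSubaccountId;
  offsetting?: IndexerSubaccountId;
  perpetualId: number;
}

function validateDeleveragingEvent(event: DeleveragingEventV1): void {
  if (event.liquidated === undefined) {
    throw new Error('DeleveragingEvent must contain a liquidated subaccountId');
  }
  if (event.offsetting === undefined) {
    throw new Error('DeleveragingEvent must contain an offsetting subaccountId');
  }
}
```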
it('creates fills and updates perpetual positions', async () => {
  const kafkaMessage: KafkaMessage = createKafkaMessageFromDeleveragingEvent({
    deleveragingEvent: defaultDeleveragingEvent,
    transactionIndex,
    eventIndex,
    height: parseInt(defaultHeight, 10),
    time: defaultTime,
    txHash: defaultTxHash,
  });

  // create initial Subaccounts
  await Promise.all([
    SubaccountTable.create(offsettingSubaccount),
    SubaccountTable.create(deleveragedSubaccount),
  ]);
  // create initial PerpetualPositions
  await Promise.all([
    PerpetualPositionTable.create(offsettingPerpetualPosition),
    PerpetualPositionTable.create({
      ...offsettingPerpetualPosition,
      subaccountId: SubaccountTable.subaccountIdToUuid(defaultDeleveragingEvent.liquidated!),
    }),
  ]);

  const producerSendMock: jest.SpyInstance = jest.spyOn(producer, 'send');
  await onMessage(kafkaMessage);

  const eventId: Buffer = TendermintEventTable.createEventId(
    defaultHeight,
    transactionIndex,
    eventIndex,
  );

  // This size should be in fixed-point notation rather than exponential notation.
  const quoteAmount: string = '0.00000000001'; // quote amount is price * fillAmount = 1e3 * 1e-14 = 1e-11
  const totalFilled: string = '0.00000000000001'; // fillAmount in human = 10^4*10^-18=10^-14
  const price: string = '1000'; // 10^9*10^-6=10^3
  const perpetualMarket: PerpetualMarketFromDatabase | undefined = perpetualMarketRefresher
    .getPerpetualMarketFromId(
      defaultDeleveragingEvent.perpetualId.toString(),
    );

  await expectFillInDatabase({
    subaccountId: SubaccountTable.subaccountIdToUuid(defaultDeleveragingEvent.offsetting!),
    clientId: '0',
    liquidity: Liquidity.MAKER,
    size: totalFilled,
    price,
    quoteAmount,
    eventId,
    transactionHash: defaultTxHash,
    createdAt: defaultDateTime.toISO(),
    createdAtHeight: defaultHeight,
    type: FillType.OFFSETTING,
    clobPairId: perpetualMarket!.clobPairId,
    side: OrderSide.BUY,
    orderFlags: '0',
    clientMetadata: null,
    hasOrderId: false,
    fee: '0',
  });
  await expectFillInDatabase({
    subaccountId: SubaccountTable.subaccountIdToUuid(defaultDeleveragingEvent.liquidated!),
    clientId: '0',
    liquidity: Liquidity.TAKER,
    size: totalFilled,
    price,
    quoteAmount,
    eventId,
    transactionHash: defaultTxHash,
    createdAt: defaultDateTime.toISO(),
    createdAtHeight: defaultHeight,
    type: FillType.DELEVERAGED,
    clobPairId: perpetualMarket!.clobPairId,
    side: OrderSide.SELL,
    orderFlags: '0',
    clientMetadata: null,
    hasOrderId: false,
    fee: '0',
  });

  await Promise.all([
    expectFillsAndPositionsSubaccountKafkaMessages(
      producerSendMock,
      eventId,
      true,
    ),
    expectFillsAndPositionsSubaccountKafkaMessages(
      producerSendMock,
      eventId,
      false,
    ),
    expectDefaultTradeKafkaMessageFromTakerFillId(
      producerSendMock,
      eventId,
    ),
  ]);
});
This test case checks that the `onMessage` function creates fills and updates perpetual positions correctly. The setup and assertions look correct. However, it would be beneficial to add a comment explaining what this test case is checking and why.
).catch((error: Error) => {
  logger.error({
    at: 'deleveragingHandler#handleViaSqlFunction',
    message: 'Failed to handle DeleveragingEventV1',
    error,
  });
  throw error;
});
The error handling here could be improved by providing more context in the error message. Consider including the block height, event index, and transaction index in the error message.
- logger.error({
- at: 'deleveragingHandler#handleViaSqlFunction',
- message: 'Failed to handle DeleveragingEventV1',
- error,
- });
+ logger.error({
+ at: 'deleveragingHandler#handleViaSqlFunction',
+ message: `Failed to handle DeleveragingEventV1 at block height ${this.block.height}, event index ${this.indexerTendermintEvent.eventIndex}, transaction index ${transactionIndex}`,
+ error,
+ });
Committable suggestion

[!IMPORTANT]
Carefully review the code before committing. Make sure it correctly replaces the highlighted code, has no missing lines, and has no indentation issues.
).catch((error: Error) => {
  logger.error({
    at: 'deleveragingHandler#handleViaSqlFunction',
    message: `Failed to handle DeleveragingEventV1 at block height ${this.block.height}, event index ${this.indexerTendermintEvent.eventIndex}, transaction index ${transactionIndex}`,
    error,
  });
  throw error;
});
indexer/packages/postgres/src/db/migrations/migration_files/20220727134954_create_fills.ts
Review Status
Actionable comments generated: 0
Configuration used: CodeRabbit UI
Files selected for processing (1)
- indexer/services/ender/src/helpers/postgres/postgres-functions.ts (2 hunks)
Files skipped from review due to trivial changes (1)
- indexer/services/ender/src/helpers/postgres/postgres-functions.ts
I went through the SQL and comments and just had some minor points.
indexer/services/ender/src/handlers/order-fills/abstract-order-fill-handler.ts
indexer/services/ender/src/scripts/dydx_deleveraging_handler.sql
indexer/services/ender/src/scripts/dydx_update_perpetual_position.sql
indexer/services/ender/src/scripts/dydx_update_perpetual_position.sql
indexer/services/ender/src/scripts/dydx_deleveraging_handler.sql
indexer/services/ender/src/scripts/dydx_deleveraging_handler.sql
indexer/services/ender/src/scripts/dydx_deleveraging_handler.sql
indexer/services/ender/src/scripts/dydx_update_perpetual_position.sql
indexer/services/ender/src/scripts/dydx_deleveraging_handler.sql
Review Status

Actionable comments generated: 4

Configuration used: CodeRabbit UI

Files selected for processing (5)
- indexer/services/ender/src/handlers/deleveraging-handler.ts (1 hunks)
- indexer/services/ender/src/handlers/order-fills/abstract-order-fill-handler.ts (2 hunks)
- indexer/services/ender/src/helpers/postgres/postgres-functions.ts (2 hunks)
- indexer/services/ender/src/scripts/dydx_deleveraging_handler.sql (1 hunks)
- indexer/services/ender/src/scripts/dydx_update_perpetual_position.sql (1 hunks)

Files skipped from review due to trivial changes (1)
- indexer/services/ender/src/handlers/order-fills/abstract-order-fill-handler.ts

Additional comments: 18

indexer/services/ender/src/helpers/postgres/postgres-functions.ts (2)

31-37: The addition of 'dydx_deleveraging_handler.sql' indicates a new database operation related to handling deleveraging. Ensure that the SQL script is correctly implemented and tested.

58-64: The addition of 'dydx_update_perpetual_position.sql' indicates a new database operation or modification to existing ones related to updating perpetual positions. Ensure that the SQL script is correctly implemented and tested.

indexer/services/ender/src/scripts/dydx_deleveraging_handler.sql (4)

22-23: The constants `QUOTE_CURRENCY_ATOMIC_RESOLUTION` and `FEE` are still hard-coded. If these values are subject to change, consider moving them to a configuration file or database.

54-57: The calculations for `size` and `price` are still complex and used in multiple places. Consider extracting these into their own SQL functions for better readability and maintainability.

66-104: The insertion of `liquidated_fill_record` and `offsetting_fill_record` into the `fills` table is still almost identical. Consider creating a function to handle this operation to reduce code duplication.

39-48: The function assumes that there will be only one record in the `perpetual_markets` table with a given `perpetualId`. If this assumption is not correct, the function may fail or return incorrect results. Please verify this assumption.

indexer/services/ender/src/scripts/dydx_update_perpetual_position.sql (8)

1-9: The function documentation is clear and provides a good overview of the function's purpose, parameters, and return value.

10-16: The function signature is well-defined with appropriate parameter types.

24-30: The query to retrieve the latest perpetual position record is correct. It uses the appropriate WHERE clause to filter by `subaccountId` and `perpetualId`, and orders by `createdAtHeight` in descending order to get the latest record.

32-34: The function correctly raises an exception if no perpetual position record is found for the given `subaccountId` and `perpetualId`.

36-39: The local variables `sum_open`, `entry_price`, `sum_close`, and `exit_price` are correctly initialized with values from the retrieved perpetual position record.

41-56: The function correctly updates the perpetual position record based on the side of the fill. It uses the `dydx_perpetual_position_and_order_side_matching` function to determine if the side of the fill matches the side of the perpetual position, and updates the `sumOpen` and `entryPrice` or `sumClose` and `exitPrice` accordingly.

58-65: The function correctly updates the perpetual position record in the database using the updated values of `sumOpen`, `entryPrice`, `sumClose`, and `exitPrice`.

67-68: The function correctly returns the updated perpetual position record.

indexer/services/ender/src/handlers/deleveraging-handler.ts (4)

28-56: The `getParallelizationIds` method now returns an array of strings. Ensure that all calls to this function throughout the codebase have been updated to handle the new return type.

58-81: The `generateConsolidatedKafkaEvent` method has been added. This method generates a consolidated Kafka event from the provided parameters. Ensure that this method is being called correctly and that the returned event is being handled properly.

83-103: The `generateTradeKafkaEventFromDeleveraging` method has been added. This method generates a trade Kafka event from a deleveraging event. Ensure that this method is being called correctly and that the returned event is being handled properly.

105-159: The `internalHandle` method has been significantly modified to include new functionality and database operations. Ensure that all calls to this method throughout the codebase have been updated to match the new signature and that the new functionality is working as expected.
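For context on the `dydx_get_weighted_average` helper referenced in the reviews above: a size-weighted average entry price can be sketched in TypeScript as follows. This mirrors only the arithmetic; the SQL function operates on arbitrary-precision numerics, and the helper names here are illustrative.

```typescript
// Sketch: when a fill increases a position, the new entry price is the
// average of the old entry price and the fill price, weighted by size.
function weightedAverage(
  priceA: number,
  weightA: number,
  priceB: number,
  weightB: number,
): number {
  return (priceA * weightA + priceB * weightB) / (weightA + weightB);
}

function nextEntryPrice(
  entryPrice: number,
  sumOpen: number,
  fillPrice: number,
  fillSize: number,
): number {
  return weightedAverage(entryPrice, sumOpen, fillPrice, fillSize);
}
```

For example, holding 10 units at an entry price of 1000 and filling 10 more at 2000 yields a new entry price of 1500.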
  TODO(IND-238): Extract out calculation of quantums and subticks to their own SQL functions.
  */
  size = dydx_trim_scale(dydx_from_jsonlib_long(event_data->'fillAmount') *
                         power(10, perpetual_market_record."atomicResolution")::numeric);
  price = dydx_trim_scale(dydx_from_jsonlib_long(event_data->'price') *
                          power(10, QUOTE_CURRENCY_ATOMIC_RESOLUTION)::numeric);
The calculations for `size` and `price` are complex and used in multiple places. Consider extracting these into their own SQL functions for better readability and maintainability.
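If those calculations were extracted, the resulting helpers would encode the conversions that the test comments elsewhere in this PR describe (e.g. `10^9 * 10^-6 = 10^3`); a TypeScript sketch, with `QUOTE_CURRENCY_ATOMIC_RESOLUTION = -6` assumed from those comments:

```typescript
// Sketch: converting on-chain integer quantums to human-readable values
// by scaling with the market's atomic resolution.
const QUOTE_CURRENCY_ATOMIC_RESOLUTION = -6; // assumed value

function humanSize(fillAmountQuantums: number, atomicResolution: number): number {
  return fillAmountQuantums * Math.pow(10, atomicResolution);
}

function humanPrice(priceQuantums: number): number {
  return priceQuantums * Math.pow(10, QUOTE_CURRENCY_ATOMIC_RESOLUTION);
}
```

Note that the SQL works on `numeric` precisely for this reason: JavaScript doubles would lose precision at these magnitudes, so this sketch is only an illustration of the exponent arithmetic.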
INSERT INTO fills
    ("id", "subaccountId", "side", "liquidity", "type", "clobPairId", "size", "price", "quoteAmount",
     "eventId", "transactionHash", "createdAt", "createdAtHeight", "fee")
VALUES (dydx_uuid_from_fill_event_parts(event_id, 'MAKER'),
        offsetting_subaccount_uuid,
        offsetting_side,
        'MAKER',
        'OFFSETTING',
        clob_pair_id,
        size,
        price,
        dydx_trim_scale(size * price),
        event_id,
        transaction_hash,
        block_time,
        block_height,
        FEE)
RETURNING * INTO offsetting_fill_record;
The insertion of `liquidated_fill_record` and `offsetting_fill_record` into the `fills` table is almost identical. Consider creating a function to handle this operation to reduce code duplication.
liquidated_perpetual_position_record = dydx_update_perpetual_position(
    liquidated_subaccount_uuid,
    perpetual_id,
    liquidated_side,
    size,
    price);
offsetting_perpetual_position_record = dydx_update_perpetual_position(
    offsetting_subaccount_uuid,
    perpetual_id,
    offsetting_side,
    size,
    price);
The function `dydx_update_perpetual_position` is called twice with almost identical parameters. If the function is expensive to call, consider optimizing this part of the code.
`SELECT dydx_deleveraging_handler(
  ${this.block.height},
  '${this.block.time?.toISOString()}',
  '${JSON.stringify(DeleveragingEventV1.decode(eventDataBinary))}',
  ${this.indexerTendermintEvent.eventIndex},
  ${transactionIndex},
  '${this.block.txHashes[transactionIndex]}'
) AS result;`,
The SQL query string is being constructed manually which can lead to SQL injection vulnerabilities. Consider using parameterized queries or prepared statements to mitigate this risk.
- `SELECT dydx_deleveraging_handler(
- ${this.block.height},
- '${this.block.time?.toISOString()}',
- '${JSON.stringify(DeleveragingEventV1.decode(eventDataBinary))}',
- ${this.indexerTendermintEvent.eventIndex},
- ${transactionIndex},
- '${this.block.txHashes[transactionIndex]}'
- ) AS result;`,
+ `SELECT dydx_deleveraging_handler(
+ $1,
+ $2,
+ $3,
+ $4,
+ $5,
+ $6
+ ) AS result;`,
+ [this.block.height, this.block.time?.toISOString(), JSON.stringify(DeleveragingEventV1.decode(eventDataBinary)), this.indexerTendermintEvent.eventIndex, transactionIndex, this.block.txHashes[transactionIndex]]
Committable suggestion

[!IMPORTANT]
Carefully review the code before committing. Make sure it correctly replaces the highlighted code, has no missing lines, and has no indentation issues.
`SELECT dydx_deleveraging_handler(
  $1,
  $2,
  $3,
  $4,
  $5,
  $6
) AS result;`,
[this.block.height, this.block.time?.toISOString(), JSON.stringify(DeleveragingEventV1.decode(eventDataBinary)), this.indexerTendermintEvent.eventIndex, transactionIndex, this.block.txHashes[transactionIndex]]
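To illustrate the parameterized shape, the query text and its bound values can be built separately; a minimal sketch assuming a node-postgres-style `query(text, values)` client (the builder function is hypothetical, not part of ender):

```typescript
// Sketch: keep the SQL static and pass event data as positional bindings,
// so nothing from the event is ever spliced into the query string.
interface DeleveragingQuery {
  text: string;
  values: (number | string)[];
}

function buildDeleveragingQuery(
  height: number,
  blockTimeIso: string,
  eventJson: string,
  eventIndex: number,
  transactionIndex: number,
  txHash: string,
): DeleveragingQuery {
  return {
    text: 'SELECT dydx_deleveraging_handler($1, $2, $3, $4, $5, $6) AS result;',
    values: [height, blockTimeIso, eventJson, eventIndex, transactionIndex, txHash],
  };
}
```

The resulting object can be handed to a client as `client.query(q.text, q.values)`, letting the driver handle escaping.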
liquidation?: boolean,
deleveraging?: boolean,
Why are these optional?
made them required
// To ensure that StatefulOrderEvents and OrderFillEvents for the same order are not
// processed in parallel
`${DELEVERAGING_EVENT_TYPE}_${offsettingSubaccountUuid}`,
`${DELEVERAGING_EVENT_TYPE}_${deleveragedSubaccountUuid}`,
Im not sure we need these, isn't the conditional already covered by lines 49 and 50?
Yeah they're not needed. But I think it's clearer to have DELEVERAGING_EVENT_TYPE be included.
I think they should be removed; they're already covered by `${SUBACCOUNT_ORDER_FILL_EVENT_TYPE}_${offsettingSubaccountUuid}`. I would update all the comments to say "To ensure that StatefulOrderEvents, OrderFillEvents, and DeleveragingEvents for the same order..."
done
indexer/services/ender/src/scripts/dydx_from_protocol_is_buy.sql
@@ -0,0 +1,61 @@
CREATE OR REPLACE FUNCTION dydx_update_perpetual_position(
Can we update this function name to `update_perpetual_position_aggregate_fields`?
-- Retrieve the latest perpetual position record
SELECT * INTO perpetual_position_record
FROM perpetual_positions
WHERE "subaccountId" = subaccount_uuid
  AND "perpetualId" = perpetual_id
ORDER BY "createdAtHeight" DESC
LIMIT 1;
You wouldn't have known about this, but I'm pushing a PR right now to order by `openEventId` instead of `createdAtHeight`; it's a bug that we just uncovered today and track here: https://linear.app/dydx/issue/IND-485/inconsistent-perpetual-position-querying. Please order by `openEventId`.
Going to address it in a separate PR, since it breaks order handler unit tests and isn't part of deleveraging.
Review Status
Actionable comments generated: 7
Configuration used: CodeRabbit UI
Files selected for processing (9)
- indexer/services/comlink/public/websocket-documentation.md (1 hunks)
- indexer/services/ender/tests/handlers/order-fills/deleveraging-handler.test.ts (1 hunks)
- indexer/services/ender/tests/lib/on-message.test.ts (1 hunks)
- indexer/services/ender/src/handlers/order-fills/deleveraging-handler.ts (1 hunks)
- indexer/services/ender/src/helpers/postgres/postgres-functions.ts (2 hunks)
- indexer/services/ender/src/scripts/dydx_deleveraging_handler.sql (1 hunks)
- indexer/services/ender/src/scripts/dydx_order_fill_handler_per_order.sql (1 hunks)
- indexer/services/ender/src/scripts/dydx_update_perpetual_position_aggregate_fields.sql (1 hunks)
- indexer/services/ender/src/validators/deleveraging-validator.ts (1 hunks)
Files skipped from review due to trivial changes (3)
- indexer/services/comlink/public/websocket-documentation.md
- indexer/services/ender/tests/lib/on-message.test.ts
- indexer/services/ender/src/validators/deleveraging-validator.ts
Additional comments: 5
indexer/services/ender/src/scripts/dydx_order_fill_handler_per_order.sql (1)
- 165-173: The change to use the `dydx_update_perpetual_position_aggregate_fields` function for updating the perpetual position record is a good one. It improves readability and maintainability by consolidating the logic into a single function. However, ensure that `dydx_update_perpetual_position_aggregate_fields` handles all edge cases and error conditions that were previously handled in this code block.

indexer/services/ender/__tests__/handlers/order-fills/deleveraging-handler.test.ts (1)
- 1-323: The test file seems to be well-structured and covers various scenarios including validation checks, creation of fills, and updates to perpetual positions. However, it's important to ensure that all edge cases and error scenarios are covered in the tests. For instance, you might want to add tests for scenarios where the database operations fail or the Kafka producer encounters an error. This will help ensure the robustness of your code.
indexer/services/ender/src/helpers/postgres/postgres-functions.ts (2)
31-37: The addition of 'dydx_deleveraging_handler.sql' to the scripts array indicates that a new SQL script is being loaded and executed. Ensure that this script is correctly implemented and tested.
58-64: The addition of 'dydx_update_perpetual_position_aggregate_fields.sql' to the scripts array indicates that a new SQL script is being loaded and executed. Ensure that this script is correctly implemented and tested.
indexer/services/ender/src/handlers/order-fills/deleveraging-handler.ts (1)
- 70-77: Error handling is done correctly here. The error is logged and then re-thrown to be handled by the caller. This is a good practice as it allows for centralized error handling while still providing detailed error logs.
QUOTE_CURRENCY_ATOMIC_RESOLUTION constant numeric = -6;
FEE constant numeric = 0;
The constants `QUOTE_CURRENCY_ATOMIC_RESOLUTION` and `FEE` are hard-coded. If these values are subject to change, consider moving them to a configuration file or database.
size = dydx_trim_scale(dydx_from_jsonlib_long(event_data->'fillAmount') *
       power(10, perpetual_market_record."atomicResolution")::numeric);
price = dydx_trim_scale(dydx_from_jsonlib_long(event_data->'price') *
        power(10, QUOTE_CURRENCY_ATOMIC_RESOLUTION)::numeric);
The calculations for `size` and `price` are complex and used in multiple places. Consider extracting these into their own SQL functions for better readability and maintainability.
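For intuition, the quantum-to-size conversion those expressions perform can be sketched outside SQL (the -6 quote-currency resolution is taken from the script above; the helper name is illustrative):

```typescript
// Convert an integer quantum amount to a human-readable size by scaling with
// the market's atomic resolution (a power of ten), mirroring the SQL above.
function fromQuantums(quantums: number, atomicResolution: number): number {
  return quantums * Math.pow(10, atomicResolution);
}

// With the script's QUOTE_CURRENCY_ATOMIC_RESOLUTION of -6:
console.log(fromQuantums(5_000_000, -6)); // 5 quote units
```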
event_id = dydx_event_id_from_parts(
    block_height, transaction_index, event_index);
INSERT INTO fills
    ("id", "subaccountId", "side", "liquidity", "type", "clobPairId", "size", "price", "quoteAmount",
     "eventId", "transactionHash", "createdAt", "createdAtHeight", "fee")
VALUES (dydx_uuid_from_fill_event_parts(event_id, 'TAKER'),
        liquidated_subaccount_uuid,
        liquidated_side,
        'TAKER',
        'DELEVERAGED',
        clob_pair_id,
        size,
        price,
        dydx_trim_scale(size * price),
        event_id,
        transaction_hash,
        block_time,
        block_height,
        FEE)
RETURNING * INTO liquidated_fill_record;

INSERT INTO fills
    ("id", "subaccountId", "side", "liquidity", "type", "clobPairId", "size", "price", "quoteAmount",
     "eventId", "transactionHash", "createdAt", "createdAtHeight", "fee")
VALUES (dydx_uuid_from_fill_event_parts(event_id, 'MAKER'),
        offsetting_subaccount_uuid,
        offsetting_side,
        'MAKER',
        'OFFSETTING',
        clob_pair_id,
        size,
        price,
        dydx_trim_scale(size * price),
        event_id,
        transaction_hash,
        block_time,
        block_height,
        FEE)
RETURNING * INTO offsetting_fill_record;
The insertion of `liquidated_fill_record` and `offsetting_fill_record` into the `fills` table is almost identical. Consider creating a function to handle this operation to reduce code duplication.
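One way to factor out the duplication, sketched in TypeScript rather than PL/pgSQL (field names mirror the inserts above; the helper and its types are illustrative, not part of the PR):

```typescript
// Build one fill row for either side of the deleveraging match; the only
// differences between the two inserts are subaccount, side, liquidity, and type.
type Liquidity = 'TAKER' | 'MAKER';

interface FillRow {
  subaccountId: string;
  side: string;
  liquidity: Liquidity;
  type: 'DELEVERAGED' | 'OFFSETTING';
  size: number;
  price: number;
  quoteAmount: number;
}

function buildDeleveragingFill(
  subaccountId: string,
  side: string,
  liquidity: Liquidity,
  size: number,
  price: number,
): FillRow {
  return {
    subaccountId,
    side,
    liquidity,
    // TAKER is the deleveraged (liquidated) side, MAKER the offsetting side.
    type: liquidity === 'TAKER' ? 'DELEVERAGED' : 'OFFSETTING',
    size,
    price,
    quoteAmount: size * price,
  };
}
```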
public getParallelizationIds(): string[] {
  const perpetualMarket: PerpetualMarketFromDatabase | undefined = perpetualMarketRefresher
    .getPerpetualMarketFromId(this.event.perpetualId.toString());
  if (perpetualMarket === undefined) {
    logger.error({
      at: 'DeleveragingHandler#internalHandle',
      message: 'Unable to find perpetual market',
      perpetualId: this.event.perpetualId,
      event: this.event,
    });
    throw new Error(`Unable to find perpetual market with perpetualId: ${this.event.perpetualId}`);
  }
  const offsettingSubaccountUuid: string = SubaccountTable
    .uuid(this.event.offsetting!.owner, this.event.offsetting!.number);
  const deleveragedSubaccountUuid: string = SubaccountTable
    .uuid(this.event.liquidated!.owner, this.event.liquidated!.number);
  return [
    `${this.eventType}_${offsettingSubaccountUuid}_${perpetualMarket.clobPairId}`,
    `${this.eventType}_${deleveragedSubaccountUuid}_${perpetualMarket.clobPairId}`,
    // To ensure that SubaccountUpdateEvents and OrderFillEvents for the same subaccount are not
    // processed in parallel
    `${SUBACCOUNT_ORDER_FILL_EVENT_TYPE}_${offsettingSubaccountUuid}`,
    `${SUBACCOUNT_ORDER_FILL_EVENT_TYPE}_${deleveragedSubaccountUuid}`,
    // To ensure that StatefulOrderEvents and OrderFillEvents for the same order are not
    // processed in parallel
    `${DELEVERAGING_EVENT_TYPE}_${offsettingSubaccountUuid}`,
    `${DELEVERAGING_EVENT_TYPE}_${deleveragedSubaccountUuid}`,
  ];
}
The `getParallelizationIds` method is used to ensure that events related to the same subaccount or order are not processed in parallel. This is a good practice to avoid race conditions. However, it assumes that `this.event.offsetting` and `this.event.liquidated` are not null. It would be safer to add null checks for these properties.
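A minimal sketch of the null-checked version (the types and the uuid helper are simplified stand-ins for the real `SubaccountTable`, not its actual API):

```typescript
interface SubaccountId { owner: string; number: number; }

// Simplified stand-in for SubaccountTable.uuid().
function subaccountUuid(id: SubaccountId): string {
  return `${id.owner}/${id.number}`;
}

// Fail fast with a clear error instead of relying on non-null assertions (!).
function parallelizationIds(
  eventType: string,
  clobPairId: string,
  offsetting: SubaccountId | undefined,
  liquidated: SubaccountId | undefined,
): string[] {
  if (offsetting === undefined || liquidated === undefined) {
    throw new Error('DeleveragingEventV1 is missing offsetting or liquidated subaccount');
  }
  return [
    `${eventType}_${subaccountUuid(offsetting)}_${clobPairId}`,
    `${eventType}_${subaccountUuid(liquidated)}_${clobPairId}`,
  ];
}
```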
/**
  Parameters:
    - subaccount_uuid: The subaccount uuid of the updated perpetual position.
    - perpetual_id: The perpetual id of the updated perpetual position.
    - side: The side of the fill.
    - size: The size of the fill.
    - price: The price of the fill.
  Returns: the updated perpetual position.
*/
CREATE OR REPLACE FUNCTION dydx_update_perpetual_position_aggregate_fields(
    subaccount_uuid uuid,
    perpetual_id bigint,
    side text,
    size numeric,
    price numeric
) RETURNS perpetual_positions AS $$
DECLARE
    perpetual_position_record RECORD;
    sum_open numeric;
    entry_price numeric;
    sum_close numeric;
    exit_price numeric;
BEGIN
    -- Retrieve the latest perpetual position record.
    -- TODO(IND-485): Order by openEventId instead of createdAtHeight.
    SELECT * INTO perpetual_position_record
    FROM perpetual_positions
    WHERE "subaccountId" = subaccount_uuid
      AND "perpetualId" = perpetual_id
    ORDER BY "createdAtHeight" DESC
    LIMIT 1;

    IF NOT FOUND THEN
        RAISE EXCEPTION 'Unable to find existing perpetual position, subaccountId: %, perpetualId: %', subaccount_uuid, perpetual_id;
    END IF;

    sum_open = perpetual_position_record."sumOpen";
    entry_price = perpetual_position_record."entryPrice";
    sum_close = perpetual_position_record."sumClose";
    exit_price = perpetual_position_record."exitPrice";

    -- Update the perpetual position record based on the side
    IF dydx_perpetual_position_and_order_side_matching(perpetual_position_record."side", side) THEN
        sum_open := dydx_trim_scale(perpetual_position_record."sumOpen" + size);
        entry_price := dydx_get_weighted_average(
            perpetual_position_record."entryPrice", perpetual_position_record."sumOpen", price, size
        );
        perpetual_position_record."sumOpen" = sum_open;
        perpetual_position_record."entryPrice" = entry_price;
    ELSE
        sum_close := dydx_trim_scale(perpetual_position_record."sumClose" + size);
        exit_price := dydx_get_weighted_average(
            perpetual_position_record."exitPrice", perpetual_position_record."sumClose", price, size
        );
        perpetual_position_record."sumClose" = sum_close;
        perpetual_position_record."exitPrice" = exit_price;
    END IF;

    -- Perform the actual update in the database
    UPDATE perpetual_positions
    SET
        "sumOpen" = sum_open,
        "entryPrice" = entry_price,
        "sumClose" = sum_close,
        "exitPrice" = exit_price
    WHERE "id" = perpetual_position_record."id";

    -- Return the updated perpetual position record
    RETURN perpetual_position_record;
END;
$$ LANGUAGE plpgsql;
The function `dydx_update_perpetual_position_aggregate_fields` seems to be well-structured and follows good practices for SQL functions. It includes error handling for when the existing perpetual position is not found, which is a good practice. However, there are a few points to consider:
- The function uses a `SELECT *` statement (line 26). This can lead to performance issues if the table has many columns. Consider specifying the columns you need.
- The function does not handle the case where the `dydx_perpetual_position_and_order_side_matching` or `dydx_get_weighted_average` functions fail or return unexpected results. Consider adding error handling for these cases.
- The function does not handle potential concurrency issues. If two processes call this function at the same time for the same `subaccount_uuid` and `perpetual_id`, they might both get the same record and perform updates, leading to inconsistent data. Consider using a transaction or a lock to prevent this.
- The function does not handle potential division by zero errors in the `dydx_get_weighted_average` function. Ensure that this function can handle cases where the sum of the open or close positions is zero.
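For the division-by-zero concern, the weighted-average update can be sketched as follows (a guess at the intent of `dydx_get_weighted_average`; the zero-total fallback is an assumption, not the SQL function's documented behavior):

```typescript
// Weighted average of the existing entry/exit price and the new fill price,
// weighted by position size; falls back to the new price when both weights
// are zero to avoid dividing by zero.
function weightedAverage(
  oldPrice: number,
  oldSize: number,
  newPrice: number,
  newSize: number,
): number {
  const totalSize = oldSize + newSize;
  if (totalSize === 0) {
    return newPrice; // assumption: an empty position simply takes the new price
  }
  return (oldPrice * oldSize + newPrice * newSize) / totalSize;
}

console.log(weightedAverage(100, 2, 110, 2)); // 105
```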
Here is a potential improvement for the SELECT *
statement:
- SELECT * INTO perpetual_position_record
+ SELECT "id", "sumOpen", "entryPrice", "sumClose", "exitPrice" INTO perpetual_position_record
Review Status
Actionable comments generated: 0
Configuration used: CodeRabbit UI
Files selected for processing (3)
- indexer/packages/postgres/src/types/websocket-message-types.ts (1 hunks)
- indexer/services/ender/tests/helpers/constants.ts (1 hunks)
- indexer/services/ender/tests/lib/kafka-publisher.test.ts (3 hunks)
Files skipped from review due to trivial changes (1)
- indexer/services/ender/tests/helpers/constants.ts
Additional comments: 4
indexer/services/ender/__tests__/lib/kafka-publisher.test.ts (3)
- 414-418: The `deleveraging` property has been added to the `tradeContent1` object. Ensure that this property is being correctly set and used elsewhere in the codebase.
- 428-432: The `deleveraging` property has been added to the `tradeContent2` object. Ensure that this property is being correctly set and used elsewhere in the codebase.
- 443-447: The `deleveraging` property has been added to the `tradeContent3` object. Ensure that this property is being correctly set and used elsewhere in the codebase.

indexer/packages/postgres/src/types/websocket-message-types.ts (1)
- 176-180: The new `deleveraging` property has been added to the `TradeContent` interface. Ensure that all instances where `TradeContent` is used have been updated to handle this new property.

  side: string,
  createdAt: IsoString,
  liquidation: boolean,
+ deleveraging: boolean,
}
...stgres/src/db/migrations/migration_files/20231101173444_update_fills_type_with_deleverage.ts
Review Status
Actionable comments generated: 0
Configuration used: CodeRabbit UI
Files selected for processing (1)
- indexer/packages/postgres/src/db/migrations/migration_files/20231101173444_update_fills_type_with_deleverage.ts (1 hunks)
Additional comments: 1
indexer/packages/postgres/src/db/migrations/migration_files/20231101173444_update_fills_type_with_deleverage.ts (1)
- 1-19: The migration script looks good. It correctly adds and removes the 'DELEVERAGED' and 'OFFSETTING' enum values in the `up` and `down` functions respectively. Ensure that the `formatAlterTableEnumSql` function handles the addition and removal of enum values correctly.
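Since Postgres enums cannot drop values in place, a helper like `formatAlterTableEnumSql` typically recreates the type and re-binds the column. A hypothetical sketch of the SQL such a helper might emit (all names illustrative; this is not the repo's actual implementation):

```typescript
// Emit SQL that rebuilds a column's enum type with a new value list.
// Postgres supports ALTER TYPE ... ADD VALUE but not DROP VALUE, so the down
// migration has to recreate the type and cast the column through text.
function formatAlterTableEnumSql(
  table: string,
  column: string,
  values: string[],
): string {
  const typeName = `${table}_${column}`;
  const quoted = values.map((v) => `'${v}'`).join(', ');
  return [
    `ALTER TABLE "${table}" ALTER COLUMN "${column}" TYPE text;`,
    `DROP TYPE IF EXISTS "${typeName}";`,
    `CREATE TYPE "${typeName}" AS ENUM (${quoted});`,
    `ALTER TABLE "${table}" ALTER COLUMN "${column}" TYPE "${typeName}" USING ("${column}"::"${typeName}");`,
  ].join('\n');
}
```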
indexer/services/ender/__tests__/handlers/order-fills/deleveraging-handler.test.ts
indexer/services/ender/__tests__/handlers/order-fills/deleveraging-handler.test.ts
Review Status
Actionable comments generated: 3
Configuration used: CodeRabbit UI
Files selected for processing (3)
- indexer/services/ender/tests/handlers/order-fills/deleveraging-handler.test.ts (1 hunks)
- indexer/services/ender/src/constants.ts (1 hunks)
- indexer/services/ender/src/handlers/order-fills/deleveraging-handler.ts (1 hunks)
Additional comments: 3
indexer/services/ender/src/constants.ts (1)
- 13-15: The removal of the `DELEVERAGING_EVENT_TYPE` constant is noted. Ensure that this does not affect any other parts of the codebase that might be using this constant. If there are any, they should be updated accordingly.

indexer/services/ender/__tests__/handlers/order-fills/deleveraging-handler.test.ts (1)
- 1-346: The entire file seems to be well-structured and follows good practices. The tests are well-written and cover the main functionalities of the `DeleveragingHandler` class. The use of helper functions and constants makes the tests more readable and maintainable. The use of mocks and spies ensures that the tests are isolated and do not have side effects. The use of `beforeAll`, `beforeEach`, `afterEach`, and `afterAll` hooks ensures that the necessary setup and cleanup are done for each test. The use of `expect` assertions ensures that the tests are checking the correct conditions.

indexer/services/ender/src/handlers/order-fills/deleveraging-handler.ts (1)
- 24-48: The `getParallelizationIds` method retrieves the `PerpetualMarketFromDatabase` object using the `perpetualId` from the event. If the `PerpetualMarketFromDatabase` object is not found, it logs an error and throws an exception. This is a good practice as it ensures that the program does not proceed with invalid data. However, ensure that this exception is properly handled upstream to prevent the program from crashing.
    },
  ),
]);
The comment from vincentwschau suggests checking if positions have changed as expected. This is a valid point and should be considered. You could add assertions to check if the positions have been updated correctly after the deleveraging event.
+ expect(offsettingPerpetualPosition.size).toBe('expected value');
+ expect(deleveragedPerpetualPosition.size).toBe('expected value');
// eslint-disable-next-line @typescript-eslint/require-await
public async internalHandle(): Promise<ConsolidatedKafkaEvent[]> {
  const eventDataBinary: Uint8Array = this.indexerTendermintEvent.dataBytes;
  const transactionIndex: number = indexerTendermintEventToTransactionIndex(
    this.indexerTendermintEvent,
  );
  const result: pg.QueryResult = await storeHelpers.rawQuery(
    `SELECT dydx_deleveraging_handler(
      ${this.block.height},
      '${this.block.time?.toISOString()}',
      '${JSON.stringify(DeleveragingEventV1.decode(eventDataBinary))}',
      ${this.indexerTendermintEvent.eventIndex},
      ${transactionIndex},
      '${this.block.txHashes[transactionIndex]}'
    ) AS result;`,
    { txId: this.txId },
  ).catch((error: Error) => {
    logger.error({
      at: 'DeleveragingHandler#handleViaSqlFunction',
      message: 'Failed to handle DeleveragingEventV1',
      error,
    });
    throw error;
  });
  const liquidatedFill: FillFromDatabase = FillModel.fromJson(
    result.rows[0].result.liquidated_fill) as FillFromDatabase;
  const offsettingFill: FillFromDatabase = FillModel.fromJson(
    result.rows[0].result.offsetting_fill) as FillFromDatabase;
  const perpetualMarket: PerpetualMarketFromDatabase = PerpetualMarketModel.fromJson(
    result.rows[0].result.perpetual_market) as PerpetualMarketFromDatabase;
  const liquidatedPerpetualPosition:
  PerpetualPositionFromDatabase = PerpetualPositionModel.fromJson(
    result.rows[0].result.liquidated_perpetual_position) as PerpetualPositionFromDatabase;
  const offsettingPerpetualPosition:
  PerpetualPositionFromDatabase = PerpetualPositionModel.fromJson(
    result.rows[0].result.offsetting_perpetual_position) as PerpetualPositionFromDatabase;
  const kafkaEvents: ConsolidatedKafkaEvent[] = [
    this.generateConsolidatedKafkaEvent(
      this.event.liquidated!,
      undefined,
      liquidatedPerpetualPosition,
      liquidatedFill,
      perpetualMarket,
    ),
    this.generateConsolidatedKafkaEvent(
      this.event.offsetting!,
      undefined,
      offsettingPerpetualPosition,
      offsettingFill,
      perpetualMarket,
    ),
    this.generateTradeKafkaEventFromTakerOrderFill(
      liquidatedFill,
    ),
  ];
  return kafkaEvents;
}
The `internalHandle` method processes the `DeleveragingEventV1` and generates the corresponding Kafka events. It uses a raw SQL query to call the `dydx_deleveraging_handler` function in the database. This could potentially lead to SQL injection if not handled properly. Make sure that all inputs to the SQL query are properly sanitized and parameterized. Also, consider using a prepared statement or stored procedure instead of a raw SQL query for better security and performance.
const liquidatedFill: FillFromDatabase = FillModel.fromJson(
  result.rows[0].result.liquidated_fill) as FillFromDatabase;
const offsettingFill: FillFromDatabase = FillModel.fromJson(
  result.rows[0].result.offsetting_fill) as FillFromDatabase;
const perpetualMarket: PerpetualMarketFromDatabase = PerpetualMarketModel.fromJson(
  result.rows[0].result.perpetual_market) as PerpetualMarketFromDatabase;
const liquidatedPerpetualPosition:
PerpetualPositionFromDatabase = PerpetualPositionModel.fromJson(
  result.rows[0].result.liquidated_perpetual_position) as PerpetualPositionFromDatabase;
const offsettingPerpetualPosition:
PerpetualPositionFromDatabase = PerpetualPositionModel.fromJson(
  result.rows[0].result.offsetting_perpetual_position) as PerpetualPositionFromDatabase;
The results of the SQL query are being cast to specific types without any type checking. This could potentially lead to runtime errors if the query does not return the expected results. Consider adding type checks or using a type-safe query builder to avoid this issue.
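A lightweight runtime guard along these lines could replace the blind casts (the field set is illustrative, not the full `FillFromDatabase` shape):

```typescript
interface FillLike {
  id: string;
  size: string;
  price: string;
}

// Validate the row's shape at runtime before treating it as a typed record,
// instead of a blind `as` cast on result.rows[0].
function fillFromRow(row: unknown): FillLike {
  const r = row as Record<string, unknown> | null;
  if (
    r === null || typeof r !== 'object' ||
    typeof r.id !== 'string' ||
    typeof r.size !== 'string' ||
    typeof r.price !== 'string'
  ) {
    throw new Error('Unexpected fill row shape from dydx_deleveraging_handler');
  }
  return { id: r.id, size: r.size, price: r.price };
}
```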
@@ -56,7 +57,7 @@ import {
   convertPerpetualPosition,
   generateFillSubaccountMessage,
   generatePerpetualMarketMessage,
-  generatePerpetualPositionsContents,
+  generatePerpetualPositionsContents, isDeleveraging,
-  generatePerpetualPositionsContents, isDeleveraging,
+  generatePerpetualPositionsContents,
+  isDeleveraging,
Review Status
Actionable comments generated: 0
Configuration used: CodeRabbit UI
Files selected for processing (1)
- indexer/services/ender/tests/helpers/indexer-proto-helpers.ts (4 hunks)
Additional comments: 4
indexer/services/ender/__tests__/helpers/indexer-proto-helpers.ts (4)
- 51-51: The import of `DeleveragingEventV1` is correct and necessary for the new function `createKafkaMessageFromDeleveragingEvent`.
- 61-61: The import of `isDeleveraging` is correct and necessary for the updated function `expectDefaultTradeKafkaMessageFromTakerFillId`.
- 476-509: The new function `createKafkaMessageFromDeleveragingEvent` is correctly implemented. It creates a Kafka message from a `DeleveragingEventV1` object. The function takes in a `deleveragingEvent` object and other necessary parameters, creates an `IndexerTendermintEvent` object, and then creates an `IndexerTendermintBlock` object. It then encodes the block into a binary format and creates a Kafka message from it.
- 811-811: The reference to `isDeleveraging` in the function `expectDefaultTradeKafkaMessageFromTakerFillId` is correct. It adds information about whether the fill is a deleveraging event to the Kafka message.
Changelist
add handler for deleveraging events:
Test Plan
unit tested.
Author/Reviewer Checklist
- If this PR has state-breaking changes, manually add the state-breaking label.
- If this PR changes PrepareProposal or ProcessProposal, manually add the label proposal-breaking.
- If this PR is part of a feature, label it feature:[feature-name].
- If this PR should be backported, add the label backport/[branch-name].
- Where applicable, add one of the labels: refactor, chore, bug.