Optimize replication throttling in Executor to reduce Kafka admin requests #2214

Status: Open · wants to merge 1 commit into base: main
@@ -1613,6 +1613,8 @@ private void interBrokerMoveReplicas() throws InterruptedException, ExecutionExc
     LOG.info("User task {}: Starting {} inter-broker partition movements.", _uuid, numTotalPartitionMovements);

     int partitionsToMove = numTotalPartitionMovements;
+    Set<Integer> throttledBrokers = new HashSet<>();
+
     // Exhaust all the pending partition movements.
     while ((partitionsToMove > 0 || !inExecutionTasks().isEmpty()) && _stopSignal.get() == NO_STOP_EXECUTION) {
       // Get tasks to execute.
@@ -1621,11 +1623,21 @@ private void interBrokerMoveReplicas() throws InterruptedException, ExecutionExc

       AlterPartitionReassignmentsResult result = null;
       if (!tasksToExecute.isEmpty()) {
-        throttleHelper.setThrottles(tasksToExecute.stream().map(ExecutionTask::proposal).collect(Collectors.toList()));
+        // Set throttles only if not already set for brokers involved
+        List<ExecutionProposal> proposals = tasksToExecute.stream().map(ExecutionTask::proposal).collect(Collectors.toList());
+        for (ExecutionProposal proposal : proposals) {
Contributor:

Does it make more sense to add a map inside the throttle helper, so that you don't need to expose this logic in the executor? I think the throttle helper should be responsible for remembering which brokers are already throttled, and simply not send redundant requests to those brokers. Same with the clear-throttle logic.

Contributor (Author):

I will take a look.

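A minimal sketch of what the reviewer is suggesting, assuming the `ReplicationThrottleHelper`, `ExecutionProposal`, and `ExecutionTask` types from this codebase; the wrapper class and its shape below are hypothetical, not the project's API:

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Hypothetical wrapper: the helper, not the Executor, remembers which
// brokers already carry a throttle, so callers never see the bookkeeping.
class DedupingThrottleHelper {
  private final ReplicationThrottleHelper _delegate;
  private final Set<Integer> _throttledBrokers = new HashSet<>();

  DedupingThrottleHelper(ReplicationThrottleHelper delegate) {
    _delegate = delegate;
  }

  void setThrottles(List<ExecutionProposal> proposals) {
    // Set.add returns false for brokers seen before, so only proposals
    // whose old leader's broker is not yet throttled are forwarded.
    List<ExecutionProposal> fresh = new ArrayList<>();
    for (ExecutionProposal proposal : proposals) {
      if (_throttledBrokers.add(proposal.oldLeader().brokerId())) {
        fresh.add(proposal);
      }
    }
    if (!fresh.isEmpty()) {
      _delegate.setThrottles(fresh);
    }
  }

  void clearThrottles(List<ExecutionTask> completed, List<ExecutionTask> inProgress) {
    _delegate.clearThrottles(completed, inProgress);
    _throttledBrokers.clear();
  }
}
```

With this shape, the call sites in `interBrokerMoveReplicas()` could stay exactly as they were before this patch.
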
+          int brokerId = proposal.oldLeader().brokerId();
+          if (!throttledBrokers.contains(brokerId)) {
+            throttleHelper.setThrottles(Collections.singletonList(proposal));
+            throttledBrokers.add(brokerId);
+          }
+        }

         // Execute the tasks.
         _executionTaskManager.markTasksInProgress(tasksToExecute);
         result = ExecutionUtils.submitReplicaReassignmentTasks(_adminClient, tasksToExecute);
       }

       // Wait indefinitely for partition movements to finish.
       List<ExecutionTask> completedTasks = waitForInterBrokerReplicaTasksToFinish(result);
       partitionsToMove = _executionTaskManager.numRemainingInterBrokerPartitionMovements();
@@ -1644,7 +1656,11 @@ private void interBrokerMoveReplicas() throws InterruptedException, ExecutionExc
           .collect(Collectors.toList());
       inProgressTasks.addAll(inExecutionTasks());

-      throttleHelper.clearThrottles(completedTasks, inProgressTasks);
+      // Clear throttles only after all tasks are complete for the brokers involved
+      if (partitionsToMove == 0) {
Contributor:

I think you may want to check all of these conditions:

(partitionsToMove == 0 && inExecutionTasks().isEmpty()) || (_stopSignal.get() != NO_STOP_EXECUTION)

Contributor (Author):

Okay 👍

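Folding the reviewer's condition into this block would give something like the sketch below; the fields and methods are the ones visible in this diff, and only the local boolean names are invented:

```java
// Clear throttles once nothing is moving anymore, or once execution
// has been stopped, per the review comment above.
boolean movementsFinished = partitionsToMove == 0 && inExecutionTasks().isEmpty();
boolean executionStopped = _stopSignal.get() != NO_STOP_EXECUTION;
if (movementsFinished || executionStopped) {
  throttleHelper.clearThrottles(completedTasks, inProgressTasks);
  throttledBrokers.clear();
}
```

The diff as committed still checks only `partitionsToMove == 0`, shown next.
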
+        throttleHelper.clearThrottles(completedTasks, inProgressTasks);
+        throttledBrokers.clear();
+      }
     }

     // Currently, _executionProgressCheckIntervalMs is only runtime adjusted for inter broker move tasks, not
@@ -1663,17 +1679,17 @@ private void interBrokerMoveReplicas() throws InterruptedException, ExecutionExc
               + "{} tasks aborting, {} tasks aborted, {} tasks dead, {} tasks completed, {} remaining data to move; for intra-broker "
               + "partition movement {} tasks cancelled; for leadership movements {} tasks cancelled.",
               _uuid,
-              partitionMovementTasksByState.get(ExecutionTaskState.PENDING),
-              partitionMovementTasksByState.get(ExecutionTaskState.IN_PROGRESS),
-              partitionMovementTasksByState.get(ExecutionTaskState.ABORTING),
-              partitionMovementTasksByState.get(ExecutionTaskState.ABORTED),
-              partitionMovementTasksByState.get(ExecutionTaskState.DEAD),
-              partitionMovementTasksByState.get(ExecutionTaskState.COMPLETED),
+              partitionMovementTasksByState.getOrDefault(ExecutionTaskState.PENDING, 0),
+              partitionMovementTasksByState.getOrDefault(ExecutionTaskState.IN_PROGRESS, 0),
+              partitionMovementTasksByState.getOrDefault(ExecutionTaskState.ABORTING, 0),
+              partitionMovementTasksByState.getOrDefault(ExecutionTaskState.ABORTED, 0),
+              partitionMovementTasksByState.getOrDefault(ExecutionTaskState.DEAD, 0),
+              partitionMovementTasksByState.getOrDefault(ExecutionTaskState.COMPLETED, 0),
               executionTasksSummary.remainingInterBrokerDataToMoveInMB(),
-              executionTasksSummary.taskStat().get(INTRA_BROKER_REPLICA_ACTION).get(ExecutionTaskState.PENDING),
-              executionTasksSummary.taskStat().get(LEADER_ACTION).get(ExecutionTaskState.PENDING));
+              executionTasksSummary.taskStat().get(INTRA_BROKER_REPLICA_ACTION).getOrDefault(ExecutionTaskState.PENDING, 0),
+              executionTasksSummary.taskStat().get(LEADER_ACTION).getOrDefault(ExecutionTaskState.PENDING, 0));
       }
     }
   }

   private void intraBrokerMoveReplicas() {
     int numTotalPartitionMovements = _executionTaskManager.numRemainingIntraBrokerPartitionMovements();
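
The `getOrDefault` changes in the last hunk guard against task states that have no entry in the map at all: `Map.get` returns `null` for an absent key, so the log line would print "null" instead of a count. A standalone illustration, using a local enum as a stand-in for `ExecutionTaskState`:

```java
import java.util.EnumMap;
import java.util.Map;

public class GetOrDefaultDemo {
  // Stand-in for Cruise Control's ExecutionTaskState enum.
  enum State { PENDING, IN_PROGRESS, COMPLETED }

  public static void main(String[] args) {
    Map<State, Integer> counts = new EnumMap<>(State.class);
    counts.put(State.COMPLETED, 5);

    // No tasks ever entered PENDING, so the map has no entry for it.
    System.out.println(counts.get(State.PENDING));             // prints "null"
    System.out.println(counts.getOrDefault(State.PENDING, 0)); // prints "0"
  }
}
```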