See issue #368 for discussion.
Hangfire.Core 1.8.12 introduces parallelism support for `DelayedJobScheduler` and `RecurringJobScheduler`. That support requires the storage layer to allow batched retrieval of scheduled/recurring jobs, which is done by overriding the `List`-returning `GetFirstByLowestScoreFromSet` overload.
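For context, the batched overload boils down to the same query as the single-item version plus a `LIMIT`. The snippet below is only a sketch of that idea, written against an assumed `hangfire`-style schema with a `set` table (key/score/value columns) and Dapper for data access; it is not lifted from this PR's diff.

```csharp
using System;
using System.Collections.Generic;
using System.Data;
using System.Linq;
using Dapper;

// Sketch of a batched GetFirstByLowestScoreFromSet. The schema name, table layout,
// and the surrounding helper class are illustrative assumptions; the real override
// lives on the storage connection class.
internal static class BatchedSetQuerySketch
{
    public static List<string> GetFirstByLowestScoreFromSet(
        IDbConnection connection, string schemaName,
        string key, double fromScore, double toScore, int count)
    {
        if (key == null) throw new ArgumentNullException(nameof(key));
        if (count <= 0) throw new ArgumentException("Count must be positive.", nameof(count));

        // Same filter as the single-item version, but LIMITed to `count` rows so the
        // scheduler can fetch a whole batch of job IDs in one round trip.
        string sql = $@"
            SELECT value FROM ""{schemaName}"".set
            WHERE key = @key AND score BETWEEN @fromScore AND @toScore
            ORDER BY score
            LIMIT @count";

        return connection.Query<string>(sql, new { key, fromScore, toScore, count }).ToList();
    }
}
```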
Hangfire.Core's scheduler parallelism is implemented fully within the `DelayedJobScheduler` and `RecurringJobScheduler` methods. Each scheduler:

1. acquires the distributed `scheduler` or `recurring-jobs` lock, respectively
2. calls `GetFirstByLowestScoreFromSet` to retrieve the `List` of IDs for the to-be-enqueued jobs
3. processes the `List` of IDs (see the sketch after this list):
   i. uses the `MaxDegreeOfParallelism` configured when Hangfire is started
   ii. enqueues each job, during which it acquires a distributed `job` lock, changes the job's state, and releases the distributed lock
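To make step 3 concrete, here is a conceptual sketch of the enqueue loop. This is not Hangfire.Core's actual code; the `enqueueSingleJob` callback stands in for the real per-job work (acquire the `job` lock, change the state, release the lock), and the degree of parallelism is simply passed in as a parameter.

```csharp
using System;
using System.Collections.Generic;
using System.Threading.Tasks;

// Conceptual sketch of the parallel enqueue step; not Hangfire.Core's implementation.
internal static class ParallelEnqueueSketch
{
    public static void EnqueueAll(
        IEnumerable<string> jobIds,        // the List of IDs from GetFirstByLowestScoreFromSet
        int maxDegreeOfParallelism,        // the value configured when Hangfire is started
        Action<string> enqueueSingleJob)   // acquires the job lock, changes state, releases the lock
    {
        var options = new ParallelOptions { MaxDegreeOfParallelism = maxDegreeOfParallelism };

        // With maxDegreeOfParallelism == 1 this degenerates to the old sequential behavior;
        // with a larger value, up to that many jobs are enqueued concurrently.
        Parallel.ForEach(jobIds, options, jobId => enqueueSingleJob(jobId));
    }
}
```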
The point of describing this is to note that the scope of the distributed `scheduler` and `recurring-jobs` locks doesn't change whether parallelism is used or not; as such, the database behavior for those locks will be the same either way.

For the distributed `job` locks, the difference between sequential and parallel enqueueing is as follows:
- sequential: only one `job` lock is active at a time
- parallel: up to `MaxDegreeOfParallelism` `job` locks can be active at one time; however, because these locks are represented as distinct rows in the `locks` table, there won't be blocking on those records (illustrated in the sketch below)

In either case (sequential or parallel enqueueing), workers will dequeue jobs as they always have.
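To show why concurrent `job` locks don't contend, here is a small sketch: each enqueue takes its distributed lock on a per-job resource name, so parallel enqueues touch different rows in the `locks` table. `AcquireDistributedLock` is the standard `IStorageConnection` API, but the resource-name format and timeout below are assumptions for illustration, not a quote of Hangfire.Core's source.

```csharp
using System;
using Hangfire.Storage;

// Sketch: per-job distributed locks use distinct resource names, so each lock maps to
// its own row in the locks table. The "job:{id}:state-lock" format is an assumption.
internal static class JobLockSketch
{
    public static IDisposable AcquireJobLock(IStorageConnection connection, string jobId)
    {
        // Different job IDs produce different resource strings, hence different rows;
        // two parallel enqueues never wait on the same record.
        return connection.AcquireDistributedLock($"job:{jobId}:state-lock", TimeSpan.FromMinutes(1));
    }
}
```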
Here are some results of profiling the scheduler performance with and without batching and parallelism. The PostgreSQL server is an out-of-the-box installation running on a Windows machine with a 12-processor Intel i7 x64. The Hangfire server is an 8-processor Intel i7 x64.
On this hardware/software configuration, I hit the "knee" of the curve at parallelism degree 20. Your mileage may vary.