Capped Queues
#1285
Replies: 1 comment 1 reply
-
I think the best approach would be a feature where you set a maximum number of waiting jobs, and adding a job once the maximum is reached throws an exception. However, in your case, since you are using a lot of queues, this will not be very practical. One alternative would be to raise the error based on a maximum amount of memory usage, but the Redis INFO command is relatively slow, so I am not sure it could run on every "add" call without severely impacting performance. The other alternative would be to use a single queue together with the groups functionality of the Pro version: https://docs.bullmq.io/bullmq-pro/groups
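A producer-side cap along these lines can be approximated with BullMQ's `Queue.getWaitingCount()`. This is only a sketch: the `MAX_WAITING` value, the `QueueFullError` type, and the duck-typed queue interface are assumptions, and the check-then-add is not atomic, so the cap is best-effort under concurrent producers. It also costs one extra Redis round trip per add.

```typescript
// Minimal shape of the BullMQ Queue methods used here, duck-typed so the
// sketch does not depend on having bullmq installed; a real Queue from
// 'bullmq' satisfies this interface.
interface CappedQueue {
  name: string;
  getWaitingCount(): Promise<number>;
  add(name: string, data: unknown): Promise<unknown>;
}

const MAX_WAITING = 10_000; // hypothetical per-queue cap

class QueueFullError extends Error {}

// Refuse the add once the queue already holds too many waiting jobs.
// The count is read just before adding, so two concurrent producers can
// both pass the check; treat the cap as soft.
async function addCapped(queue: CappedQueue, jobName: string, data: unknown) {
  const waiting = await queue.getWaitingCount();
  if (waiting >= MAX_WAITING) {
    throw new QueueFullError(`${queue.name} has ${waiting} waiting jobs`);
  }
  return queue.add(jobName, data);
}
```

A hard cap would need the count check and the add to happen atomically, e.g. in a Lua script on the Redis side, which this sketch does not attempt.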
-
Context: we are using BullMQ to run a large number of background processes (hundreds of queues) for a multi-tenant product, on a single Redis cluster. At the moment, most of our job entry points (i.e., API/HTTP requests, UI actions, plain file transfers, etc. that result in jobs being added to Bull) do not check the current storage usage of our cluster. This means one of our tenants could fill up one of its Bull queues faster than jobs get processed, and make our cluster OOM.
What would be the best (i.e., least expensive) way to do something like Capped Queues in Bull (e.g., a given queue cannot have more than 10k waiting jobs at any time)? One approach I'd like is an async process running every few seconds that checks the number of waiting jobs in each queue and switches overflowed queues into a state where they keep processing jobs but won't accept new ones.
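The polling idea above could be sketched roughly as follows. Note that BullMQ's `Queue.pause()` would stop processing as well as intake, which is not what is wanted here, so this sketch instead keeps an application-level "closed for intake" set that producers consult before adding. `MAX_WAITING`, `SWEEP_INTERVAL_MS`, and the duck-typed queue shape are assumptions for illustration.

```typescript
// Minimal shape of the BullMQ Queue method the sweeper needs; a real
// Queue from 'bullmq' satisfies this interface.
interface CountableQueue {
  name: string;
  getWaitingCount(): Promise<number>;
}

const MAX_WAITING = 10_000;       // hypothetical per-queue cap
const SWEEP_INTERVAL_MS = 5_000;  // how often to re-check counts

// Queues currently refusing new jobs. In a multi-process deployment this
// would have to live in Redis (e.g. a shared set) rather than in memory.
const closedForIntake = new Set<string>();

// One sweep: mark queues over the cap, reopen queues that drained below it.
async function sweep(queues: CountableQueue[]) {
  for (const q of queues) {
    const waiting = await q.getWaitingCount();
    if (waiting >= MAX_WAITING) closedForIntake.add(q.name);
    else closedForIntake.delete(q.name);
  }
}

// Producers call this (a cheap in-memory lookup) before queue.add().
function acceptsJobs(queueName: string): boolean {
  return !closedForIntake.has(queueName);
}

// Wiring it up, assuming `allQueues` holds the application's Queue objects:
// setInterval(() => sweep(allQueues), SWEEP_INTERVAL_MS);
```

Since the sweep only runs every few seconds, a queue can overshoot the cap between sweeps; the interval trades Redis load against how tight the cap is.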