Concurrency inside of a given process #433
-
The case you are describing is not uncommon, but it is a bit complex in practice. I do not think this is feasible with BullMQ currently. I have on the drawing board the concept of "demuxers", which basically works like this: you have N "pseudo-queues", all connected to a demux that picks jobs from the N pseudo-queues in round-robin fashion; those jobs are then processed by M workers connected to the demux. However, this feature is still in the design stage, and it will take some time (months) before it can be used in production...
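To make the round-robin picking concrete, here is a minimal sketch of the demux idea described above. This is not BullMQ API; the `RoundRobinDemux` class, the `Job` shape, and the in-memory queues are all illustrative stand-ins for what the real feature would do against Redis:

```typescript
// Hypothetical sketch of the "demuxer" concept: N pseudo-queues feed a
// round-robin picker; workers would pull from the picker, never from a
// pseudo-queue directly. All names here are illustrative, not BullMQ API.

type Job = { user: string; payload: number };

class RoundRobinDemux {
  private cursor = 0;
  constructor(private queues: Job[][]) {}

  // Pick the next job, scanning the queues round-robin starting at the
  // cursor; returns undefined when every pseudo-queue is empty.
  next(): Job | undefined {
    for (let i = 0; i < this.queues.length; i++) {
      const idx = (this.cursor + i) % this.queues.length;
      const job = this.queues[idx].shift();
      if (job !== undefined) {
        this.cursor = (idx + 1) % this.queues.length;
        return job;
      }
    }
    return undefined;
  }
}

const queues: Job[][] = [
  [{ user: 'u1', payload: 1 }, { user: 'u1', payload: 2 }],
  [{ user: 'u2', payload: 3 }],
  [{ user: 'u3', payload: 4 }],
];
const demux = new RoundRobinDemux(queues);
const order: string[] = [];
let job: Job | undefined;
while ((job = demux.next()) !== undefined) {
  order.push(`${job.user}:${job.payload}`);
}
// order: u1:1, u2:3, u3:4, u1:2 — users take turns instead of u1 hogging
```

The key property is fairness: a user with many queued jobs cannot starve the others, because the cursor advances past their pseudo-queue after every pick.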
-
This answer is a bit late, but it sounds like Flows is what you're looking for.
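For context on why Flows can enforce the per-user ordering: in a BullMQ flow, children complete before their parent runs, so nesting each earlier job as the child of the next one turns a list of jobs into a strict chain. The sketch below only builds the job tree as plain data, assuming the standard `{ name, queueName, children }` FlowJob shape; actually enqueuing it requires a `FlowProducer` and a Redis connection, omitted here:

```typescript
// Sketch: children run before their parent in a BullMQ flow, so to run a
// user's jobs strictly in order (job1, then job2, then job3), nest each
// earlier job as the child of the next. Building the tree is plain data;
// submitting it needs a FlowProducer and Redis (not shown).

interface FlowJobSketch {
  name: string;
  queueName: string;
  data?: unknown;
  children?: FlowJobSketch[];
}

// Turn [job1, job2, job3] into a chain where job3 is the root and job1
// the innermost child, giving execution order job1 -> job2 -> job3.
function chainJobs(jobs: FlowJobSketch[]): FlowJobSketch {
  return jobs.reduce((child, parent) => ({ ...parent, children: [child] }));
}

const tree = chainJobs([
  { name: 'job1', queueName: 'user1' },
  { name: 'job2', queueName: 'user1' },
  { name: 'job3', queueName: 'user1' },
]);
// With a real connection, roughly:
//   await new FlowProducer({ connection }).add(tree);
```

One caveat with this approach: the whole chain has to be known when the flow is created, so it fits batches of jobs submitted together rather than jobs trickling in one at a time.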
-
I've been trying all kinds of patterns to achieve what I need, and I've had only limited success so far.
Let's say I have a job type that's very CPU-intensive. I want to be able to process multiple jobs of that type at the same time, so I scaled horizontally and created multiple worker instances on different servers.
The one particular constraint I have is that for each user (< 50 users, let's say 30 for this example) there must not be more than one job running at any given time. So if user1, user2, user3, and user4 all start a job at the same time, that's fine: they can all be executed in parallel by the servers. But if user1 creates 4 jobs in a row, I need to wait for job1 to finish before job2 starts (because the result of job1 can influence the execution of job2).
To achieve this, the only way I know of is to use one queue per user, with a concurrency of 1. The problem with this approach is that if I register a worker for each of those queues, and I have 30 queues, there's the potential of running 30 jobs at once on a server (one job in each queue). Since these jobs are CPU-intensive, they basically slow the server to a crawl.
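The per-user ordering half of the problem can also be sketched without one queue per user, by keeping a promise chain per user so that each user's jobs run strictly one after another while different users' jobs interleave freely. This is an illustrative in-process pattern, not BullMQ API, and the function names are hypothetical:

```typescript
// Per-user serialization sketch: one promise chain per user. A user's
// next job only starts when their previous job has settled; jobs from
// different users are unaffected by each other.

const chains = new Map<string, Promise<void>>();
const log: string[] = [];

function runSerialized(user: string, work: () => Promise<void>): Promise<void> {
  const prev = chains.get(user) ?? Promise.resolve();
  const next = prev.then(work);
  // Swallow errors on the stored chain so one failed job does not wedge
  // all of that user's subsequent jobs.
  chains.set(user, next.catch(() => {}));
  return next;
}

const delay = (ms: number) => new Promise<void>(r => setTimeout(r, ms));

const done = Promise.all([
  runSerialized('u1', async () => { await delay(30); log.push('u1:job1'); }),
  runSerialized('u1', async () => { log.push('u1:job2'); }),
  runSerialized('u2', async () => { log.push('u2:job1'); }),
]);
// u1:job2 always runs after u1:job1, even though job1 is slower;
// u2:job1 is free to finish before either of them.
```

In a distributed setup this bookkeeping would have to live in Redis rather than in process memory, which is exactly what makes the problem hard across servers.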
The fix seems easy in theory: limit the concurrency inside a given server to N, ideally the number of cores. So if there is only one server with two cores, and 4 users register jobs in 4 queues, the first two will be picked up and executed, and the remaining two will wait for one of the running jobs to finish.
Is there a way to do this? I feel like I tried everything but maybe there's a simpler solution I didn't think about.