Right now when I have a job that fails (throws an Exception) and is requeued, the system immediately attempts to run it $maxRetries times and then fails altogether.
In the case where the error is caused by a network issue or a third-party system being down, waiting an amount of time before trying again would be beneficial.
I suggest adding a field to mq-mysql\Api\Data\QueueMessageInterface, named something like run_task_at, so that tasks can be scheduled to run in the future. A retry interval, perhaps a number of seconds, could then be made configurable in StartConsumerCommand (and preferably in the ceQueue XML as well - see #16).
I haven't looked into your mq-amqp module but I imagine that a similar thing could be implemented there too.
Thank you.
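To illustrate the idea, here is a minimal sketch of the proposed behavior in Python. It is not the actual mq-mysql API; the names (QueueMessage, requeue_with_delay, fetch_due_messages, retry_interval) are hypothetical, and in practice run_task_at would be a column on the queue message table with the consumer selecting only rows where run_task_at <= NOW().

```python
import time

# Hypothetical sketch: instead of retrying a failed message immediately,
# reschedule it with a run_task_at timestamp in the future. All names here
# are illustrative, not the real mq-mysql interfaces.

class QueueMessage:
    def __init__(self, body, max_retries=3):
        self.body = body
        self.retries = 0
        self.max_retries = max_retries
        self.run_task_at = 0.0  # eligible to run immediately

def requeue_with_delay(message, retry_interval, now=None):
    """On failure, push the message retry_interval seconds into the future.

    Returns False once max_retries is exceeded, matching the current
    fail-altogether behavior after $maxRetries attempts.
    """
    now = time.time() if now is None else now
    message.retries += 1
    if message.retries > message.max_retries:
        return False
    message.run_task_at = now + retry_interval
    return True

def fetch_due_messages(queue, now=None):
    """Consumer side: pick up only messages whose run_task_at has passed.

    The SQL equivalent would be something like
    SELECT ... WHERE run_task_at <= NOW().
    """
    now = time.time() if now is None else now
    return [m for m in queue if m.run_task_at <= now]
```

With this in place, a message that fails because a third-party system is down simply becomes invisible to the consumer until its run_task_at passes, rather than burning through all its retries at once.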