kernel: Add support for stopping workqueues #82614

Merged (3 commits, Dec 12, 2024)
18 changes: 18 additions & 0 deletions include/zephyr/kernel.h
@@ -3606,6 +3606,22 @@ int k_work_queue_drain(struct k_work_q *queue, bool plug);
*/
int k_work_queue_unplug(struct k_work_q *queue);

/** @brief Stop a work queue.
*
* Stops the work queue thread and ensures that no further work will be processed.
* This call is blocking and, if successful, guarantees that the work queue thread
* has terminated cleanly; no work will be processed past this point.
*
* @param queue Pointer to the queue structure.
* @param timeout Maximum time to wait for the work queue to stop.
*
* @retval 0 if the work queue was stopped
* @retval -EALREADY if the work queue was not started (or already stopped)
* @retval -EBUSY if the work queue is actively processing work items
* @retval -ETIMEDOUT if the work queue did not stop within the stipulated timeout
*/
int k_work_queue_stop(struct k_work_q *queue, k_timeout_t timeout);
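
For context, a minimal usage sketch of the new API (mirroring the test added later in this PR; names like my_q, my_stack and my_handler are illustrative, not part of the diff): drain and plug the queue first, then stop it.

static K_THREAD_STACK_DEFINE(my_stack, 1024);
static struct k_work_q my_q;

static void my_handler(struct k_work *work)
{
	ARG_UNUSED(work);
	/* ... do work ... */
}

void shutdown_example(void)
{
	struct k_work_queue_config cfg = { .name = "my_q" };
	struct k_work work;

	k_work_queue_start(&my_q, my_stack, K_THREAD_STACK_SIZEOF(my_stack),
			   K_PRIO_PREEMPT(4), &cfg);

	k_work_init(&work, my_handler);
	k_work_submit_to_queue(&my_q, &work);

	/* Precondition: drain and plug the queue so no new work is accepted. */
	(void)k_work_queue_drain(&my_q, true);

	/* Joins the queue thread; returns 0 once it has terminated. */
	if (k_work_queue_stop(&my_q, K_FOREVER) == 0) {
		/* Submissions to my_q now fail with -ENODEV. */
	}
}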

/** @brief Initialize a delayable work structure.
*
* This must be invoked before scheduling a delayable work structure for the
@@ -3915,6 +3931,8 @@ enum {
	K_WORK_QUEUE_DRAIN = BIT(K_WORK_QUEUE_DRAIN_BIT),
	K_WORK_QUEUE_PLUGGED_BIT = 3,
	K_WORK_QUEUE_PLUGGED = BIT(K_WORK_QUEUE_PLUGGED_BIT),
	K_WORK_QUEUE_STOP_BIT = 4,
	K_WORK_QUEUE_STOP = BIT(K_WORK_QUEUE_STOP_BIT),

	/* Static work queue flags */
	K_WORK_QUEUE_NO_YIELD_BIT = 8,
22 changes: 22 additions & 0 deletions include/zephyr/tracing/tracing.h
@@ -427,6 +427,28 @@
*/
#define sys_port_trace_k_work_queue_start_exit(queue)

/**
* @brief Trace stop of a Work Queue call entry
* @param queue Work Queue structure
* @param timeout Timeout period
*/
#define sys_port_trace_k_work_queue_stop_enter(queue, timeout)

/**
* @brief Trace stop of a Work Queue call blocking
* @param queue Work Queue structure
* @param timeout Timeout period
*/
#define sys_port_trace_k_work_queue_stop_blocking(queue, timeout)

/**
* @brief Trace stop of a Work Queue call exit
* @param queue Work Queue structure
* @param timeout Timeout period
* @param ret Return value
*/
#define sys_port_trace_k_work_queue_stop_exit(queue, timeout, ret)
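
As an illustration (not part of this PR), a tracing backend could implement these hooks with something as simple as printk; the definitions below are a hypothetical sketch:

/* Hypothetical printk-based backend definitions for the new hooks. */
#define sys_port_trace_k_work_queue_stop_enter(queue, timeout) \
	printk("workq %p: stop enter\n", (void *)(queue))

#define sys_port_trace_k_work_queue_stop_blocking(queue, timeout) \
	printk("workq %p: stop blocking\n", (void *)(queue))

#define sys_port_trace_k_work_queue_stop_exit(queue, timeout, ret) \
	printk("workq %p: stop exit (ret %d)\n", (void *)(queue), (int)(ret))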

/**
* @brief Trace Work Queue drain call entry
* @param queue Work Queue structure
41 changes: 41 additions & 0 deletions kernel/work.c
@@ -653,6 +653,12 @@ static void work_queue_main(void *workq_ptr, void *p2, void *p3)
			 * submissions.
			 */
			(void)z_sched_wake_all(&queue->drainq, 1, NULL);
		} else if (flag_test(&queue->flags, K_WORK_QUEUE_STOP_BIT)) {
			/* User has requested that the queue stop. Clear the status flags and exit.
			 */
			flags_set(&queue->flags, 0);
			k_spin_unlock(&lock, key);
			return;
Comment on lines +656 to +661

@TaiJuWu (Member), Dec 7, 2024:

Pardon me.
If we allow a task to keep appending to the queue after K_WORK_QUEUE_STOP_BIT is set, this block will never be executed.
Is that expected?

@Mattemagikern (Contributor, Author), Dec 7, 2024:

A precondition of the _stop(..) function is that the queue has already been plugged.

static inline int queue_submit_locked(struct k_work_q *queue,
				      struct k_work *work)
{
	/* ...
	 * * -ENODEV if the queue isn't running.
	 * * -EBUSY if draining and not chained
	 * * -EBUSY if plugged and not draining
	 * * otherwise OK
	 */

So the case you're thinking of is:

Thread a {
	while (true) {
		queue_submit_locked(....);
	}
}

Thread b {
    k_work_queue_drain(..., true);
    k_work_queue_stop(..., K_FOREVER);
}

Would thread b freeze indefinitely?
Initially, maybe 👍

The question is whether we should also disallow appending work when K_WORK_QUEUE_STOP_BIT is set.
I'm not sure. What's your reasoning?

@TaiJuWu (Member), Dec 8, 2024:

Oh, I missed that we need to call k_work_queue_drain before stopping the workqueue, but the design makes me wonder whether this scenario could happen:

Thread a {
	while (true) {
		queue_submit_locked(....);
	}
}

Thread b    k_work_queue_drain(..., true);
Thread c    k_work_queue_drain(..., true);
Thread b    k_work_queue_stop(..., K_FOREVER);
Thread c    k_work_queue_unplug(...);

Based on the docs:

The workqueue API is designed to be safe when invoked from multiple threads and interrupts. 

I think it is possible, so we need to handle this problem. WDYT?

In general, I prefer:

  1. Making K_WORK_QUEUE_STOP_BIT independent instead of depending on K_WORK_QUEUE_PLUGGED_BIT.
  2. Disallowing appends if K_WORK_QUEUE_STOP_BIT is set.
  3. Never clearing K_WORK_QUEUE_STOP_BIT, even on timeout, so that no further work can be appended to the workqueue, and documenting this.

What do you and others think about this idea?

(Member):

Sounds like a great opportunity for additional test cases

@Mattemagikern (Contributor, Author), Dec 8, 2024:

@TaiJuWu

Another option is to forbid the unplug function if K_WORK_QUEUE_STOP_BIT is set.

Thread a {
	while (true) {
		queue_submit_locked(....); <- Error once K_WORK_QUEUE_PLUGGED_BIT is set & not draining
	}
}

Thread b    k_work_queue_drain(..., true);
Thread c    k_work_queue_drain(..., true);
Thread b    k_work_queue_stop(..., K_FOREVER); <- Would exit in finite time
Thread c    k_work_queue_unplug(...); <- Would yield error & not unplug. 

Either thread B or C would return an error, depending on which accessed the queue first.
I believe this is the smallest change we could make to implement this feature. It shouldn't affect current use cases; what do you think?

@cfriedt Agreed!

@Mattemagikern (Contributor, Author):

The case I'm describing could still deadlock thread b: if the queue never drains, it will continue to accept work.

So we probably also need item 2 that you outlined earlier:

  2. Disallow appends if K_WORK_QUEUE_STOP_BIT is set.

But that changes the behaviour of the queue & drain system slightly; is that OK?

@TaiJuWu (Member), Dec 9, 2024:

Another option is to forbid the unplug function if K_WORK_QUEUE_STOP_BIT is set.

Sounds great.

But that changes the behaviour of the queue & drain system slightly; is that OK?

It is OK for me, but we still need @andyross or other maintainers to confirm, since the caller is time-sensitive.

By the way, I am not sure whether requiring users to call k_work_queue_drain before k_work_queue_stop is good API design.

@Mattemagikern (Contributor, Author):

By the way, I am not sure whether requiring users to call k_work_queue_drain before k_work_queue_stop is good API design.

Myself, being biased (haha), I think it is neat :P queue_drain(..., true) gives the user the guarantee that the work scheduled up to that point will be executed, letting free routines kick in so the application doesn't leak memory.
k_work_queue_stop(...) then stops the thread as soon as the last job has been processed.

I'll commit the changes that we outlined here today :)

@TaiJuWu (Member), Dec 10, 2024:

I think we need to call queue_drain(..., true) internally in k_work_queue_stop and set K_WORK_QUEUE_STOP_BIT before queue_drain(..., true). Is that possible in the current design?

TA                                     TB
queue_drain(..., true)
                                    k_work_queue_stop but flag is not set
queue_unplug
                                   flag is set

Additionally, the System Workqueue should never stop, I guess.

@Mattemagikern (Contributor, Author):

Additionally, the System Workqueue should never stop, I guess.

The patch specifically addresses scenarios where stopping a workqueue becomes interesting, like in a couple of tests I have in mind.
I believe the patch aligns with the broader system goals. If this approach conflicts with project guidelines or expectations, I expect the maintainers will let me know shortly haha 🥲

I think we need to call queue_drain(..., true) internally in k_work_queue_stop and set K_WORK_QUEUE_STOP_BIT before queue_drain(..., true). Is that possible in the current design?

I've just looked through kernel/work.c; it would be a neat addition, one less function call for users to achieve the same effect. What's stopping me is k_work_queue_drain(...): the current implementation blocks with K_FOREVER. To resolve that, I would need to rework that function to take a timeout, and then split the timeout between the draining and the stopping of the thread (once past the drain stage, stopping the queue would be almost instantaneous).
@cfriedt @andyross How would you like me to proceed:

  • rework _drain(..) to include a timeout and incorporate parts of that function into _stop(...)

or

  • keep it as is, with the addition of blocking _unplug(..) and preventing new work from being scheduled if the stop bit is set?

Regarding the race condition you're outlining (@TaiJuWu): I don't think it is a problem, since getting and setting the flags is protected by the spinlock; the winner of the spinlock dictates the outcome.
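
For illustration, here is a hypothetical sketch (not part of the merged diff) of what option 2 above, rejecting submissions once the stop bit is set, could look like inside queue_submit_locked. It relies on the same spinlock that already serializes flag access:

	/* Hypothetical addition to queue_submit_locked(): the caller already
	 * holds the queue's spinlock, so this flag test cannot race with
	 * k_work_queue_stop() setting the bit.
	 */
	if (flag_test(&queue->flags, K_WORK_QUEUE_STOP_BIT)) {
		return -EBUSY;
	}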

		} else {
			/* No work is available and no queue state requires
			 * special handling.
@@ -812,6 +818,41 @@ int k_work_queue_unplug(struct k_work_q *queue)
	return ret;
}

int k_work_queue_stop(struct k_work_q *queue, k_timeout_t timeout)
{
	__ASSERT_NO_MSG(queue);

	SYS_PORT_TRACING_OBJ_FUNC_ENTER(k_work_queue, stop, queue, timeout);
	k_spinlock_key_t key = k_spin_lock(&lock);

	if (!flag_test(&queue->flags, K_WORK_QUEUE_STARTED_BIT)) {
		k_spin_unlock(&lock, key);
		SYS_PORT_TRACING_OBJ_FUNC_EXIT(k_work_queue, stop, queue, timeout, -EALREADY);
		return -EALREADY;
	}

	if (!flag_test(&queue->flags, K_WORK_QUEUE_PLUGGED_BIT)) {
		k_spin_unlock(&lock, key);
		SYS_PORT_TRACING_OBJ_FUNC_EXIT(k_work_queue, stop, queue, timeout, -EBUSY);
		return -EBUSY;
	}

	flag_set(&queue->flags, K_WORK_QUEUE_STOP_BIT);
	notify_queue_locked(queue);
	k_spin_unlock(&lock, key);
	SYS_PORT_TRACING_OBJ_FUNC_BLOCKING(k_work_queue, stop, queue, timeout);
	if (k_thread_join(&queue->thread, timeout)) {
		key = k_spin_lock(&lock);
		flag_clear(&queue->flags, K_WORK_QUEUE_STOP_BIT);
		k_spin_unlock(&lock, key);
		SYS_PORT_TRACING_OBJ_FUNC_EXIT(k_work_queue, stop, queue, timeout, -ETIMEDOUT);
		return -ETIMEDOUT;
	}

	SYS_PORT_TRACING_OBJ_FUNC_EXIT(k_work_queue, stop, queue, timeout, 0);
	return 0;
}
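
Worth noting: since the timed-out path clears K_WORK_QUEUE_STOP_BIT again, a caller that receives -ETIMEDOUT can retry. A hedged sketch, reusing the illustrative my_q from the earlier example:

	/* Hypothetical retry loop: on -ETIMEDOUT the stop bit has been
	 * cleared, so the call can safely be attempted again. A successful
	 * stop (0) or an intervening stop by another thread (-EALREADY)
	 * both exit the loop.
	 */
	while (k_work_queue_stop(&my_q, K_MSEC(100)) == -ETIMEDOUT) {
		k_msleep(10); /* give in-flight work time to finish */
	}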

#ifdef CONFIG_SYS_CLOCK_EXISTS

/* Timeout handler for delayable work.
3 changes: 3 additions & 0 deletions subsys/tracing/ctf/tracing_ctf.h
@@ -90,6 +90,9 @@ extern "C" {
#define sys_port_trace_k_work_queue_init(queue)
#define sys_port_trace_k_work_queue_start_enter(queue)
#define sys_port_trace_k_work_queue_start_exit(queue)
#define sys_port_trace_k_work_queue_stop_enter(queue, timeout)
#define sys_port_trace_k_work_queue_stop_blocking(queue, timeout)
#define sys_port_trace_k_work_queue_stop_exit(queue, timeout, ret)
#define sys_port_trace_k_work_queue_drain_enter(queue)
#define sys_port_trace_k_work_queue_drain_exit(queue, ret)
#define sys_port_trace_k_work_queue_unplug_enter(queue)
11 changes: 11 additions & 0 deletions subsys/tracing/sysview/tracing_sysview.h
@@ -180,6 +180,17 @@ void sys_trace_thread_info(struct k_thread *thread);
#define sys_port_trace_k_work_queue_start_exit(queue) \
	SEGGER_SYSVIEW_RecordEndCall(TID_WORK_QUEUE_START)

#define sys_port_trace_k_work_queue_stop_enter(queue, timeout) \
	SEGGER_SYSVIEW_RecordU32x2(TID_WORK_QUEUE_STOP, (uint32_t)(uintptr_t)queue, \
				   (uint32_t)timeout.ticks)

#define sys_port_trace_k_work_queue_stop_blocking(queue, timeout) \
	SEGGER_SYSVIEW_RecordU32x2(TID_WORK_QUEUE_STOP, (uint32_t)(uintptr_t)queue, \
				   (uint32_t)timeout.ticks)

#define sys_port_trace_k_work_queue_stop_exit(queue, timeout, ret) \
	SEGGER_SYSVIEW_RecordEndCallU32(TID_WORK_QUEUE_STOP, (uint32_t)ret)

#define sys_port_trace_k_work_queue_drain_enter(queue) \
	SEGGER_SYSVIEW_RecordU32(TID_WORK_QUEUE_DRAIN, (uint32_t)(uintptr_t)queue)

1 change: 1 addition & 0 deletions subsys/tracing/sysview/tracing_sysview_ids.h
@@ -129,6 +129,7 @@ extern "C" {
#define TID_WORK_SUBMIT_TO_QUEUE (100u + TID_OFFSET)
#define TID_WORK_QUEUE_UNPLUG (101u + TID_OFFSET)
#define TID_WORK_QUEUE_INIT (102u + TID_OFFSET)
#define TID_WORK_QUEUE_STOP (103u + TID_OFFSET)

#define TID_FIFO_INIT (110u + TID_OFFSET)
#define TID_FIFO_CANCEL_WAIT (111u + TID_OFFSET)
3 changes: 3 additions & 0 deletions subsys/tracing/test/tracing_test.h
@@ -81,6 +81,9 @@
#define sys_port_trace_k_work_queue_init(queue)
#define sys_port_trace_k_work_queue_start_enter(queue)
#define sys_port_trace_k_work_queue_start_exit(queue)
#define sys_port_trace_k_work_queue_stop_enter(queue, timeout)
#define sys_port_trace_k_work_queue_stop_blocking(queue, timeout)
#define sys_port_trace_k_work_queue_stop_exit(queue, timeout, ret)
#define sys_port_trace_k_work_queue_drain_enter(queue)
#define sys_port_trace_k_work_queue_drain_exit(queue, ret)
#define sys_port_trace_k_work_queue_unplug_enter(queue)
3 changes: 3 additions & 0 deletions subsys/tracing/user/tracing_user.h
@@ -155,6 +155,9 @@ void sys_trace_gpio_fire_callback_user(const struct device *port, struct gpio_ca
#define sys_port_trace_k_work_queue_init(queue)
#define sys_port_trace_k_work_queue_start_enter(queue)
#define sys_port_trace_k_work_queue_start_exit(queue)
#define sys_port_trace_k_work_queue_stop_enter(queue, timeout)
#define sys_port_trace_k_work_queue_stop_blocking(queue, timeout)
#define sys_port_trace_k_work_queue_stop_exit(queue, timeout, ret)
#define sys_port_trace_k_work_queue_drain_enter(queue)
#define sys_port_trace_k_work_queue_drain_exit(queue, ret)
#define sys_port_trace_k_work_queue_unplug_enter(queue)
59 changes: 59 additions & 0 deletions tests/kernel/workq/work_queue/src/start_stop.c
(Member):

Probably would be best to rename to workq/workq_api, but I'm not blocking.

@@ -0,0 +1,59 @@
/*
* Copyright (c) 2024 Måns Ansgariusson <[email protected]>
*
* SPDX-License-Identifier: Apache-2.0
*/

#include <zephyr/kernel.h>
#include <zephyr/ztest.h>

#define NUM_TEST_ITEMS 10
/* Each work item can take up to this long to complete */
#define WORK_ITEM_WAIT_ALIGNED \
	k_ticks_to_ms_floor64(k_ms_to_ticks_ceil32(CONFIG_TEST_WORK_ITEM_WAIT_MS) + _TICK_ALIGN)
#define CHECK_WAIT ((NUM_TEST_ITEMS + 1) * WORK_ITEM_WAIT_ALIGNED)

static K_THREAD_STACK_DEFINE(work_q_stack, 1024 + CONFIG_TEST_EXTRA_STACK_SIZE);

static void work_handler(struct k_work *work)
{
	ARG_UNUSED(work);
	k_msleep(CONFIG_TEST_WORK_ITEM_WAIT_MS);
}

ZTEST(workqueue_api, test_k_work_queue_stop)
{
	size_t i;
	struct k_work work;
	struct k_work_q work_q;
	struct k_work works[NUM_TEST_ITEMS];
	struct k_work_queue_config cfg = {
		.name = "test_work_q",
		.no_yield = true,
	};

	zassert_equal(k_work_queue_stop(&work_q, K_FOREVER), -EALREADY,
		      "Succeeded to stop work queue on non-initialized work queue");
	k_work_queue_start(&work_q, work_q_stack, K_THREAD_STACK_SIZEOF(work_q_stack),
			   K_PRIO_PREEMPT(4), &cfg);

	for (i = 0; i < NUM_TEST_ITEMS; i++) {
		k_work_init(&works[i], work_handler);
		zassert_equal(k_work_submit_to_queue(&work_q, &works[i]), 1,
			      "Failed to submit work item");
	}

	/* Wait for the work items to complete */
	k_sleep(K_MSEC(CHECK_WAIT));

	zassert_equal(k_work_queue_stop(&work_q, K_FOREVER), -EBUSY,
		      "Succeeded to stop work queue while it is running & not plugged");
	zassert_true(k_work_queue_drain(&work_q, true) >= 0, "Failed to drain & plug work queue");
	zassert_ok(k_work_queue_stop(&work_q, K_FOREVER), "Failed to stop work queue");

	k_work_init(&work, work_handler);
	zassert_equal(k_work_submit_to_queue(&work_q, &work), -ENODEV,
		      "Succeeded to submit work item to non-initialized work queue");
}

ZTEST_SUITE(workqueue_api, NULL, NULL, NULL, NULL, NULL);