Cleanup of https://github.com/ros2/rclcpp/pull/2683 #2714
base: rolling
Conversation
@mjcarroll @wjwwood @fujitatomoya @alsora
There is still one open issue, and I don't know whether it is a bug in the rmw or not. If you add a guard condition twice to an rcl wait set, the behavior is odd. The problem is that this breaks code like the following:

```c
rcl_guard_condition_t * cond = &guard_condition_->get_rcl_guard_condition();
size_t idx1;
rcl_wait_set_add_guard_condition(&wait_set, cond, &idx1);
size_t idx2;
rcl_wait_set_add_guard_condition(&wait_set, cond, &idx2);
guard_condition_->trigger();
rcl_wait(&wait_set, 1);
if (wait_set.guard_conditions[idx1] != nullptr) {
  // do something
}
// this entry will be a nullptr, even though it should not be
if (wait_set.guard_conditions[idx2] != nullptr) {
  // do something
}
```
Which RMW are you testing with? While the behavior should in theory be consistent across them, they all implement it fairly differently.
@clalancette I used the default, so this should have been Fast DDS.
We discussed this in the ROS 2 PMC meeting as well as in the latest client library working group meeting. For everyone else's benefit, the conclusion was that adding the same entity (guard condition, timer, subscription, etc.) to a wait set twice should be an error; we shouldn't require the rmw layer to allow it and leave both entries non-null. I believe the agreement was to make the check in the functions that add the entities to the wait set, but we may also try to avoid this in the calling code in rclcpp.
Signed-off-by: Janosch Machowinski <[email protected]>
…ditions

A waitable must ensure that all entities it adds to the wait set stay alive during the wait. The ExecutorNotifyWaitable did not do this, leading to crashes if a callback group was dropped while waiting. This commit changes that by holding shared pointers to the guard conditions in use. While at it, it also fixes a possible race where a trigger could get lost, and optimizes the is_ready call by using the indices returned by rcl.

Signed-off-by: Janosch Machowinski <[email protected]>
The test was adding the same guard condition twice. This is not allowed.

Signed-off-by: Janosch Machowinski <[email protected]>
@mjcarroll can you have a look at 5eaf3dc?
I had to rebuild a little bit of context here around this test. I think the original intention was to verify that the guard condition associated with the node's callback group only fires when the contents of the callback group change. This is why we spin, see that the execute happens, and then spin again to verify that it doesn't happen a second time. I think there could be other ways to test this without relying on adding the guard condition to the wait set twice.
Aside from that, this looks good. If we need to come up with another way to test, then that's fine.
We did a quick fix of #2664 in order to get it merged quickly into Jazzy.
This is the proper fix for the issue reported in #2664.