CancellableQueueSynchronizer and ReadWriteMutex #2045
base: develop
Conversation
Here is a link to a successful nightly build with these changes: https://teamcity.jetbrains.com/buildConfiguration/KotlinTools_KotlinxCoroutines_NightlyStress/3009472
```kotlin
arrived.loop { cur ->
    // Are we going to be resumed?
    // The resumption permit should be refused in this case.
    if (cur == parties.toLong()) return false
    // …
}
```
A small problem here. I believe this should be `cur >= parties.toLong()`, but I think it's interesting why, and, more importantly, why the current check didn't cause any problems.

Consider this scenario: `parties - 1` calls happen and go to sleep; then comes the call that is about to wake everyone up, but it doesn't yet do anything after incrementing the counter; then 1000 new calls happen, so `arrived == parties + 1000`. Then every thread is cancelled, performs `onCancellation`, and, since `arrived` is much bigger than `parties.toLong()`, successfully decrements `arrived`. Then the resumer comes, sees `parties - 1` `CANCELLED` marks, and proceeds to wake up `parties - 1` more cells.

This works only because this behavior, while not optimal, doesn't break anything: after the cells are all woken up, this SQS is never used again. However, if `resumeMode` were set to `SYNC`, everything would hang in this scenario, but it seems that with the change above it would work fine in this case too.

It should be noted, though, that without the check for `cur == parties.toLong()` the code would not be linearizable: as is, it guarantees that once the barrier is broken, it can't be accidentally "un-broken" no matter how many cancellations happen, so the subsequent calls to `arrive` all return `false` in any case.
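The race described above can be replayed with a small counter sketch. This is a minimal illustration in plain Java, not the PR's actual code: the names `arrived`, `parties`, `arrive`, and `onCancellation` mirror the discussion, and the cell-waking machinery is elided. With the `>=` check, a cancellation that races past the broken barrier refuses the permit instead of decrementing the counter:

```java
import java.util.concurrent.atomic.AtomicLong;

// Illustrative sketch of the cancellation logic discussed above (not the
// PR's code). `arrived` counts arrive() calls; once it reaches `parties`,
// the barrier is broken and cancellations must refuse the resumption permit.
class BarrierSketch {
    final long parties;
    final AtomicLong arrived = new AtomicLong();

    BarrierSketch(long parties) { this.parties = parties; }

    // arrive() just bumps the counter here; the waking logic is elided.
    long arrive() { return arrived.incrementAndGet(); }

    // With `>=` (instead of `==`), a cancellation that happens after the
    // barrier has broken refuses the permit rather than decrementing the
    // counter, no matter how far `arrived` has overshot `parties`.
    boolean onCancellation() {
        while (true) {
            long cur = arrived.get();
            if (cur >= parties) return false;              // barrier broken: refuse
            if (arrived.compareAndSet(cur, cur - 1)) return true;
        }
    }
}
```

Replaying the scenario from the comment (`parties` arrivals plus 1000 late ones, then 1000 cancellations), every cancellation is refused and the counter never drops back below `parties`, so the barrier cannot be "un-broken".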
It should be either `>=` or `==`. In my opinion, it is always better to decrement the counter when possible, so that the number of `resume` invocations is decreased. We should also add a check at the beginning of `arrive` that the barrier is already completed. Thus, the last `arrive()` works in O(T + K), where K is the number of parties and T is the number of threads (not coroutines).
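The suggested check at the beginning of `arrive` can be sketched as follows, again in plain Java with illustrative names rather than the PR's code. Reading the counter before incrementing it lets every call after the barrier completes return in O(1) without touching any cells:

```java
import java.util.concurrent.atomic.AtomicLong;

// Illustrative sketch of the suggested fast path (not the PR's code).
class BarrierFastPath {
    final long parties;
    final AtomicLong arrived = new AtomicLong();

    BarrierFastPath(long parties) { this.parties = parties; }

    // Returns true iff this call participates in the barrier;
    // false means the barrier is already complete.
    boolean tryArrive() {
        // Fast path: the barrier is already broken, so return
        // without even incrementing the counter.
        if (arrived.get() >= parties) return false;
        long cur = arrived.incrementAndGet();
        // A racing call can still overshoot between the read and the
        // increment; such a call also reports the barrier as broken.
        return cur <= parties;
    }
}
```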
Comments about the first comment.
kotlinx-coroutines-core/common/src/internal/SegmentQueueSynchronizer.kt (multiple resolved review threads)
We should add a comment here once the PR is merged.
A small batch of suggestions for now.
kotlinx-coroutines-core/common/src/internal/ConcurrentLinkedList.kt (resolved review thread)
It's important not to forget about #2401 when introducing RWMutex.
Just out of curiosity, why did this die? It seems well written and well documented (I can't comment on the implementation).
There were unresolved questions about the public API shape and how to expose the read and write parts.
…k` so that it supports buffered channels and pre-allocates elements. Signed-off-by: Nikita Koval <[email protected]>
…ments Signed-off-by: Nikita Koval <[email protected]>
…is detected Signed-off-by: Nikita Koval <[email protected]>
…primitives and `ReadWriteMutex`
…primitives and `ReadWriteMutex` Signed-off-by: Nikita Koval <[email protected]>
That was merged earlier this year (congrats to all, by the way; big PR!). Has there been any internal progress on this since then?
@Tmpod we are evaluating this change on the IDEA codebase.
Could you please elaborate on your interest in this primitive? E.g. what use-cases are you aiming to solve with it?
I am looking to improve the performance of some data structures which are temporarily using just a single mutex for both reading and writing. Using a readers-writer lock seems like a good option.[1] I've seen a lot of other people, in issues and the like, asking for this type of synchronization primitive as well. I may end up writing my own primitive, based on code shared around here, for the time being, but having a high-quality implementation in the library would be really useful (though quite hard, I know!).

[1]: It should be noted that updating this data structure cannot be done atomically since it's quite non-trivial.
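The use-case described here, many concurrent readers plus occasional multi-step updates that cannot be made atomic, is exactly what a readers-writer lock provides. As a thread-blocking stand-in for the coroutine-friendly `ReadWriteMutex` proposed in this PR, the JDK's `ReentrantReadWriteLock` shows the shape; the class and method names below are made up for illustration:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Hypothetical data structure guarded by a readers-writer lock. A
// coroutine-based ReadWriteMutex would play the same role without
// blocking threads while waiting for the lock.
class CachedIndex {
    private final Map<String, Integer> index = new HashMap<>();
    private final ReentrantReadWriteLock rw = new ReentrantReadWriteLock();

    Integer lookup(String key) {
        rw.readLock().lock();              // many readers may hold this at once
        try { return index.get(key); }
        finally { rw.readLock().unlock(); }
    }

    void update(String key, int value) {
        rw.writeLock().lock();             // exclusive: the multi-step update
        try {                              // below is not atomic on its own
            index.remove(key);
            index.put(key, value);
        } finally { rw.writeLock().unlock(); }
    }
}
```

Multiple `lookup` calls can proceed concurrently, while `update` holds the write lock for the whole multi-step modification, so readers never observe an intermediate state.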
Introduce the `SegmentQueueSynchronizer` abstraction for synchronization primitives, along with `ReadWriteMutex`. Please follow the provided documentation and the test suite for details.