Based on the paper "Attached and detached closures in Actors", I would like to see if this idea seems interesting.
This RFC proposes a way for any actor to run attached closures, as long as the owning actor and the actor that evaluates the closure run concurrently but not in parallel.
Motivation
Encore does not allow sharing attached closures because doing so can cause data races. However, attached closures may still reach other actors via future chaining. When this happens, the future's fulfilling actor checks whether the closure is attached or detached; if it is attached, the fulfilling actor sends the closure back to its owner for safe evaluation.
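For concreteness, here is a rough C sketch of that dispatch, assuming each closure records whether it is attached and which actor owns it. All names (closure_t, actor_send, MSG_RUN_CLOSURE) are hypothetical; the real Encore runtime differs.

```c
#include <stdbool.h>

/* Hypothetical runtime hooks -- the real Encore/Pony runtime differs. */
struct actor_t;
enum msg_tag { MSG_RUN_CLOSURE };
void actor_send(struct actor_t *to, enum msg_tag tag, ...);

typedef struct closure_t {
    bool attached;              /* does it capture the owner's mutable state? */
    struct actor_t *owner;      /* the actor whose state it may touch */
    void (*fn)(void *env, void *arg);
    void *env;
} closure_t;

/* Current model: the fulfilling actor runs detached closures in place,
 * but ships attached closures back to their owner. */
void fulfil_and_chain(struct actor_t *fulfiller, closure_t *c, void *value) {
    if (c->attached && c->owner != fulfiller) {
        actor_send(c->owner, MSG_RUN_CLOSURE, c, value);   /* safe: runs at home */
    } else {
        c->fn(c->env, value);                              /* safe: detached or ours */
    }
}
```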
If we set the typing aside for now, attached closures could be sent to another actor and evaluated by it as long as (1) the closure's owning actor and the closure's evaluating actor are pinned to the same core or, (2) they are on different cores but do not execute in parallel. The benefit of both (1) and (2) is increased concurrency for an actor [1]. I sketch a simple solution to this problem which may not yet consider the case where the closure is shared among many actors.
Solution
The actor that runs the attached closure becomes an impostor of the owning actor. While this happens, the owning actor cannot run at the same time as the impostor actor. Implementation-wise, this could mean setting a dirty bit in the owning actor's structure, which needs to be checked (might be too expensive...) before the owning actor can perform any work. If the impostor and the owning actor are placed on different cores and both run in parallel, two things can happen (a sketch of both cases follows below):
The impostor sets the dirty bit first and executes before the owning actor. This effectively means that the owning actor has had some of its work done for it [2]. The impostor can mark the owning actor as blocked or idle until it finishes the work, resuming the owning actor afterwards.
The owning actor executes first and sets the dirty bit. Future chaining cannot then happen synchronously in a safe way. The options are to send the closure back to the owning actor, or to place the closure in a task runner pinned to the same core as the owning actor, which could potentially speed up its safe execution (see footnote [3] for a note on practicality and implementation).
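Below is a minimal C sketch of the dirty-bit handshake covering both cases, assuming a single atomic flag per actor. All names are hypothetical, and the steal-guard and blocked-actor concerns raised in the comments further down are deliberately left out here.

```c
#include <stdatomic.h>
#include <stdbool.h>

/* Hypothetical actor descriptor: `busy` is the dirty bit from the text. */
typedef struct actor_t {
    atomic_bool busy;    /* set while the owner OR an impostor runs on its state */
    /* ... message queue, heap, scheduler bookkeeping ... */
} actor_t;

typedef struct closure_t closure_t;          /* as in the earlier sketch */
void actor_send_closure(actor_t *owner, closure_t *c, void *value);
void closure_run(closure_t *c, void *value);

/* Case 1: the impostor wins the race. It claims the owner via CAS, runs
 * the attached closure in place, then releases the owner. */
static bool try_impersonate(actor_t *owner, closure_t *c, void *value) {
    bool expected = false;
    if (!atomic_compare_exchange_strong(&owner->busy, &expected, true))
        return false;                 /* owner (or another impostor) won */
    closure_run(c, value);            /* executes as-if by the owner */
    atomic_store(&owner->busy, false);
    return true;
}

/* Case 2: the owner wins. Fall back to the safe path: send the closure
 * home, or hand it to a task runner pinned to the owner's core. */
void chain_attached(actor_t *owner, closure_t *c, void *value) {
    if (!try_impersonate(owner, c, value))
        actor_send_closure(owner, c, value);
}
```

The flip side is exactly the cost flagged above: the owner's scheduler must now also acquire busy before dispatching the owner, which adds an atomic operation to the hot path even when no impersonation is happening.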
Problems
This solution does not cause starvation, but it favours the current future-chaining model for detached closures, i.e., the work an actor performs can become arbitrarily long. On the other hand, the impostor actor has become the owning actor, which actually means getting work done without having to go through the scheduler.
Footnotes
[1]: This may cause out-of-order execution. I am not sure whether programmers currently reason about the order in which messages are executed when using future chaining.
[2]: Note that the impostor could potentially do even more work for the owning actor, such as processing the default number of messages.
[3]: We could track the safe execution by having the task runner do a bit more work when work-stealing happens. I am not sure whether this implementation is actually feasible, given how many atomic operations (or locks) it would need at the same time.
Nothing deep, but this seems analogous to the idea of running different method invocations of an active object concurrently (on different cores). If the methods are proven to not interfere, then there will be no problem (except due to false sharing), but if they do interfere, their execution needs to be coordinated, as you indicate in your text.
the closure's owning actor and the closure's evaluating actor are pinned to the same core
This seems like the more promising optimisation to me (as long as the actor being impersonated cannot be stolen by another scheduler while the impostor is running). It doesn't actually give more parallelism (the actors aren't running in parallel if they're on the same core), but it reduces the overhead of having the closure go via a message in the owning actor's queue.
If two actors are running in parallel on different cores, I think it is going to be too much work to figure out whether or not it is safe to run the closure synchronously, and if doing so requires blocking the owning actor it might actually prevent more parallelism than it facilitates. Having detached closures is already a way of tracking which closures can safely be run synchronously.
This may cause out-of-order execution. I am not sure whether programmers currently reason about the order in which messages are executed when using future chaining.
I would generally think that future chaining means "I want this done as soon as the value is available". It could possibly induce surprising behaviour if a single actor chains two order-sensitive callbacks one after the other and the second one can be run out of order. I am not sure this is a realistic scenario, though.
A more serious issue is if an impostor actor can run a closure in an actor which is currently blocking on a future. This would mean that message atomicity is violated, which would make reasoning about actor behaviour very hard. Therefore, impostors should not be allowed to impersonate sleeping actors.
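In terms of the dirty-bit sketch above, that would mean refusing to claim a blocked actor before attempting the CAS, along these (still hypothetical) lines:

```c
/* Extension of the sketch above, assuming actor_t also carries
 *     atomic_bool blocked;   -- set while suspended on a `get`
 * In a real implementation the blocked check and the busy CAS would
 * have to be folded into one atomic word, or this check races. */
static bool try_impersonate_safe(actor_t *owner, closure_t *c, void *value) {
    if (atomic_load(&owner->blocked))     /* hypothetical field, see above */
        return false;                     /* sleeping mid-message: send home */
    return try_impersonate(owner, c, value);
}
```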