In PR #300, I fixed the spike loss between run calls in general (when there are delays and spikes are still in queues or circular eventspaces). This also works when changing heterogeneous delays between run calls. But it currently fails when changing a scalar delay to a different scalar value, and also when switching between scalar and heterogeneous delays.
For changing from one scalar delay to another: I think the eventspaces are created correctly and not deleted between runs; some debugging showed that they are even populated with the spikes from a previous run. I'm not exactly sure what is going wrong here. Leaving this for another time.
For changing between heterogeneous and scalar delays, we would have to somehow copy the spikes between the two methods (since for heterogeneous delays we use vectors in device code as queues, and for scalar delays we use circular eventspaces managed on host).
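The missing copy step could look roughly like the following purely illustrative Python sketch. It assumes the heterogeneous spikequeue keeps one vector of pending spikes per future timestep (offset 0 = due this step) and that the scalar-delay path uses a circular list of delay + 1 eventspaces. The function names and the clamping of out-of-range spikes are my assumptions, not brian2cuda's actual device code; in particular, what to do with spikes scheduled beyond the new scalar delay is exactly the open design question.

```python
def queue_to_eventspaces(pending, delay_steps):
    """Copy pending spikes (one list per future timestep offset) into a
    fresh circular eventspace list of size delay_steps + 1, assuming the
    current eventspace sits at index 0."""
    size = delay_steps + 1
    spaces = [[] for _ in range(size)]
    for offset, spikes in enumerate(pending):
        # Spikes scheduled beyond the new scalar delay cannot be
        # represented; here they are clamped to the last eventspace,
        # which changes their delivery time (a deliberate simplification).
        slot = offset if offset < size else delay_steps
        spaces[slot].extend(spikes)
    return spaces


def eventspaces_to_queue(spaces, current):
    """Copy circular eventspaces back into per-offset pending vectors,
    where `current` is the index of the eventspace for this timestep."""
    size = len(spaces)
    return [list(spaces[(current + offset) % size]) for offset in range(size)]
```

Going from eventspaces back to a spikequeue is lossless; the other direction is only lossless when the pending spikes all fit within the new scalar delay.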
Without having thought about this in depth: how would you actually change between scalar and heterogeneous delays? I don't think our syntax allows for it.
You can, for example, set the delays heterogeneously before the first run and assign a single scalar value before the second run. The delays will then be heterogeneous during the first and homogeneous during the second run. The homogeneous delay is only detected in the generated code, but it still leads to a different algorithm (using a circular list of multiple eventspaces instead of a spikequeue implementation).
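For reference, the scalar-delay mechanism can be sketched in a few lines of Python: with a homogeneous delay of d timesteps, spikes emitted now are written into an eventspace d slots ahead in a circular list of d + 1 eventspaces, and each timestep the list rotates by one. This is an illustrative model only, not the actual host-side eventspace management; the class and method names are made up.

```python
class CircularEventspaces:
    """Circular list of eventspaces for a homogeneous delay (in timesteps)."""

    def __init__(self, delay_steps):
        self.delay_steps = delay_steps
        self.spaces = [[] for _ in range(delay_steps + 1)]
        self.current = 0  # index of the eventspace for this timestep

    def push(self, neuron_indices):
        # A spike emitted now is due for delivery delay_steps from now.
        slot = (self.current + self.delay_steps) % len(self.spaces)
        self.spaces[slot].extend(neuron_indices)

    def advance(self):
        # Move to the next timestep and hand out the spikes due now.
        self.current = (self.current + 1) % len(self.spaces)
        delivered = self.spaces[self.current]
        self.spaces[self.current] = []
        return delivered
```

Changing the scalar delay between runs then amounts to resizing this list while spikes are still in flight, which is where the bug described above presumably hides.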
This is a follow-up of #83