Has there been any progress towards multiple particle support in pymc3 since this issue?
Although we intend to integrate anyway, there is a payoff in overall computational cost for using multiple particles. In theory, the computational cost of running one particle for 100 epochs or 100 particles for one epoch should be the same. However, running multiple particles allows the use of matrix math. Perhaps Theano makes this a non-issue.
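A minimal NumPy sketch of the payoff described above: updating particles one at a time in a Python loop versus updating them all with one matrix operation. The update rule, shapes, and function names here are illustrative assumptions, not anything from pymc3.

```python
import numpy as np

# Toy update rule: one gradient-descent step on f(x) = 0.5 * ||x||^2,
# whose gradient is simply x. Names and shapes are illustrative only.
def update_single(x, lr=0.1):
    """Update one particle (a length-d vector)."""
    return x - lr * x

def update_batch(X, lr=0.1):
    """Update all particles at once; X has shape (n_particles, d)."""
    return X - lr * X  # one matrix operation instead of a Python loop

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))  # 100 particles in 3 dimensions

looped = np.stack([update_single(x) for x in X])  # one particle at a time
batched = update_batch(X)                          # all particles together

assert np.allclose(looped, batched)
```

Both forms do the same arithmetic, but the batched version amortizes per-call overhead across all particles, which is the practical benefit of the matrix-math formulation.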
Not much has happened with multiple-particle support yet. The issue to solve is how to save the particles in a trace (we do have a MultiTrace that can do this) and access them, so perhaps it's better to keep an internal trace in the sampler itself. I suppose you have already solved this problem here?
I think the first version can just loop over all particles in the internal trace. Once we have that, it's easy to place this into a theano.map call, where it could be parallelized, e.g. on a GPU.
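A sketch of what that first version might look like: particles kept in an internal trace and advanced with a plain Python loop each iteration. The `InternalTrace` class, the array layout, and the placeholder transition are all hypothetical, not pymc3's MultiTrace or any real sampler.

```python
import numpy as np

# Hypothetical internal trace holding all particles per iteration;
# the (n_iter, n_particles, dim) layout is an assumption, not the
# layout of pymc3's MultiTrace.
class InternalTrace:
    def __init__(self, n_iter, n_particles, dim):
        self.samples = np.empty((n_iter, n_particles, dim))

    def record(self, i, particles):
        self.samples[i] = particles

def step_particle(x, rng):
    """Placeholder per-particle transition (a random-walk move)."""
    return x + 0.1 * rng.normal(size=x.shape)

def sample(n_iter=10, n_particles=4, dim=2, seed=0):
    rng = np.random.default_rng(seed)
    trace = InternalTrace(n_iter, n_particles, dim)
    particles = rng.normal(size=(n_particles, dim))
    for i in range(n_iter):
        # First version: a plain Python loop over particles.  This is
        # the loop one could later replace with a theano.map call.
        particles = np.stack([step_particle(x, rng) for x in particles])
        trace.record(i, particles)
    return trace

trace = sample()
```

The inner loop is the only part that touches one particle at a time, so it is the natural seam for later vectorization or mapping.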
This looks really neat. What do you think of adding this sampler to pymc3 where it would be directly usable in a probabilistic programming framework?
It's pretty easy to add new samplers, for example: https://github.com/pymc-devs/pymc3/blob/master/pymc3/step_methods/nuts.py#L14
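For a sense of the shape such a sampler takes, here is a framework-free sketch of an object exposing a `step(point) -> point` method, loosely mirroring the structure of the samplers in pymc3/step_methods. The class name, the `logp` callable, and the random-walk Metropolis transition are illustrative assumptions; the actual interface to subclass is in the linked file.

```python
import numpy as np

# Hypothetical, framework-free sketch of a step method: an object with a
# `step(point) -> point` method.  The class name and logp function are
# illustrative, not pymc3 API.
class RandomWalkStep:
    def __init__(self, logp, scale=0.5, seed=0):
        self.logp = logp      # log-density of the target, as a callable
        self.scale = scale    # proposal standard deviation
        self.rng = np.random.default_rng(seed)

    def step(self, point):
        """One Metropolis random-walk transition starting from `point`."""
        proposal = point + self.scale * self.rng.normal(size=point.shape)
        log_accept = self.logp(proposal) - self.logp(point)
        if np.log(self.rng.uniform()) < log_accept:
            return proposal  # accept the proposal
        return point         # reject: stay at the current point

# Toy target: a standard normal in 2 dimensions.
logp = lambda x: -0.5 * np.sum(x ** 2)
stepper = RandomWalkStep(logp)
point = np.zeros(2)
for _ in range(100):
    point = stepper.step(point)
```

Wiring a sampler like this into pymc3 mostly amounts to implementing that single-transition method against the step-method base class shown in the linked source.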