How to specify initial state distribution when using stepthrough #468
-
Hi, I am trying to figure out from the POMDPs.jl documentation how to specify an initial state distribution when running stepthrough simulations, but I cannot find a good example and the documentation is not very clear. I am trying this:

I tried a bunch of permutations, but always seem to get some variation of the same error. What is the right way of doing this?
Replies: 1 comment
-
The docs are here https://juliapomdp.github.io/POMDPs.jl/stable/POMDPTools/simulators/#POMDPTools.Simulators.stepthrough, but there are a lot of options, so it can be confusing.

If it is an MDP, just sample from the distribution before calling stepthrough:

```julia
initial_state_dist = Deterministic( ... )
initial_state = rand(initial_state_dist)
for (a, b, s) in stepthrough(mdp, planner, initial_state, "a,b,s", max_steps=50)
    ...
end
```

If it is a POMDP, you have to explicitly pass the updater, and then the initial state distribution and the initial belief are the same thing:

```julia
initial_state_dist = Deterministic( ... )
up = updater(planner)
for (a, b, s) in stepthrough(pomdp, planner, up, initial_state_dist, "a,b,s", max_steps=50)
    ...
end
```
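To make the MDP case concrete, here is a self-contained sketch using `SimpleGridWorld` from POMDPModels with a `RandomPolicy` from POMDPTools. The model, policy, and start state here are just placeholders for illustration; any MDP and planner would work the same way:

```julia
using POMDPs, POMDPTools, POMDPModels

mdp = SimpleGridWorld()
policy = RandomPolicy(mdp)  # stand-in for a real planner

# For an MDP, the "initial state distribution" is handled by sampling a
# concrete state yourself before the simulation starts:
initial_state_dist = Deterministic(GWPos(1, 1))
initial_state = rand(initial_state_dist)

for (s, a, r) in stepthrough(mdp, policy, initial_state, "s,a,r", max_steps=10)
    @show s, a, r
end
```

Since `Deterministic` puts all probability mass on one point, `rand` here always returns `GWPos(1, 1)`; with a stochastic initial distribution (e.g. a uniform over states) the same pattern still applies.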