
# MuZero PyTorch

A PyTorch implementation of *Mastering Atari, Go, Chess and Shogi by Planning with a Learned Model* (MuZero) by DeepMind, applied to the CartPole-v0 environment.

- MuZero + naive tree search is working.
- MuZero + Monte Carlo tree search (MCTS) is now working (see the pUCT sketch below).
  The search policy should be approximately uniform during the first episode; if it is not, restart training.
- Possible improvements: more tricks/hacks for more stable MCTS training.
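
For context (this is not the repository's actual code): MuZero's MCTS selects actions inside the learned model with the paper's pUCT rule. A minimal sketch, assuming a hypothetical node object with `children`, `visit_count`, `value_sum`, and `prior` fields:

```python
import math

def puct_select(node, c1=1.25, c2=19652.0):
    """Pick the child action maximizing the pUCT score from the MuZero paper:
    mean value plus the prior weighted by a visit-count exploration term."""
    total_visits = sum(child.visit_count for child in node.children.values())
    exploration = c1 + math.log((total_visits + c2 + 1) / c2)

    def score(child):
        # Mean value of the child (0 if it has never been visited).
        q = child.value_sum / child.visit_count if child.visit_count else 0.0
        # Exploration bonus: high prior and low visit count raise the score.
        u = child.prior * math.sqrt(total_visits) / (1 + child.visit_count)
        return q + u * exploration

    return max(node.children.items(), key=lambda kv: score(kv[1]))[0]
```

With roughly uniform priors (as expected in the first episode), the exploration term dominates and the search spreads its visits evenly, which is why a strongly skewed first-episode search policy is a sign that training should be restarted.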

## MCTS results

*(figure: training_mcts)*

## Naive tree search results

The tree is fully expanded to depth n inside the learned model; each leaf is scored by the discounted rewards accumulated along its path plus the discounted value estimate at the leaf. The agent then takes the first action on the path from the root to the highest-scoring leaf.
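
A minimal sketch of this search, assuming a hypothetical model interface `initial_inference(obs) -> (state, value)` and `recurrent_inference(state, action) -> (next_state, reward, value)` (names and the discount value are illustrative, not the repository's actual API):

```python
import itertools

def naive_tree_search(model, obs, depth, num_actions, gamma=0.997):
    """Exhaustively expand all action sequences up to `depth` (>= 1) and return
    the first action of the sequence with the highest discounted return
    (accumulated discounted rewards plus the discounted leaf value)."""
    root_state, _ = model.initial_inference(obs)  # assumed interface
    best_score, best_first_action = float("-inf"), 0

    for actions in itertools.product(range(num_actions), repeat=depth):
        state, score, discount = root_state, 0.0, 1.0
        for a in actions:
            state, reward, value = model.recurrent_inference(state, a)  # assumed interface
            score += discount * reward
            discount *= gamma
        score += discount * value  # bootstrap with the leaf value estimate
        if score > best_score:
            best_score, best_first_action = score, actions[0]

    return best_first_action
```

Unlike MCTS, this enumerates all `num_actions ** depth` leaves, which is feasible for CartPole's two actions at small depths but does not scale to larger action spaces.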

*(figures: cartpole_naive_tree_search, training_naive_tree_search)*