# Multi-armed-bandit-research

The main purpose of this project is to compare different multi-armed bandit algorithms (epsilon-greedy, UCB, Thompson Sampling) across different reward environments (Bernoulli, Gaussian).
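For intuition on the simplest of these strategies, here is a minimal, self-contained sketch of epsilon-greedy on a Bernoulli bandit (this is an illustration, not the code in this repository; the arm probabilities and parameters below are made up):

```python
import random

def epsilon_greedy(bandit_probs, epsilon=0.1, steps=1000, seed=0):
    """Epsilon-greedy on a Bernoulli bandit: with probability epsilon pick a
    random arm (explore), otherwise pick the arm with the highest estimated
    mean reward (exploit)."""
    rng = random.Random(seed)
    n_arms = len(bandit_probs)
    counts = [0] * n_arms      # number of pulls per arm
    values = [0.0] * n_arms    # running mean reward per arm
    total_reward = 0.0
    for _ in range(steps):
        if rng.random() < epsilon:
            arm = rng.randrange(n_arms)                       # explore
        else:
            arm = max(range(n_arms), key=lambda a: values[a]) # exploit
        reward = 1.0 if rng.random() < bandit_probs[arm] else 0.0
        counts[arm] += 1
        # incremental update of the sample mean for the pulled arm
        values[arm] += (reward - values[arm]) / counts[arm]
        total_reward += reward
    return values, counts, total_reward

values, counts, total = epsilon_greedy([0.2, 0.5, 0.8])
```

UCB and Thompson Sampling replace the explore/exploit choice above with a confidence bonus and posterior sampling, respectively.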

## Getting started

Run the following command to clone the repository:

```
git clone https://github.com/Stepan-Makarenko/Multi-armed-bandit-research.git
```

Run the following command to launch the experiment configured by `gaussian_testbed.json`:

```
python3 run.py --config ./configs/gaussian_testbed.json -pp ./plots/gausian_testbed
```
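The exact config schema is defined by `run.py`. As a purely hypothetical illustration (none of these field names are confirmed by the repository), a Gaussian-testbed config might pair environment parameters with the agents being compared:

```json
{
  "environment": { "type": "gaussian", "n_arms": 10, "reward_std": 1.0 },
  "agents": ["epsilon_greedy", "ucb", "thompson_sampling"],
  "steps": 1000,
  "runs": 200
}
```

Check `./configs/gaussian_testbed.json` in the repository for the actual fields.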

## Output plots

The README displays two result plots (plot 1 and plot 2), generated in the directory passed via `-pp`.