This repository contains the source code for the paper:

Liyuan Zheng, Tanner Fiez, Zane Alumbaugh, Benjamin Chasnov, and Lillian J. Ratliff, "Stackelberg Actor-Critic: Game-Theoretic Reinforcement Learning Algorithms", AAAI 2022. [ArXiv]

We use the Spinning Up framework for our Stackelberg actor-critic implementations.

spinup/run_experiments.py is a sample script that runs vanilla actor-critic, DDPG, SAC, and their Stackelberg versions.
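
As a rough sketch of what such a script can look like with Spinning Up's ExperimentGrid utility (this is an illustration, not the script itself; the environment, seeds, and hyperparameters below are placeholders, and the Stackelberg variants have their own entry points in run_experiments.py):

```python
# Sketch only: launching a small grid of DDPG runs over several seeds
# with Spinning Up's ExperimentGrid. The repo's Stackelberg variants
# are expected to expose a similar function interface; see
# spinup/run_experiments.py for the actual entry points.
from spinup import ddpg_pytorch as ddpg
from spinup.utils.run_utils import ExperimentGrid

eg = ExperimentGrid(name='ddpg-bench')
eg.add('env_name', 'HalfCheetah-v2', '', True)  # env built via gym.make
eg.add('seed', [10 * i for i in range(3)])      # three random seeds
eg.add('epochs', 50)
eg.add('steps_per_epoch', 4000)
eg.run(ddpg, num_cpu=1)                         # launch all runs
```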

The implementations of the individual algorithms can be found under the spinup/ directory.

example.ipynb contains the code for the motivating example in Section 3.2 of the paper.

Welcome to Spinning Up in Deep RL!

This is an educational resource produced by OpenAI that makes it easier to learn about deep reinforcement learning (deep RL).

For the unfamiliar: reinforcement learning (RL) is a machine learning approach for teaching agents how to solve tasks by trial and error. Deep RL refers to the combination of RL with deep learning.
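
To make "trial and error" concrete, here is a minimal sketch of the agent-environment interaction loop using the classic Gym API (a random policy stands in for a learned one; a real deep RL algorithm would update the policy from the observed rewards):

```python
# Minimal sketch of the RL interaction loop with OpenAI Gym.
# The random policy is a placeholder for a learned agent.
import gym

env = gym.make('CartPole-v1')
obs = env.reset()
episode_return = 0.0
done = False
while not done:
    action = env.action_space.sample()           # trial: pick an action
    obs, reward, done, info = env.step(action)   # feedback: observe reward
    episode_return += reward
print(f'Episode return: {episode_return}')
```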

This module contains a variety of helpful resources, including:

  • a short introduction to RL terminology, kinds of algorithms, and basic theory,
  • an essay about how to grow into an RL research role,
  • a curated list of important papers organized by topic,
  • a well-documented code repo of short, standalone implementations of key algorithms,
  • and a few exercises to serve as warm-ups.

Get started at spinningup.openai.com!
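
For example, once Spinning Up is installed, an algorithm can be launched from a short Python script (a sketch following the pattern in the Spinning Up documentation; the environment, network sizes, and output directory are placeholders):

```python
# Sketch of Spinning Up's function-call API, following the pattern
# shown in the Spinning Up documentation.
import gym
from spinup import ppo_pytorch as ppo

env_fn = lambda: gym.make('LunarLander-v2')        # environment factory
ac_kwargs = dict(hidden_sizes=[64, 64])            # actor-critic net sizes
logger_kwargs = dict(output_dir='data/ppo-test', exp_name='ppo-test')

ppo(env_fn=env_fn, ac_kwargs=ac_kwargs,
    steps_per_epoch=4000, epochs=50,
    logger_kwargs=logger_kwargs)
```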

Citing Spinning Up

If you reference or use Spinning Up in your research, please cite:

```
@article{SpinningUp2018,
    author = {Achiam, Joshua},
    title = {{Spinning Up in Deep Reinforcement Learning}},
    year = {2018}
}
```