garage is a toolkit for developing and evaluating reinforcement learning algorithms, and an accompanying library of state-of-the-art implementations built using that toolkit.
The toolkit provides a wide range of modular tools for implementing RL algorithms (a minimal usage sketch follows the list), including:
- Composable neural network models
- Replay buffers
- High-performance samplers
- An expressive experiment definition interface
- Tools for reproducibility (e.g. set a global random seed which all components respect)
- Logging to many outputs, including TensorBoard
- Reliable experiment checkpointing and resuming
- Environment interfaces for many popular benchmark suites
- Support for running garage in diverse environments, including always up-to-date Docker containers
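The experiment definition interface ties most of these pieces together in a single launcher script. Below is a minimal sketch following the pattern of garage's example launchers; exact module paths and constructor arguments (for example, how the sampler is wired in) vary between releases, so treat the signatures shown as assumptions to check against your installed version.

```python
#!/usr/bin/env python3
"""Minimal garage launcher sketch (PyTorch TRPO on a Gym task).

Illustrative only: argument names differ slightly between garage releases.
"""
import torch

from garage import wrap_experiment
from garage.envs import GymEnv
from garage.experiment.deterministic import set_seed
from garage.sampler import LocalSampler
from garage.torch.algos import TRPO
from garage.torch.policies import GaussianMLPPolicy
from garage.torch.value_functions import GaussianMLPValueFunction
from garage.trainer import Trainer


@wrap_experiment  # sets up the experiment directory, logging, and snapshotting
def trpo_pendulum(ctxt=None, seed=1):
    set_seed(seed)  # global seed respected by garage components
    env = GymEnv('InvertedDoublePendulum-v2')

    trainer = Trainer(ctxt)
    policy = GaussianMLPPolicy(env.spec,
                               hidden_sizes=[32, 32],
                               hidden_nonlinearity=torch.tanh)
    value_function = GaussianMLPValueFunction(env_spec=env.spec,
                                              hidden_sizes=(32, 32))
    sampler = LocalSampler(agents=policy,
                           envs=env,
                           max_episode_length=env.spec.max_episode_length)
    algo = TRPO(env_spec=env.spec,
                policy=policy,
                value_function=value_function,
                sampler=sampler,  # older releases configure the sampler elsewhere
                discount=0.99)

    trainer.setup(algo, env)
    trainer.train(n_epochs=100, batch_size=1024)


trpo_pendulum(seed=1)
```

Running the decorated function should produce a log directory with CSV and TensorBoard output plus snapshots that can be resumed later, per the logging and checkpointing features listed above.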
See the latest documentation for getting started instructions and detailed APIs.
`pip install --user garage`
Join the garage-announce mailing list for infrequent updates (<1/mo.) on the status of the project and new releases.
Need some help? Want to ask whether garage is right for your project? Have a question which is not quite a bug and not quite a feature request?
Join the community Slack by filling out this Google Form.
The table below summarizes the algorithms available in garage.
Algorithm | Framework(s) |
---|---|
CEM | numpy |
CMA-ES | numpy |
REINFORCE (a.k.a. VPG) | PyTorch, TensorFlow |
DDPG | PyTorch, TensorFlow |
DQN | TensorFlow |
DDQN | TensorFlow |
ERWR | TensorFlow |
NPO | TensorFlow |
PPO | PyTorch, TensorFlow |
REPS | TensorFlow |
TD3 | TensorFlow |
TNPG | TensorFlow |
TRPO | PyTorch, TensorFlow |
MAML | PyTorch |
RL2 | TensorFlow |
PEARL | PyTorch |
SAC | PyTorch |
MTSAC | PyTorch |
MTPPO | PyTorch, TensorFlow |
MTTRPO | PyTorch, TensorFlow |
Task Embedding | TensorFlow |
Behavioral Cloning | PyTorch |
garage requires Python 3.6+. If you need Python 3.5 support, the last garage release to support Python 3.5 was v2020.06.
The package is tested on Ubuntu 18.04, and is also known to run on Ubuntu 16.04 and 20.04, as well as recent versions of macOS using Homebrew. Windows users can install garage via WSL, or by making use of the Docker containers.
We currently support PyTorch and TensorFlow for implementing the neural network portions of RL algorithms, and additions of new framework support are always welcome. PyTorch modules can be found in the package `garage.torch`, and TensorFlow modules can be found in the package `garage.tf`. Algorithms which do not require neural networks are found in the package `garage.np`, as the import sketch below illustrates.
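For example, the same algorithm name may live under more than one framework-specific subpackage. The class names below (`PPO`, `CMAES`) exist in recent garage releases, but check the API documentation for your installed version.

```python
# Illustrative imports only: each framework's implementation lives under the
# matching subpackage of garage.
from garage.torch.algos import PPO as TorchPPO  # PyTorch implementation
from garage.tf.algos import PPO as TFPPO        # TensorFlow implementation
from garage.np.algos import CMAES               # numpy-only, no neural networks
```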
The package is available for download on PyPI, and we ensure that it installs successfully into environments defined using conda, Pipenv, and virtualenv.
The most important feature of garage is its comprehensive automated unit test and benchmarking suite, which helps ensure that the algorithms and modules in garage maintain state-of-the-art performance as the software changes.
Our testing strategy has three pillars:
- Automation: We use continuous integration to test all modules and algorithms in garage before adding any change. The full installation and test suite is also run nightly, to detect regressions.
- Acceptance Testing: Any commit which might change the performance of an algorithm is subjected to comprehensive benchmarks on the relevant algorithms before it is merged.
- Benchmarks and Monitoring: We benchmark the full suite of algorithms against their relevant benchmarks and widely-used implementations regularly, to detect regressions and improvements we may have missed.
Release | Last date of support |
---|---|
v2020.06 | February 28th, 2021 |
v2019.10 | October 31st, 2020 |
Garage releases a new stable version approximately every 4 months, in February, June, and October. Maintenance releases have a stable API and dependency tree, and receive bug fixes and critical improvements but not new features. We currently support each release for a window of 8 months.
If you use garage for academic research, please cite the repository using the following BibTeX entry. You should update the `commit` field with the commit or release tag your publication uses.
@misc{garage,
author = {The garage contributors},
title = {Garage: A toolkit for reproducible reinforcement learning research},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/rlworkgroup/garage}},
commit = {be070842071f736eb24f28e4b902a9f144f5c97b}
}
The original code for garage was adopted from a predecessor project called rllab. The garage project is grateful for the contributions of the original rllab authors, and hopes to continue advancing the state of reproducibility in RL research in the same spirit.
rllab was developed by Rocky Duan (UC Berkeley/Covariant), Peter Chen (UC Berkeley/Covariant), Rein Houthooft (UC Berkeley/Happy Elements), John Schulman (UC Berkeley/OpenAI), and Pieter Abbeel (UC Berkeley/Covariant).