Updated on 2022.01.01 DI-engine-v0.2.3 (beta)
DI-engine is a generalized decision intelligence engine. It supports various deep reinforcement learning algorithms (link):
- Fundamental DRL algorithms, such as DQN, PPO, SAC, and R2D2
- Multi-agent RL algorithms like QMIX, MAPPO
- Imitation learning algorithms (BC/IRL/GAIL), such as GAIL, SQIL, and Guided Cost Learning
- Exploration algorithms like HER, RND, ICM
- Offline RL algorithms: CQL, TD3BC
- Model-based RL algorithms: MBPO
DI-engine aims to standardize different RL environments and applications. Various training pipelines and customized decision AI applications are also supported:
- Traditional academic environments
- Real world decision AI applications
- DI-star: Decision AI in StarCraft II
- DI-drive: Auto-driving platform
- GoBigger: Multi-Agent Decision Intelligence Environment
- DI-smartcross: Decision AI in Traffic Light Control
- General nested data lib
- treevalue: Tree-nested data structure
- DI-treetensor: Tree-nested PyTorch tensor Lib
- Docs and Tutorials
- DI-engine-docs
- awesome-model-based-RL: A curated list of awesome Model-Based RL resources
DI-engine also provides system optimizations and designs for efficient and robust large-scale RL training:
- DI-orchestrator: RL Kubernetes Custom Resource and Operator Lib
- DI-hpc: RL HPC OP Lib
- DI-store: RL Object Store
Have fun with exploration and exploitation.
You can simply install DI-engine from PyPI with the following command:

```bash
pip install DI-engine
```
If you use Anaconda or Miniconda, you can install DI-engine from the opendilab conda channel with the following command:

```bash
conda install -c opendilab di-engine
```
For more information, refer to the installation guide.
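To confirm the install worked, you can import the package and query its installed version. A minimal sketch; it only assumes the PyPI distribution name `DI-engine` and the top-level package name `ding`:

```python
# Post-install sanity check: import the package and print the installed version.
import importlib.metadata

import ding  # DI-engine's top-level Python package is named `ding`

print(importlib.metadata.version("DI-engine"))  # e.g. '0.2.3'
```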
Our Docker Hub repo can be found here. We prepare a base image and env images with common RL environments:
- base: opendilab/ding:nightly
- atari: opendilab/ding:nightly-atari
- mujoco: opendilab/ding:nightly-mujoco
- smac: opendilab/ding:nightly-smac
The detailed documentation is hosted at doc | Chinese docs (中文文档).
How to migrate a new RL Env (English | Chinese)
Bonus: train an RL agent with one line of code:

```bash
ding -m serial -e cartpole -p dqn -s 0
```
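The same experiment can also be launched from Python. Below is a minimal sketch that assumes the `serial_pipeline` entry function and the bundled cartpole DQN config from dizoo; exact import paths may vary between DI-engine versions:

```python
# Python equivalent of `ding -m serial -e cartpole -p dqn -s 0` (sketch).
# Assumes ding.entry.serial_pipeline and the dizoo cartpole DQN config;
# import paths may differ slightly across DI-engine versions.
from ding.entry import serial_pipeline
from dizoo.classic_control.cartpole.config.cartpole_dqn_config import (
    cartpole_dqn_config,
    cartpole_dqn_create_config,
)

if __name__ == "__main__":
    # serial_pipeline takes (main_config, create_config) plus a random seed
    serial_pipeline((cartpole_dqn_config, cartpole_dqn_create_config), seed=0)
```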
| No. | Algorithm | Label | Doc and Implementation | Runnable Demo |
| --- | --- | --- | --- | --- |
| 1 | DQN | discrete | DQN doc, DQN doc (Chinese), policy/dqn | `python3 -u cartpole_dqn_main.py` / `ding -m serial -c cartpole_dqn_config.py -s 0` |
| 2 | C51 | discrete | policy/c51 | `ding -m serial -c cartpole_c51_config.py -s 0` |
| 3 | QRDQN | discrete | policy/qrdqn | `ding -m serial -c cartpole_qrdqn_config.py -s 0` |
| 4 | IQN | discrete | policy/iqn | `ding -m serial -c cartpole_iqn_config.py -s 0` |
| 5 | Rainbow | discrete | policy/rainbow | `ding -m serial -c cartpole_rainbow_config.py -s 0` |
| 6 | SQL | discrete | policy/sql | `ding -m serial -c cartpole_sql_config.py -s 0` |
| 7 | R2D2 | discrete, dist | policy/r2d2 | `ding -m serial -c cartpole_r2d2_config.py -s 0` |
| 8 | A2C | discrete | policy/a2c | `ding -m serial -c cartpole_a2c_config.py -s 0` |
| 9 | PPO/MAPPO | discrete, continuous, MARL | policy/ppo | `python3 -u cartpole_ppo_main.py` / `ding -m serial_onpolicy -c cartpole_ppo_config.py -s 0` |
| 10 | PPG | discrete | policy/ppg | `python3 -u cartpole_ppg_main.py` |
| 11 | ACER | discrete | policy/acer | `ding -m serial -c cartpole_acer_config.py -s 0` |
| 12 | IMPALA | discrete, dist | policy/impala | `ding -m serial -c cartpole_impala_config.py -s 0` |
| 13 | DDPG/PADDPG | continuous, hybrid | policy/ddpg | `ding -m serial -c pendulum_ddpg_config.py -s 0` |
| 14 | TD3 | continuous | policy/td3 | `python3 -u pendulum_td3_main.py` / `ding -m serial -c pendulum_td3_config.py -s 0` |
| 15 | D4PG | continuous | policy/d4pg | `python3 -u pendulum_d4pg_config.py` |
| 16 | SAC | continuous | policy/sac | `ding -m serial -c pendulum_sac_config.py -s 0` |
| 17 | PDQN | hybrid | policy/pdqn | `ding -m serial -c gym_hybrid_pdqn_config.py -s 0` |
| 18 | MPDQN | hybrid | policy/pdqn | `ding -m serial -c gym_hybrid_mpdqn_config.py -s 0` |
| 19 | QMIX | MARL | policy/qmix | `ding -m serial -c smac_3s5z_qmix_config.py -s 0` |
| 20 | COMA | MARL | policy/coma | `ding -m serial -c smac_3s5z_coma_config.py -s 0` |
| 21 | QTran | MARL | policy/qtran | `ding -m serial -c smac_3s5z_qtran_config.py -s 0` |
| 22 | WQMIX | MARL | policy/wqmix | `ding -m serial -c smac_3s5z_wqmix_config.py -s 0` |
| 23 | CollaQ | MARL | policy/collaq | `ding -m serial -c smac_3s5z_collaq_config.py -s 0` |
| 24 | GAIL | IL | reward_model/gail | `ding -m serial_gail -c cartpole_dqn_gail_config.py -s 0` |
| 25 | SQIL | IL | entry/sqil | `ding -m serial_sqil -c cartpole_sqil_config.py -s 0` |
| 26 | DQFD | IL | policy/dqfd | `ding -m serial_dqfd -c cartpole_dqfd_config.py -s 0` |
| 27 | R2D3 | IL | R2D3 doc (Chinese), policy/r2d3 | `python3 -u pong_r2d3_r2d2expert_config.py` |
| 28 | Guided Cost Learning | IL | reward_model/guided_cost | `python3 lunarlander_gcl_config.py` |
| 29 | TREX | IL | reward_model/trex | `python3 mujoco_trex_main.py` |
| 30 | HER | exp | reward_model/her | `python3 -u bitflip_her_dqn.py` |
| 31 | RND | exp | reward_model/rnd | `python3 -u cartpole_ppo_rnd_main.py` |
| 32 | ICM | exp | ICM doc (Chinese), reward_model/icm | `python3 -u cartpole_ppo_icm_config.py` |
| 33 | CQL | offline | policy/cql | `python3 -u d4rl_cql_main.py` |
| 34 | TD3BC | offline | policy/td3_bc | `python3 -u mujoco_td3_bc_main.py` |
| 35 | MBPO | MBRL | model/template/model_based/mbpo | `python3 -u sac_halfcheetah_mopo_default_config.py` |
| 36 | PER | other | worker/replay_buffer | rainbow demo |
| 37 | GAE | other | rl_utils/gae | ppo demo |
- `discrete` means discrete action space, a label used only for the core DRL algorithms (1-18)
- `continuous` means continuous action space, a label used only for the core DRL algorithms (1-18)
- `hybrid` means hybrid (discrete + continuous) action space (1-18)
- `dist` means distributed-training (collector-learner parallel) RL algorithm
- `MARL` means multi-agent RL algorithm
- `exp` means RL algorithm related to exploration and sparse reward
- `IL` means imitation learning, including behaviour cloning, inverse RL, and adversarial structured IL
- `offline` means offline RL algorithm
- `MBRL` means model-based RL algorithm
- `other` means an algorithm from another sub-direction, usually used as a plug-in in the overall pipeline
P.S. The `.py` files in the `Runnable Demo` column can be found in dizoo.
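These demo configs are plain Python objects, so they can be copied and tweaked before launching. A hedged sketch follows; the `exp_name` and `policy.learn.learning_rate` fields follow the common DI-engine config layout, but individual configs and versions may differ:

```python
# Sketch: override a couple of fields in a dizoo demo config, then launch.
# DI-engine configs are EasyDict-style, so attribute access works; the field
# names below follow the usual layout but may differ per config/version.
from copy import deepcopy

from ding.entry import serial_pipeline
from dizoo.classic_control.cartpole.config.cartpole_dqn_config import (
    cartpole_dqn_config,
    cartpole_dqn_create_config,
)

main_config = deepcopy(cartpole_dqn_config)
main_config.exp_name = 'cartpole_dqn_my_run'    # output dir for logs/checkpoints
main_config.policy.learn.learning_rate = 1e-3   # hyperparameter override

serial_pipeline((main_config, deepcopy(cartpole_dqn_create_config)), seed=0)
```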
| No. | Environment | Label | Visualization | Code and Doc Links |
| --- | --- | --- | --- | --- |
| 1 | atari | discrete | - | code link, env tutorial, env guide (Chinese) |
| 2 | box2d/bipedalwalker | continuous | - | dizoo link, env guide (Chinese) |
| 3 | box2d/lunarlander | discrete | - | dizoo link, env guide (Chinese) |
| 4 | classic_control/cartpole | discrete | - | dizoo link, env guide (Chinese) |
| 5 | classic_control/pendulum | continuous | - | dizoo link, env guide (Chinese) |
| 6 | competitive_rl | discrete, selfplay | - | dizoo link, env guide (Chinese) |
| 7 | gfootball | discrete, sparse, selfplay | - | dizoo link, env guide (Chinese) |
| 8 | minigrid | discrete, sparse | - | dizoo link, env guide (Chinese) |
| 9 | mujoco | continuous | - | dizoo link, env guide (Chinese) |
| 10 | multiagent_particle | discrete, MARL | - | dizoo link, env guide (Chinese) |
| 11 | overcooked | discrete, MARL | - | dizoo link, env tutorial |
| 12 | procgen | discrete | - | dizoo link, env guide (Chinese) |
| 13 | pybullet | continuous | - | dizoo link, env guide (Chinese) |
| 14 | smac | discrete, MARL | - | dizoo link, env guide (Chinese) |
| 15 | d4rl | continuous, IL | - | dizoo link, env guide (Chinese) |
| 16 | league_demo | discrete, selfplay | - | dizoo link |
| 17 | pomdp atari | discrete | - | dizoo link |
| 18 | bsuite | discrete | - | dizoo link, env tutorial |
| 19 | ImageNet | IL | - | dizoo link, env guide (Chinese) |
| 20 | slime_volleyball | discrete, selfplay | - | dizoo link, env tutorial, env guide (Chinese) |
| 21 | gym_hybrid | hybrid | - | dizoo link, env guide (Chinese) |
| 22 | GoBigger | hybrid, MARL, selfplay | - | opendilab link, env tutorial, env guide (Chinese) |
| 23 | gym_soccer | hybrid | - | dizoo link, env guide (Chinese) |
| 24 | multiagent_mujoco | continuous, MARL | - | dizoo link, env guide (Chinese) |
| 25 | classic_control/bitflip | discrete, sparse | - | dizoo link, env guide (Chinese) |
- `hybrid` means hybrid (discrete + continuous) action space
- `MARL` means multi-agent RL environment
- `sparse` means environment related to exploration and sparse reward
- `IL` means imitation learning or supervised learning dataset
- `selfplay` means environment that allows agent-vs-agent battle

(The action-space labels `discrete` and `continuous` follow the algorithm key above.)
P.S. Some environments in Atari, such as MontezumaRevenge, are also of the sparse-reward type.
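All of these environments are adapted to DI-engine's unified env interface. The sketch below assumes the `DingEnvWrapper` helper around a Gym env and the `BaseEnvTimestep` return convention; details may vary between versions:

```python
# Sketch of DI-engine's unified env interface (assumes gym + DingEnvWrapper).
# env.step returns a BaseEnvTimestep namedtuple: (obs, reward, done, info).
import gym
from ding.envs import DingEnvWrapper

raw_env = gym.make('CartPole-v0')
env = DingEnvWrapper(raw_env)
env.seed(0)
obs = env.reset()
done = False
while not done:
    action = raw_env.action_space.sample()  # random policy, for illustration
    timestep = env.step(action)
    obs, done = timestep.obs, timestep.done
env.close()
```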
- File an issue on GitHub
- Open or participate in our forum
- Discuss in the DI-engine Slack channel or QQ group (700157520)
- Contribute to our future plans in the Roadmap
We appreciate all feedback and contributions that improve DI-engine, in both algorithms and system design. CONTRIBUTING.md offers the necessary information.
```latex
@misc{ding,
    title={{DI-engine: OpenDILab} Decision Intelligence Engine},
    author={DI-engine Contributors},
    publisher={GitHub},
    howpublished={\url{https://github.com/opendilab/DI-engine}},
    year={2021},
}
```
DI-engine is released under the Apache 2.0 license.