This repo contains a refactored version of PPO from stable-baselines3, with:
- Minor changes to improve performance
- Minor changes to mirror the OpenAI baselines ppo2 implementation
- Configs needed to run experiments on MuJoCo and Atari environments with the default OpenAI baselines hyperparameters
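For reference, a sketch of the default ppo2 hyperparameters from OpenAI baselines (`baselines/ppo2/defaults.py`) that these configs are meant to mirror; the exact keys used by this repo's config files may differ, so treat the names and values below as an illustration of the baselines defaults rather than this repo's API. The annealing argument `f` decays linearly from 1 to 0 over training:

```python
# Approximate OpenAI baselines ppo2 defaults for MuJoCo-style continuous control.
# `f` is the remaining training fraction (1 -> 0), used for linear annealing.
mujoco_defaults = dict(
    nsteps=2048,              # rollout length per environment before each update
    nminibatches=32,          # minibatches per optimization epoch
    noptepochs=10,            # optimization epochs per update
    lam=0.95,                 # GAE lambda
    gamma=0.99,               # discount factor
    ent_coef=0.0,             # entropy bonus coefficient
    lr=lambda f: 3e-4 * f,    # linearly annealed learning rate
    cliprange=0.2,            # PPO clipping parameter (constant on MuJoCo)
)

# Approximate ppo2 defaults for Atari; note the clip range is also annealed here.
atari_defaults = dict(
    nsteps=128,
    nminibatches=4,
    noptepochs=4,
    lam=0.95,
    gamma=0.99,
    ent_coef=0.01,
    lr=lambda f: 2.5e-4 * f,
    cliprange=lambda f: 0.1 * f,
)
```

With these settings, one MuJoCo update consumes 2048 steps per environment split into 32 minibatches for 10 epochs, while Atari uses much shorter 128-step rollouts with stronger entropy regularization.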