A fork of gym-carla: an OpenAI Gym third-party environment for the CARLA simulator.
- Ubuntu 20.04
- 32+ GB RAM
- NVIDIA RTX 3070 / NVIDIA RTX 3080 / NVIDIA RTX 4090
- Install CARLA 0.9.15 release.
mkdir -p /opt/carla-simulator
cd /opt/carla-simulator
wget https://tiny.carla.org/carla-0-9-15-linux
tar -xvzf carla-0-9-15-linux
rm carla-0-9-15-linux
- Install client library
export PYTHONPATH=$PYTHONPATH:/opt/carla-simulator/PythonAPI/carla/dist/carla-0.9.15-py3.7-linux-x86_64.egg
If you have previously installed the client library with pip, that installation will take precedence over the .egg file; you will need to uninstall the pip-installed library first.
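To confirm which carla module Python will actually pick up, the following check can help. This is a hypothetical helper, not part of the original instructions; it only inspects the import machinery and does not require CARLA to be installed.

```python
# Check which carla client library Python would import, if any.
# Useful to confirm the .egg on PYTHONPATH is not shadowed by a pip install.
import importlib.util

def carla_source():
    """Return the file path of the carla module Python would import, or None."""
    spec = importlib.util.find_spec("carla")
    return getattr(spec, "origin", None) if spec else None

# If PYTHONPATH is set correctly, the printed path should point into the
# .egg under /opt/carla-simulator, not into site-packages.
print(carla_source())
```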
- Setup conda environment
conda create -n env_name python=3.7
conda activate env_name
- Clone this git repo in an appropriate folder
git clone https://github.com/montrealrobotics/gym-carla.git
- Enter the repo root folder and install the packages:
cd gym-carla
pip install -r requirements.txt
pip install -e .
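After installation, creating the environment follows the usual gym-carla pattern. This is a sketch only: the exact keys accepted in params depend on this fork's carla_env.py, so the values below are placeholders, not the fork's documented configuration.

```python
# Hypothetical usage sketch; treat every key below as a placeholder to be
# checked against the params expected by this fork's carla_env.py.
params = {
    'port': 4000,              # must match -carla-rpc-port used to launch the server
    'town': 'Town03',          # CARLA map to load (assumed available)
    'dt': 0.1,                 # simulation timestep, matching -fps=10
    'max_time_episode': 1000,  # episode length cap in timesteps
}

def make_env():
    # Deferred imports: requires the carla .egg on PYTHONPATH and a running server.
    import gym
    import gym_carla  # noqa: F401  (importing registers the 'carla-v0' environment)
    return gym.make('carla-v0', params=params)
```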
bash /opt/carla-simulator/CarlaUE4.sh -fps=10 -quality-level=Epic -carla-rpc-port=4000 -RenderOffScreen
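Once the server is running, a quick connectivity check from the client side can look like this sketch. It assumes the server was launched with -carla-rpc-port=4000 as above and that the carla client library is importable; the function name is illustrative.

```python
# Hypothetical connectivity check for a CARLA server on port 4000.
def check_server(host="localhost", port=4000, timeout=5.0):
    """Return the CARLA server version string, or raise if unreachable."""
    import carla  # provided by the .egg on PYTHONPATH
    client = carla.Client(host, port)
    client.set_timeout(timeout)  # seconds to wait before giving up
    return client.get_server_version()
```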
Follow instructions in the README here
- We provide a dictionary observation that includes a bird's-eye-view semantic representation (obs['birdeye']), generated with a customized fork of the carla-birdeye-view repository.
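A minimal sketch of consuming the birdeye observation, assuming it arrives as an HxWx3 uint8 RGB array (the actual shape depends on the environment configuration; the helper name and dummy observation are illustrative):

```python
import numpy as np

def birdeye_to_float(obs):
    """Normalize the obs['birdeye'] image to [0, 1] floats, e.g. for a neural net."""
    birdeye = np.asarray(obs['birdeye'], dtype=np.float32)
    return birdeye / 255.0

# Stand-in observation with a dummy 256x256 RGB birdeye image:
obs = {'birdeye': np.zeros((256, 256, 3), dtype=np.uint8)}
x = birdeye_to_float(obs)
```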
- The termination condition is triggered when the ego vehicle collides, runs out of lane, reaches the destination, or reaches the maximum number of episode timesteps. Users may modify the function _terminal in carla_env.py to customize the termination condition.
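A custom check in the style of _terminal might look like the sketch below. The state fields and thresholds are illustrative stand-ins, not the environment's actual attributes; the real logic lives in carla_env.py.

```python
def is_terminal(collided, dist_to_lane_center, dist_to_goal, timestep,
                out_lane_thres=2.0, goal_thres=4.0, max_time_episode=1000):
    """Illustrative termination check mirroring the four conditions described above."""
    if collided:                                    # ego vehicle collided
        return True
    if abs(dist_to_lane_center) > out_lane_thres:   # ran out of lane
        return True
    if dist_to_goal < goal_thres:                   # reached the destination
        return True
    if timestep >= max_time_episode:                # hit the episode timestep cap
        return True
    return False
```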
- The reward is a weighted combination of longitudinal speed and penalties for collision, exceeding the maximum speed, driving out of lane, large steering, and large lateral acceleration. Users may modify the function _get_reward in carla_env.py to customize the reward function.
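The described structure can be sketched as follows. All weights and the max-speed value here are illustrative assumptions; the actual terms and coefficients are defined in _get_reward.

```python
def compute_reward(speed, collided, out_of_lane, steer, lat_accel,
                   max_speed=8.0,
                   w_speed=1.0, w_collision=200.0, w_overspeed=10.0,
                   w_out=1.0, w_steer=5.0, w_lat=0.2):
    """Weighted combination of longitudinal speed and penalties, as described above.
    All weights are placeholder values, not the fork's actual coefficients."""
    r = w_speed * speed                             # reward forward progress
    r -= w_collision * float(collided)              # collision penalty
    r -= w_overspeed * max(0.0, speed - max_speed)  # exceeding maximum speed
    r -= w_out * float(out_of_lane)                 # out-of-lane penalty
    r -= w_steer * steer ** 2                       # large steering penalty
    r -= w_lat * abs(lat_accel)                     # large lateral acceleration
    return r
```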