Thank you for investing your time in contributing to GPUDrive! 🚗✨ We want to make contributing to this project as easy and transparent as possible, whether it's:
- Reporting a bug
- Discussing the current state of the code
- Submitting a fix
- Proposing new features
- Becoming a maintainer
We use GitHub Flow, so all code changes happen through pull requests
Pull requests are the best way to propose changes to the codebase. We actively welcome your pull requests:
- Fork the repo and create your branch from `main`.
- If you've added code that should be tested, add tests.
- If you've changed APIs, update the documentation.
- Ensure the test suite passes.
- Make sure your code lints.
- Issue that pull request!
Report bugs 🐛 using GitHub's issues
We use GitHub issues to track public bugs. Report a bug by opening a new issue; it's that easy!
Here's an example bug report you can use as a model, and here is a useful template.
Great Bug Reports tend to have:
- A quick summary and/or background
- Steps to reproduce
  - Be specific!
  - Give sample code if you can. This Stack Overflow question includes sample code that anyone with a base R setup can run to reproduce what I was seeing.
- What you expected would happen
- What actually happens
- Notes (possibly including why you think this might be happening, or stuff you tried that didn't work)
People love thorough bug reports. I'm not even kidding.
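To make "steps to reproduce" concrete, here is a minimal, hypothetical skeleton for a reproduction script you could attach to an issue. Everything in it is a placeholder; replace the body with the smallest snippet that triggers the problem you are seeing.

```python
# Hypothetical skeleton of a minimal reproduction script for a bug report.
# Replace the placeholder body with the smallest code that triggers your bug.
def reproduce():
    # 1. Setup: construct only the objects needed to trigger the problem.
    data = list(range(5))

    # 2. Action: the call that misbehaves.
    result = sum(data)

    # 3. Expectation vs. reality: state both explicitly so others can verify.
    expected = 10
    assert result == expected, f"expected {expected}, got {result}"


if __name__ == "__main__":
    reproduce()
    print("Reproduction ran without errors.")
```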
If you've made some changes and want to make sure learning still works as intended, follow these steps:
- Step 1: Make sure you have a wandb account.
- Step 2: Run the command below out of the box; the only thing you might want to change is the `device` (see the sketch below this list; if you encounter problems, please report the 🐛!):
python baselines/ippo/ippo_sb3.py
This should kick off a run that takes about 15-20 minutes to complete on a single GPU. We're using Independent PPO (IPPO) to train a number of agents distributed across 3 traffic scenarios. For an example of what a "healthy" run looks like, I ran the script above with these exact settings in `baselines/ippo/config.py` on 08/19/2024 and created a wandb report with complete logs and videos.
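As a minimal sketch of the one thing you may want to change, here is one way to decide which device to train on before kicking off the run. The `device` field name follows the text above; check `baselines/ippo/config.py` in your checkout for the exact attribute it exposes, since this is an assumption rather than confirmed API.

```python
# Minimal sketch (not GPUDrive's confirmed API): pick the training device
# before launching the run. Check baselines/ippo/config.py for the exact
# field name in your checkout.
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
print(f"Training will run on: {device}")

# To force CPU training (e.g. when no GPU is available), set it explicitly:
# device = "cpu"
```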
🗂️ Running your test with more scenarios
Sometimes 3 scenarios is not enough to test your code. If you want to run your test with more scenarios:
- Download the dataset (see README)
- Update `selection_discipline = SelectionDiscipline.K_UNIQUE_N` in `baselines/ippo/config/ippo_ff_sb3.yaml`
For example, to use 10 different scenarios, we can run:
python baselines/ippo/ippo_sb3.py --data_dir='<your_data_path>' --render_n_worlds=10 --k_unique_scenes=10 --total_timesteps=15_000_000
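As a rough illustration of what `K_UNIQUE_N` selection means, the sketch below samples k unique scenario files from a dataset directory. The function name, `*.json` file pattern, and seed are assumptions for illustration; GPUDrive's actual selection logic lives in its config and data-loading code.

```python
# Illustrative sketch of K_UNIQUE_N-style selection: sample k distinct
# scenario files from the dataset directory. Names and file pattern are
# assumptions; this is not GPUDrive's actual loading code.
import random
from pathlib import Path


def pick_k_unique_scenes(data_dir: str, k: int, seed: int = 42) -> list[Path]:
    scenes = sorted(Path(data_dir).glob("*.json"))
    if len(scenes) < k:
        raise ValueError(f"Requested {k} scenes but found only {len(scenes)} in {data_dir}")
    return random.Random(seed).sample(scenes, k)


# Example: mirror the command above by selecting 10 unique scenarios.
# scenes = pick_k_unique_scenes("<your_data_path>", k=10)
```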
🔎 Check out the wandb report here
If you suspect that something might be broken, or are just looking for a good sanity check, compare your metrics with the runs in the report above. Do they all look similar? Then everything is likely working fine. If a metric looks off, give your code another look. Are your agents learning better or faster? That's interesting; let us know why!
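One way to do this comparison programmatically is through the public wandb API. The entity/project/run path and the metric key below are placeholders; substitute the names from your own workspace.

```python
# Sketch: pull one logged metric from your own wandb run so you can compare
# it against the reference report. Path and metric key are placeholders.
import wandb

api = wandb.Api()
run = api.run("<your_entity>/<your_project>/<your_run_id>")

# Fetch the logged history for a single metric and inspect the last values.
history = run.history(keys=["rollout/ep_rew_mean"])
print(history.tail())
```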
By contributing, you agree that your contributions will be licensed under the project's MIT License.
This document was adapted from the open-source contribution guidelines for Facebook's Draft and from Transcriptase's adapted version.