Documentation for Different Environments #44

Hey,
I was planning to explore using a handful of these environments as part of my research. However, unless I'm missing something, there are no explanations or visuals of the mechanics or behaviors of the different environments/maps. Is that the case, and if so, would you be willing to take an hour to add them to the readme or something? It'd be super helpful for those potentially interested in your environments.
Hi, I'm glad that you are interested in using sumo-rl!

"Maybe describe the default definition of states and rewards?"

I just updated the readme with the basic definitions, but I plan to add more details later!
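For quick reference, the default reward described there is the change in cumulative vehicle waiting time between steps. A minimal sketch of that idea (the method and attribute names here are illustrative, not sumo-rl's exact internals):

```python
# Sketch of a "difference in waiting time" reward, as described in the readme.
# ts is assumed to expose the total accumulated waiting time of vehicles on
# the intersection's incoming lanes; these names are hypothetical.
def diff_waiting_time_reward(ts):
    current_wait = ts.get_accumulated_waiting_time()
    # Positive reward when total waiting time decreased since the last step.
    reward = ts.last_wait - current_wait
    ts.last_wait = current_wait
    return reward
```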
Hey, I just sat down and looked at this. I'm a user who's fairly experienced in RL (and I wanted to use these environments as part of a set of many to test a general MARL algorithm I've been working on), but I'm not very experienced with traffic control/SUMO, so I have a few questions after reading:
- What does
Hey, I believe I have answered these questions in this commit f0b387f. (Also fixed the dead links.) Regarding the reward function, there is not really a standard in the literature.
I have seen many papers using pressure as the reward (but I didn't get better results with it).
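For context, the pressure of an intersection is usually defined as the number of vehicles on its incoming lanes minus the number on its outgoing lanes. A minimal sketch of a pressure-style reward using raw TraCI calls (the lane lists and the function name pressure_reward are illustrative, not sumo-rl's built-in API):

```python
import traci

def pressure_reward(incoming_lanes, outgoing_lanes):
    # Pressure = vehicles on incoming lanes minus vehicles on outgoing lanes;
    # negating it rewards the agent for relieving the intersection.
    pressure = (
        sum(traci.lane.getLastStepVehicleNumber(l) for l in incoming_lanes)
        - sum(traci.lane.getLastStepVehicleNumber(l) for l in outgoing_lanes)
    )
    return -pressure
```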
Hey, thanks a ton for that! A few more questions:
Oops, that "Obs:" means "P.S." :P It means that when your action changes the phase, the env sets a yellow phase before actually setting the phase selected by the agent's action.
The nomenclature for traffic signal control can be a bit confusing. By green phase I mean a phase configuration containing green (permissive) movements. The 4 actions in the readme are examples of 4 green phases.
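To make the transition behavior concrete, here is a rough, simplified sketch of the yellow-phase insertion described above (class, method, and attribute names are hypothetical; sumo-rl's real TrafficSignal is more involved):

```python
# Hypothetical sketch: when the requested green phase differs from the current
# one, a yellow interval runs first, and the new green is applied afterwards.
class TrafficSignalSketch:
    def __init__(self, yellow_time: int = 2):
        self.yellow_time = yellow_time
        self.current_phase = 0      # index of the active green phase
        self.yellow_remaining = 0   # steps of yellow left before switching
        self.pending_phase = 0      # green phase to apply after yellow

    def set_next_phase(self, new_phase: int) -> None:
        if new_phase == self.current_phase:
            return                  # same action: keep the current green
        # Bridge the two green phases with a yellow interval first.
        self.yellow_remaining = self.yellow_time
        self.pending_phase = new_phase

    def step(self) -> None:
        if self.yellow_remaining > 0:
            self.yellow_remaining -= 1
            if self.yellow_remaining == 0:
                self.current_phase = self.pending_phase  # apply the green
```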
Sure! I also intend to add more networks to the repository.
Hello, I am really new to SUMO, but is there a way to deploy a trained agent with sumo-gui? I was able to run the example experiments/ql_2way-single-intersection.py and plot the results. Then I tried to run "python experiments/ql_2way-single-intersection.py -gui", which provided visualizations in sumo-gui, but the terminal window wasn't updating the step number (it usually increments to 100,000), so I'm not sure if this is actually training and visualizing at the same time. In summary, I would like to know if I can save the trained agent, deploy it in an environment, and visualize it in sumo-gui. Also, when I use the "-gui" argument, is the agent still training as it normally would if I ran "python experiments/ql_2way-single-intersection.py", just without updating the step number? I really appreciate your contributions, thank you!
Hi! Using -gui only activates the SUMO GUI; it has no effect on the training procedure.
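On saving and replaying a trained agent: the tabular Q-values can simply be serialized and later restored into an env created with the GUI enabled. A rough sketch, assuming agents shaped like those in the ql_2way-single-intersection.py example (the ql_agents dict and the q_table attribute are assumptions, not a verified API):

```python
import pickle

def save_q_tables(ql_agents: dict, path: str = "ql_agents.pkl") -> None:
    """Persist each agent's Q-table; assumes agents expose a q_table
    attribute, keyed by traffic-signal id (an assumption from the example)."""
    with open(path, "wb") as f:
        pickle.dump({ts: agent.q_table for ts, agent in ql_agents.items()}, f)

def load_q_tables(path: str = "ql_agents.pkl") -> dict:
    with open(path, "rb") as f:
        return pickle.load(f)

# For replay: create the env with the GUI enabled, restore each agent's
# q_table, and pick actions greedily (max Q) with exploration disabled.
```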
@jkterry1 I just added network and route files from RESCO (check the readme). Basically, RESCO is a set of benchmarks for traffic signal control that was built on top of SUMO-RL. In their paper you can find results for different algorithms. |
Hey, it's been a week, so I'm just following up on this :)
Hey, I have just added an API to instantiate a few environments in https://github.com/LucasAlegre/sumo-rl/blob/master/sumo_rl/environment/resco_envs.py!
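A hedged usage sketch (the grid4x4 constructor name follows the RESCO scenario naming, and the loop assumes PettingZoo's parallel API; both are assumptions about this module, not verified signatures):

```python
import sumo_rl

# Instantiate one of the RESCO benchmark scenarios; grid4x4 and its use_gui
# keyword are assumed from the RESCO naming, not a confirmed signature.
env = sumo_rl.grid4x4(use_gui=False)
obs = env.reset()

# Drive the env with random actions, assuming PettingZoo's (older) parallel
# API; newer versions return terminations/truncations separately.
while env.agents:
    actions = {agent: env.action_space(agent).sample() for agent in env.agents}
    obs, rewards, dones, infos = env.step(actions)
env.close()
```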