Issues with multi-agent settings #597
> As far as I can see, it is necessary to implement a separate multi-agent version of the single agent […]
Hey, when trying to run my highway script in a multi-agent setting, I run into this error: `AssertionError: The algorithm only supports (<class 'gymnasium.spaces.discrete.Discrete'>,) as action spaces but Tuple(Discrete(5), Discrete(5)) was provided`. Did you encounter the same error too? How did you solve the issue?
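For context, that `Tuple(Discrete(5), Discrete(5))` action space is what highway-env produces under its documented multi-agent configuration. A minimal sketch to reproduce it (the config keys follow the highway-env multi-agent docs):

```python
import gymnasium as gym
import highway_env  # noqa: F401  (importing registers highway-v0)

env = gym.make("highway-v0")
env.unwrapped.configure({
    "controlled_vehicles": 2,
    "action": {
        "type": "MultiAgentAction",
        "action_config": {"type": "DiscreteMetaAction"},
    },
    "observation": {
        "type": "MultiAgentObservation",
        "observation_config": {"type": "Kinematics"},
    },
})
env.reset()
print(env.action_space)  # Tuple(Discrete(5), Discrete(5))
```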
This looks like a separate issue. You should check the algorithm that you are using. The RL algorithm (from Stable-Baselines3) that you are using seems to support only a single agent. Either modify the algorithm for multi-agent settings, or use the multi-agent versions of the RL algorithms available in Ray RLlib or other alternatives to train multiple agents.
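If you want to stay with Stable-Baselines3, one lightweight workaround (a sketch only, not an official API, and it assumes you are content training one joint policy rather than independent agents) is to flatten the `Tuple` of `Discrete` actions into a single `Discrete` space that SB3 accepts:

```python
import gymnasium as gym
import numpy as np

class FlattenMultiAgentAction(gym.ActionWrapper):
    """Map Tuple(Discrete(n1), ..., Discrete(nk)) to Discrete(n1 * ... * nk)
    so that single-agent algorithms such as those in Stable-Baselines3
    accept the environment. This trains one joint policy over all agents,
    not k independent policies."""

    def __init__(self, env):
        super().__init__(env)
        self.sizes = [space.n for space in env.action_space.spaces]
        self.action_space = gym.spaces.Discrete(int(np.prod(self.sizes)))

    def action(self, action):
        # Decode the flat joint index back into one discrete action per agent.
        actions = []
        for n in self.sizes:
            actions.append(int(action % n))
            action //= n
        return tuple(actions)
```

Note that in multi-agent mode the observation space is also a `Tuple`, which SB3 likewise rejects, so the observation side would need a similar flattening or concatenation wrapper.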
Dear author, I am implementing the multi-agent setting using highway-v0. I am not able to achieve stable training, and the vehicles can run off the road without terminating the environment. I took a look at the code: in the reward function
HighwayEnv/highway_env/envs/highway_env.py
Lines 117 to 135 in 7415379
and the termination function
HighwayEnv/highway_env/envs/highway_env.py
Lines 136 to 142 in 7415379
it seems only `self.vehicle` is considered instead of `self.controlled_vehicles`. Any thoughts would be appreciated.
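One possible pattern, sketched below, is to subclass `HighwayEnv` and aggregate over `self.controlled_vehicles`. This is not the maintainers' fix: the aggregation scheme (mean reward, terminate when any agent crashes or leaves the road) is an assumption, the per-vehicle reward is a simplified version of the one linked above, and the hook names (`_reward`, `_is_terminated`) follow recent highway-env versions (older releases use `_is_terminal`).

```python
import numpy as np
from highway_env import utils
from highway_env.envs.highway_env import HighwayEnv

class MultiAgentHighwayEnv(HighwayEnv):
    """Sketch: make reward and termination depend on all controlled vehicles."""

    def _agent_reward(self, vehicle):
        # Simplified per-vehicle reward in the spirit of HighwayEnv._reward:
        # penalize collisions, reward driving within the configured speed range.
        scaled_speed = utils.lmap(
            vehicle.speed, self.config["reward_speed_range"], [0, 1]
        )
        return (self.config["collision_reward"] * vehicle.crashed
                + self.config["high_speed_reward"] * np.clip(scaled_speed, 0, 1))

    def _reward(self, action):
        # Mean over agents; summing, or returning a per-agent tuple, are
        # equally valid design choices depending on the training setup.
        return float(np.mean(
            [self._agent_reward(v) for v in self.controlled_vehicles]
        ))

    def _is_terminated(self):
        # End the episode as soon as any controlled vehicle crashes or
        # leaves the road, instead of checking only self.vehicle.
        return any(v.crashed or not v.on_road for v in self.controlled_vehicles)
```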