[Feat] Adding GLOP model #182
base: main
Conversation
Co-authored-by: Furffico <[email protected]>
After some experiments with the original GLOP implementation, I found that none of the current discrepancies (maximum-number-of-vehicles constraint; polar coordinate embedding; sparsifying the input graph; ...) should be the reason for the training failure on CVRP100. It's weird @Furffico
Could you push the latest version of GLOP?
We were experimenting with the debug version, which borrows part of the code from the official GLOP implementation. The "pure RL4CO" version still does not learn and requires further debugging.
I see. Given that it's still in RL4CO (just not fully refactored), I'd suggest merging it now, and when the pure RL4CO version is ready, that will be merged. What do you think? Cc: @cbhua
GLOP worked in the submission version, so we will clean up this branch and then push a clean final implementation of it.
Description
This PR adds the implementation of Global and Local Optimization Policies (GLOP), together with the implementation of Shortest Hamiltonian Path Problem (SHPP) environment.
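To make the SHPP concrete: unlike the TSP, the SHPP asks for a minimum-length path that visits every node exactly once between a *fixed* start node and a *fixed* end node, with no return edge. Below is a small illustrative brute-force reference solver (function name and signature are hypothetical, not part of this PR's environment code):

```python
# Hypothetical brute-force SHPP reference solver, for illustration only.
import itertools
import math

def shpp_bruteforce(coords, start=0, end=-1):
    """Return (best_path, best_length) over all Hamiltonian paths
    from node `start` to node `end` through the remaining nodes."""
    n = len(coords)
    end = end % n  # allow negative indexing, e.g. end=-1 for the last node
    middle = [i for i in range(n) if i not in (start, end)]

    def path_length(path):
        # Sum of Euclidean edge lengths along the (open) path.
        return sum(math.dist(coords[a], coords[b])
                   for a, b in zip(path, path[1:]))

    best_path, best_len = None, float("inf")
    # Enumerate all orderings of the interior nodes; endpoints stay fixed.
    for perm in itertools.permutations(middle):
        path = [start, *perm, end]
        length = path_length(path)
        if length < best_len:
            best_path, best_len = path, length
    return best_path, best_len
```

Such an exhaustive solver is only feasible for tiny instances, but it can serve as a ground-truth check when testing a learned SHPP policy on small graphs.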
Motivation and Context
GLOP is an important non-autoregressive (NAR) model for routing problems. For more details, please refer to the original paper.
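The core idea behind GLOP is divide-and-conquer: a global policy produces a coarse solution and partitions it into small subproblems, and a local policy re-solves each subproblem (an SHPP, since segment endpoints are fixed). The sketch below illustrates this revision loop with stand-ins for both policies (fixed-size chunking for the global step, brute-force SHPP for the local step); all names here are illustrative, not the PR's API:

```python
# Illustrative sketch of GLOP-style segment revision, not the PR's code.
import itertools
import math

def seg_len(coords, path):
    # Total Euclidean length of an open path.
    return sum(math.dist(coords[a], coords[b]) for a, b in zip(path, path[1:]))

def improve_segment(coords, seg):
    # Local step: brute-force SHPP over the segment's interior nodes,
    # keeping the first and last nodes fixed (a learned local policy
    # would replace this in GLOP).
    first, *mid, last = seg
    return min(
        ([first, *perm, last] for perm in itertools.permutations(mid)),
        key=lambda p: seg_len(coords, p),
    )

def glop_style_revision(coords, tour, seg_size=4):
    # Global step stand-in: split the tour into fixed-size segments that
    # share endpoints, then re-optimize each segment locally.
    out = [tour[0]]
    i = 0
    while i < len(tour) - 1:
        seg = tour[i : i + seg_size]
        if len(seg) >= 3:  # segments with no interior node cannot improve
            seg = improve_segment(coords, seg)
        out.extend(seg[1:])  # first node of seg is already in `out`
        i += len(seg) - 1
    return out
```

Because consecutive segments share an endpoint, each local SHPP solve can only shorten the overall path, never lengthen it.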
Types of changes
Checklist
The current implementation of GLOP is runnable, but it cannot learn.
I added a test notebook at examples/other/3-glop.ipynb. This notebook includes a test of the SHPP environment, a greedy rollout of the untrained GLOP policy (with visualizations for better understanding), and a training launch for GLOP. Please play with it and have a look. Compared with the original GLOP, the following components are not implemented yet:
I will add these missing parts soon. Here are some possible ideas that may help reproduce the results:
If @henry-yeh @Furffico have time, could you take a look at the implementation? We need to reproduce GLOP's results soon.