Low accuracy #19
Comments
You need to tune the hyper-parameters. The defaults are just one configuration used in the paper. See the paper for the details.
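For reference, a minimal sketch of such a sweep, assuming the repository's training script is `main.py` and exposes flags like `--dataset`, `--hidden_dim`, `--batch_size`, `--final_dropout`, and `--fold_idx` (these names are assumptions; check `python main.py --help` for the real interface). The hidden-unit range {16, 32} comes from the comment below; the batch-size and dropout grids are illustrative values only:

```python
# Hypothetical grid search over GIN hyper-parameters via the training script.
# Flag names and the batch/dropout ranges are assumptions, not the repo's actual defaults.
import itertools
import subprocess

grid = {
    "hidden_dim": [16, 32],       # from the thread below
    "batch_size": [32, 128],      # example values
    "final_dropout": [0.0, 0.5],  # example values
}

for hidden, batch, dropout in itertools.product(*grid.values()):
    for fold in range(10):  # 10-fold cross-validation
        subprocess.run([
            "python", "main.py",
            "--dataset", "MUTAG",
            "--hidden_dim", str(hidden),
            "--batch_size", str(batch),
            "--final_dropout", str(dropout),
            "--fold_idx", str(fold),
        ], check=True)
```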
I just suggest everyone stop using these old datasets! The Open Graph Benchmark (https://ogb.stanford.edu/) offers much better datasets, where GIN and more advanced models have been extensively benchmarked.
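For anyone switching over, a minimal sketch of loading an OGB graph-classification dataset with the `ogb` package (here `ogbg-molhiv` is used purely as an example name; requires `ogb` and `torch_geometric` to be installed):

```python
# Sketch: load an OGB graph property prediction dataset with its official split and evaluator.
from ogb.graphproppred import PygGraphPropPredDataset, Evaluator
from torch_geometric.loader import DataLoader

dataset = PygGraphPropPredDataset(name="ogbg-molhiv")  # example dataset name
split_idx = dataset.get_idx_split()                    # official train/valid/test split

train_loader = DataLoader(dataset[split_idx["train"]], batch_size=32, shuffle=True)
evaluator = Evaluator(name="ogbg-molhiv")              # standardized metric (ROC-AUC here)
```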
Can you provide the hyper-parameters used in the paper for these datasets? These datasets don't exist in OGB.
I did not record them :(
The hyper-parameters we tune for each dataset are: (1) the number of hidden units ∈ {16, 32} […]. For social networks we create node features as follows: for the REDDIT datasets, we set all node feature vectors to be the same (thus, features here are uninformative); for the other social graphs, we use one-hot encodings of node degrees […].
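To illustrate the node-feature scheme described above, a minimal sketch using `networkx` and `torch`; the function name and the way the degree range is capped are my own choices for illustration, not taken from the repository:

```python
# Sketch of the two feature schemes quoted above: constant (uninformative) features
# for the REDDIT datasets, one-hot degree encodings for the other social graphs.
import networkx as nx
import torch

def build_node_features(g: nx.Graph, scheme: str, max_degree: int) -> torch.Tensor:
    if scheme == "constant":
        # REDDIT: every node gets the same feature vector
        return torch.ones(g.number_of_nodes(), 1)
    elif scheme == "degree_onehot":
        # other social graphs: one-hot encoding of each node's degree
        degrees = torch.tensor([d for _, d in g.degree()])
        return torch.nn.functional.one_hot(degrees, num_classes=max_degree + 1).float()
    raise ValueError(f"unknown scheme: {scheme}")

# Usage: one-hot degree features for a small synthetic example graph
g = nx.barabasi_albert_graph(n=100, m=3)
x = build_node_features(g, "degree_onehot", max_degree=max(d for _, d in g.degree()))
```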
Thanks a lot! I will try it.
Hi,
I am using the published code for this paper, but I can't reproduce the results. For example, on MUTAG the test accuracy is very low (about 70 percent) while the training accuracy is close to 1, so I think the model is overfitting. Have you encountered this before, and how can I fix it?