use xtal2png with imagen-pytorch and matbench-genmetrics #204
Comments
do they decode to some reasonable materials? :D
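(For anyone who wants to check this quickly, here's a minimal round-trip sketch; it assumes the `XtalConverter` API as shown in the xtal2png README, so treat the exact method names and signatures as my reading of that README rather than verified against this version.)

```python
# Minimal encode/decode round trip with xtal2png (API per the README;
# names here are my assumption if the interface has since changed).
from pymatgen.core import Lattice, Structure
from xtal2png.core import XtalConverter

# Toy rock-salt-like structure purely to exercise the round trip.
structure = Structure(
    Lattice.cubic(4.2),
    ["Na", "Cl"],
    [[0.0, 0.0, 0.0], [0.5, 0.5, 0.5]],
)

xc = XtalConverter()
images = xc.xtal2png([structure], save=False)  # list of PIL images
decoded = xc.png2xtal(images, save=False)      # list of pymatgen Structures

# Sanity check: compare lattice parameters before and after the round trip.
print(structure.lattice.abc, decoded[0].lattice.abc)
```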
While I'm sure there's a lot to be done with the hyperparameters, I think I'll take another shot at running CDVAE for comparison.
I am seeing coverage of 0; is that not concerning? I have tried myself and got coverage of 0 with different variations of DDPM.
@HarshaSatyavardhan thanks for the great question. Concerning - yes, though rediscovery is a genuinely difficult task. To make the point, see what the authors of PGCGM needed to do before "moving the needle" past 0 in https://www.nature.com/articles/s41524-023-01059-8: notice how the first bar along the horizontal axis starts at 50 × 10,000 = 500,000 generated structures. Also, the coverage benchmark from matbench-genmetrics is even harder than what PGCGM attempted, because it uses time-based splits (i.e., not just "can we discover something that was held out," but "can we discover something from the future" using only training data from before some calendar year).
Thank you for sharing this. Open to thoughts or suggestions you have. I think both xtal2png and the benchmarks themselves could be improved. Aside: matbench-genmetrics is under review at openjournals/joss-reviews#5618.
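For reference, here's roughly how that time-split coverage number gets computed, as I understand the matbench-genmetrics README (the import path and class/method names are my reading of the README, so treat them as assumptions if the API has moved):

```python
# Sketch of the matbench-genmetrics evaluation loop (per my reading of the
# README; dummy=True swaps in a small debug dataset so this runs quickly).
from matbench_genmetrics.core import MPTSMetrics10

mptm = MPTSMetrics10(dummy=True)
for fold in mptm.folds:
    train_structures = mptm.get_train_and_val_data(fold)

    # Placeholder "generator": just recycle training structures. A real run
    # would fit a model (e.g., a DDPM) on train_structures and sample here.
    gen_structures = train_structures[:10]

    mptm.evaluate_and_record(fold, gen_structures)

# Coverage is the match rate against structures held out by the time split,
# which is why 0 is common: you have to rediscover "future" materials.
print(mptm.recorded_metrics)
```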
matbench-genmetrics is in a usable state now (#12 (comment)). I think imagen-pytorch can be used with a TPU, but I'm not sure how much custom configuration is required: https://github.com/sparks-baird/xtal2png/blob/main/notebooks/3.1-imagen-pytorch.ipynb. I might just need to try it on Colab, switch to TPU, and see what happens. I think the latest version uses the 🤗 Accelerate library, which should make it easier to switch over. I'm unsure whether I should focus more on hyperparameter tuning or just pick some reasonable defaults and train for as long as seems reasonable (a week or two, for example). If I go with my university HPC instead of TPU time, I can still do checkpointing in either case; a sketch of what that might look like follows below.
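For concreteness, here's a rough unconditional-training sketch adapted from the imagen-pytorch README; the dims/timesteps are placeholders rather than tuned hyperparameters, and the save/load calls are the checkpointing I have in mind (ImagenTrainer wraps 🤗 Accelerate under the hood, which is also what should ease the TPU switch):

```python
# Unconditional imagen-pytorch training with checkpointing (adapted from the
# README; dims/timesteps below are placeholders, not tuned hyperparameters).
import torch
from imagen_pytorch import Unet, Imagen, ImagenTrainer

unet = Unet(
    dim=32,
    dim_mults=(1, 2, 4),
    num_resnet_blocks=1,
    layer_attns=(False, False, True),
    layer_cross_attns=False,
)

imagen = Imagen(
    condition_on_text=False,  # xtal2png images carry no text conditioning
    unets=unet,
    image_sizes=64,
    timesteps=1000,
)

trainer = ImagenTrainer(imagen)  # wraps 🤗 Accelerate under the hood

# Stand-in batch; in practice these would be xtal2png-encoded crystals.
training_images = torch.rand(4, 3, 64, 64)

loss = trainer(training_images, unet_number=1)
trainer.update(unet_number=1)

trainer.save("./imagen-checkpoint.pt")  # periodic checkpointing on HPC
trainer.load("./imagen-checkpoint.pt")  # resume after preemption/restart
```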