As Neural-LAM grows and we make changes to it, there is an increasing need for documentation. The README is already quite long, and some things are not easy to find in it. I propose that we set up a more robust and structured documentation solution. The idea of this issue is to start a discussion about how to do that.
When thinking about documentation for different open source Python projects, one that I really like is the PyTorch Geometric documentation. It uses Read the Docs + Sphinx. In particular, you can create documentation that contains both:

1. Manually written tutorials, quickstart guides, installation instructions, etc.
2. A full API reference for all classes and functions in the codebase (descriptions + arguments + return values), generated directly from the docstrings in the Python code.
I think having both of these is desirable, and that we could set up something like this for Neural-LAM.
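To sketch what that could look like (just an assumption of how we might set it up, not a final configuration, and assuming the package is importable as `neural_lam`), the Sphinx configuration only needs a few extensions enabled to pull the API reference from docstrings:

```python
# docs/conf.py -- minimal sketch, not a final configuration
project = "Neural-LAM"

extensions = [
    "sphinx.ext.autodoc",      # generate API pages from docstrings
    "sphinx.ext.autosummary",  # create per-module summary tables and stub pages
    "sphinx.ext.napoleon",     # parse Google/NumPy style docstrings
]

# Automatically create stub pages for everything listed in autosummary directives
autosummary_generate = True

# Read the Docs theme, if we end up hosting there
html_theme = "sphinx_rtd_theme"
```

The manually written tutorials and installation instructions could then just be ordinary pages (reST, or Markdown via MyST) living alongside the generated reference.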
I know that in weather-model-graphs you have these nice Jupyter Book docs @leifdenby. Is there a way to do these things there as well? In particular, is it possible to do the automatic reference generation from docstrings?
> When thinking about documentation for different open source Python projects, one that I really like is the PyTorch Geometric documentation. It uses Read the Docs + Sphinx. In particular, you can create documentation that contains both:
>
> 1. Manually written tutorials, quickstart guides, installation instructions, etc.
> 2. A full API reference for all classes and functions in the codebase (descriptions + arguments + return values), generated directly from the docstrings in the Python code.
Agreed! This sounds great @joeloskarsson. In particular, I had thought that the steps around preparing datasets would be a nice thing to detail. It would also be good to describe the overall code structure, so people know where to go to change things.
> I know that in weather-model-graphs you have these nice Jupyter Book docs @leifdenby. Is there a way to do these things there as well? In particular, is it possible to do the automatic reference generation from docstrings?
Thanks :) Jupyter Book actually builds on top of Sphinx, so it might be possible to combine the use of notebooks with API docs generated automatically from the code with Sphinx. I can look into that if that would be helpful?
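As a first rough sketch (untested, so take it as an assumption of how this would be wired up), the Sphinx extensions can be enabled directly from the Jupyter Book `_config.yml`:

```yaml
# _config.yml -- rough sketch of enabling docstring-based API docs in Jupyter Book
sphinx:
  extra_extensions:
    - sphinx.ext.autodoc      # pull documentation from docstrings
    - sphinx.ext.autosummary  # generate per-module reference pages
    - sphinx.ext.napoleon     # parse Google/NumPy style docstrings
  config:
    autosummary_generate: true
```

Since Jupyter Book just forwards this to Sphinx, the notebooks and the generated API reference should be able to live in the same book.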
Feel free to look into it @leifdenby, and post your findings here. I plan to also dig deeper into this, to learn a bit more about how the reference generation from docstrings actually works.
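For reference, my rough understanding of how it works: autodoc imports the package and renders the docstring of each documented object, so a NumPy-style docstring like the one below (a purely hypothetical function, not actual Neural-LAM code) would be turned into a formatted API entry with parameter and return descriptions:

```python
# Hypothetical example function -- only to illustrate the NumPy-style docstring
# format that sphinx.ext.napoleon + autodoc turn into API reference entries.
def interpolate_to_grid(field, grid_coords):
    """Interpolate a forecast field onto a target grid.

    Parameters
    ----------
    field : numpy.ndarray
        Values at the source mesh nodes, shape (num_nodes,).
    grid_coords : numpy.ndarray
        Coordinates of the target grid points, shape (num_points, 2).

    Returns
    -------
    numpy.ndarray
        Interpolated values at the target grid points, shape (num_points,).
    """
    raise NotImplementedError
```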