Weak form of a PDE #174
Comments
Hey @LoveFrootLoops, thank you for the suggestion. Together with the maintainers we are thinking of including variational losses (weak formulation), but we are not yet sure what the best way to do it is. In the meantime, consider that we made a |
Hey @LoveFrootLoops, did you manage to work it out? 😊 |
Hello again @LoveFrootLoops, in the end did you manage to work it out? It would be fantastic to have a tutorial on it! We can chat about it, let me know :) |
Hi @dario-coscia, I haven’t progressed with the project since I found it challenging to keep up with the constant updates to the packages. Each time I attempted to update to a newer version, I encountered issues that prevented anything from working properly. Consequently, I’ve put a hold on using it until it stabilizes. I would appreciate an update from you regarding the version you’d recommend and whether it’s currently stable. Regarding the implementation, I have been considering a potential solution. Given that the training process is batch-oriented, one possible approach could be to modify the on_epoch_end function to calculate the integral. This would align the calculation with the completion of an epoch rather than on a per-batch basis, which could be a more feasible integration point for the functionality I need. |
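A minimal sketch of that on_epoch_end idea, assuming plain PyTorch Lightning: a hypothetical `IntegralMonitor` callback, a fixed tensor of points covering the whole domain, and a user-supplied `residual_fn` (none of these are PINA API). Here the epoch-end integral is only computed and logged, since the per-batch optimization steps for that epoch have already been taken by the time the hook fires:

```python
import torch
import pytorch_lightning as pl

class IntegralMonitor(pl.Callback):
    """Monte Carlo estimate of the domain integral of the PDE residual at each epoch end."""

    def __init__(self, domain_points, residual_fn, volume=1.0):
        super().__init__()
        self.domain_points = domain_points  # fixed points covering the whole domain
        self.residual_fn = residual_fn      # residual_fn(pl_module, x) -> pointwise residual, shape (N, 1)
        self.volume = volume                # measure |Omega| of the integration domain

    def on_train_epoch_end(self, trainer, pl_module):
        x = self.domain_points.to(pl_module.device)
        with torch.enable_grad():           # PDE residuals usually need autograd for spatial derivatives
            r = self.residual_fn(pl_module, x)
        integral = self.volume * r.mean()   # Monte Carlo estimate of the integral of r over the domain
        pl_module.log("integral_residual", integral.detach())
```

Folding such a term into the optimization itself would need a different hook-up, which is roughly what the discussion below turns to.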
Hi @LoveFrootLoops, we plan to release v0.1 as a stable version this November; I will update you when the release is ready. We made new, more detailed documentation, updated all tutorials and examples, and introduced the Neural Operator learning framework (MIONet, DeepONet, Fourier Neural Operator, and more to come). Also, we plan to set up a mailing list for university users, students, and developers where we will post all the (minor) updates to the package. After the release I will work on providing a tutorial on weak PINNs. After looking in more detail at VPINN and WPINN, the best approach is to define a new solver inheriting from |
@dario-coscia, that sounds impressive. Well done! I do have a query about the loss_phys function. Is it evaluated solely at a batch of collocation points? If so, how would you evaluate an integral over the complete domain? |
It depends on the |
@dario-coscia, I believe there might be an issue, because you're looking to evaluate the local losses at batches of collocation points, whereas the integral losses need to be evaluated over the entire domain. Just as a recommendation, I would distinguish between local and global losses: global losses can be evaluated at each epoch end (on_epoch_end), while local losses should be evaluated per batch, i.e. at each training step within the epoch (training_step). |
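One way to realize that split, sketched with PyTorch Lightning manual optimization and a placeholder first-order residual; `net`, `domain_points`, and the residual itself are illustrative assumptions, not PINA's API:

```python
import torch
import pytorch_lightning as pl

class LocalGlobalPINN(pl.LightningModule):
    """Local (per-batch) residual loss in training_step, global (per-epoch) integral loss at epoch end."""

    def __init__(self, net, domain_points):
        super().__init__()
        self.net = net
        # fixed set of points covering the whole domain, used only for the global term
        self.register_buffer("domain_points", domain_points)
        self.automatic_optimization = False  # we step the optimizer ourselves

    def residual(self, x):
        # placeholder first-order residual; the real PDE residual goes here
        x = x.detach().clone().requires_grad_(True)
        u = self.net(x)
        du = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
        return du.sum(dim=1, keepdim=True)

    def training_step(self, batch, batch_idx):
        opt = self.optimizers()
        opt.zero_grad()
        local_loss = self.residual(batch).pow(2).mean()   # local loss on the current batch
        self.manual_backward(local_loss)
        opt.step()
        self.log("local_loss", local_loss)

    def on_train_epoch_end(self):
        opt = self.optimizers()
        opt.zero_grad()
        with torch.enable_grad():
            # global loss: squared Monte Carlo estimate of the domain integral of the residual
            global_loss = self.residual(self.domain_points).mean().pow(2)
        self.manual_backward(global_loss)
        opt.step()
        self.log("global_loss", global_loss)

    def configure_optimizers(self):
        return torch.optim.Adam(self.net.parameters(), lr=1e-3)
```

The extra optimizer step at epoch end is one possible design; another is simply to add the global term to every batch loss, evaluated on the fixed full-domain points rather than on the batch.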
Hi @LoveFrootLoops, we merged the new version and wrote a completely new documentation. I do not understand what you are saying: if the batch size is None, all points sampled for training are used to evaluate the loss, and in that case 1 batch iteration = 1 epoch iteration. Anyway, would you like to start a draft PR in which we can collaborate on a simple tutorial using VPINN or WPINN? That way I can help you with the problem directly in the code. Let me know |
Hi @LoveFrootLoops 👋🏻 and happy new year! Just checking whether you managed to implement the integral loss function using |
Hi @dario-coscia, Happy New Year! Just wanted to let you know that I got the integral working with torchquad. I'm thinking of setting up a benchmark problem, but things are a bit hectic right now; I should have some time around mid-February to sort that out. About the PyTorch Lightning thing: yeah, it's all about batches and training steps. We'll need to figure out how to handle all the collocation points in each batch with something like a |
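Not necessarily their implementation, but a minimal sketch of how a domain integral of a squared residual can be assembled with torchquad; the tiny network and the sine right-hand side are placeholders:

```python
import torch
from torchquad import MonteCarlo, set_up_backend

set_up_backend("torch", data_type="float32")  # let torchquad work with torch tensors

net = torch.nn.Sequential(torch.nn.Linear(2, 32), torch.nn.Tanh(), torch.nn.Linear(32, 1))
mc = MonteCarlo()

def squared_residual(x):
    # x: (N, 2) sample points drawn by torchquad inside the integration domain
    # placeholder residual u(x) - f(x); the real PDE residual would go here
    f = torch.sin(x[:, 0:1]) * torch.sin(x[:, 1:2])
    return (net(x) - f).pow(2).squeeze(-1)

# differentiable Monte Carlo estimate of the integral of r(x)^2 over the unit square
loss_integral = mc.integrate(
    squared_residual,
    dim=2,
    N=10_000,
    integration_domain=[[0.0, 1.0], [0.0, 1.0]],
)
loss_integral.backward()  # gradients flow back to the network parameters
```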
Wow! That's super that you managed to do it 🚀🚀 Yeah, it would be great to do a tutorial for the integral loss using |
Hi! I'm planning to implement VPINNs in #263 by introducing a variational loss class. Could it be useful? |
Hi! I think @LoveFrootLoops already implemented a sort of variational PINN using a variational loss; maybe we can all work together to introduce variational losses in PINA?
Hi again :)
I'd like to bring up a topic that has been keeping me busy lately in the field of Physics-Informed Neural Networks (PINNs). During my research I've come across a number of papers that discuss how accurate and effective PINNs are, particularly when dealing with the strong form of a Partial Differential Equation (PDE). Interestingly, these papers reveal that in most cases PINNs don't perform well and might even give incorrect results. However, they also highlight that PINNs work really well when it comes to weak forms of PDEs.
Just as an example, take a look at page 6 of Paper1.
Another example can be seen in Paper2, where they had to add an integral formulation of the PDE to the strong form to keep global consistency.
In light of these observations, I would like to propose incorporating an integration method. Such an addition would make it possible to formulate the loss function in weak form, with integration across the entire domain (effectively summing over collocation points). By doing this, you could really enhance the performance of PINNs and bring them up to date with the latest techniques, particularly for accurately solving PDEs with neural networks.
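To make the proposal concrete, here is a minimal sketch of what such a weak-form loss could look like for a 1D Poisson problem, with sine test functions and Monte Carlo sums over the collocation points standing in for the integrals; the names and the specific PDE are illustrative only, not a proposal for PINA's API:

```python
import math
import torch

def weak_form_loss(net, x, f, n_test=5):
    """Variational loss for -u'' = f on (0, 1) with u(0) = u(1) = 0.

    After integration by parts the weak form reads
        int u'(x) v_k'(x) dx - int f(x) v_k(x) dx = 0
    for every test function v_k; here v_k(x) = sin(k*pi*x), and the
    integrals are approximated by averages over the collocation points x.
    """
    x = x.detach().clone().requires_grad_(True)       # (N, 1) collocation points in (0, 1)
    u = net(x)
    du = torch.autograd.grad(u.sum(), x, create_graph=True)[0]

    loss = 0.0
    for k in range(1, n_test + 1):
        v = torch.sin(k * math.pi * x)                 # test function
        dv = k * math.pi * torch.cos(k * math.pi * x)  # its derivative
        lhs = (du * dv).mean()                         # ~ int u' v_k' dx  (domain length 1)
        rhs = (f(x) * v).mean()                        # ~ int f v_k dx
        loss = loss + (lhs - rhs) ** 2
    return loss

# usage sketch (boundary conditions would still need their own loss term)
net = torch.nn.Sequential(torch.nn.Linear(1, 32), torch.nn.Tanh(), torch.nn.Linear(32, 1))
x = torch.rand(1024, 1)
f = lambda x: (math.pi ** 2) * torch.sin(math.pi * x)  # manufactured source, exact solution u = sin(pi*x)
loss = weak_form_loss(net, x, f)
loss.backward()
```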