diff --git a/docs/user_guide/4.5Optimization.md b/docs/user_guide/4.5Optimization.md
index 598be31de..2b6ede4cc 100644
--- a/docs/user_guide/4.5Optimization.md
+++ b/docs/user_guide/4.5Optimization.md
@@ -2,7 +2,7 @@
 
 ## 1. Theory
 
-Automatic differentiation is fundamental to DMFF and aids in neural network optimization. During training, it computes the derivatives from output to input using backpropagation, optimizing parameters through gradient descent. With its efficiency in optimizing high-dimensional parameters, this technique isn't limited to neural networks but suits any framework following the "input parameters → model computation → output" sequence, such as molecular dynamics (MD) simulations. Hence, using automatic differentiation and referencing experimental or ab initio data, we can optimize force field parameters by computing the output's derivative with respect to input parameters.
+Automatic differentiation is a crucial component of DMFF and plays a significant role in optimizing neural networks. This technique computes the derivatives of the output with respect to the inputs using backpropagation, so parameter optimization can be carried out with gradient descent algorithms. Being efficient for high-dimensional parameters, the technique is not limited to training neural networks; it is also suitable for optimizing any physical model (e.g., the molecular force field in the case of DMFF). A typical optimization recipe needs two key ingredients: 1. gradient evaluation, which can be done easily using JAX; and 2. an optimizer that takes the gradients as input and updates the parameters following a certain optimization algorithm. To help users build optimization workflows, DMFF provides a wrapper API for the optimizers implemented in [Optax](https://github.com/google-deepmind/optax), which is introduced here.
 
 ## 2. Function module
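
To make the two-ingredient recipe above concrete, here is a minimal, self-contained sketch using JAX and Optax directly. It does not use DMFF's wrapper API; the loss function and the parameter pytree below are placeholders for illustration only, standing in for a real objective that would compare force-field predictions against reference data.

```python
import jax
import jax.numpy as jnp
import optax

# Placeholder loss: in a real workflow this would evaluate a DMFF potential
# and compare its predictions (energies, forces, ...) against reference data.
def loss_fn(params):
    return jnp.sum((params["epsilon"] - 0.5) ** 2) + jnp.sum((params["sigma"] - 3.0) ** 2)

# Ingredient 1: gradient evaluation via JAX automatic differentiation.
value_and_grad_fn = jax.value_and_grad(loss_fn)

# Ingredient 2: an Optax optimizer that maps gradients to parameter updates.
params = {"epsilon": jnp.array([0.2, 0.8]), "sigma": jnp.array([2.5, 3.5])}
optimizer = optax.adam(learning_rate=1e-2)
opt_state = optimizer.init(params)

for step in range(100):
    loss, grads = value_and_grad_fn(params)
    updates, opt_state = optimizer.update(grads, opt_state)
    params = optax.apply_updates(params, updates)
    if step % 20 == 0:
        print(f"step {step:3d}  loss = {loss:.6f}")
```

Optax keeps the optimizer state explicit: `init` builds it from the parameter pytree, `update` turns gradients into updates, and `optax.apply_updates` applies them to the parameters.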