[oneD] addressing review comments
wandadars authored and speth committed Aug 18, 2024
1 parent 2285373 commit 06261ed
Showing 5 changed files with 134 additions and 115 deletions.
89 changes: 75 additions & 14 deletions doc/sphinx/reference/onedim/discretization.md
one-sided form which uses a stencil that pulls from points to the right of a grid point.
For consistency, the radial momentum term ($2\rho V$) also uses the same points and
averages the value between the two nodes.

There are three 1D flame configurations: free flames, strained flames, and
burner-stabilized flames. Burner-stabilized flames have a constant mass flow rate that
is set by the user. Free flames solve for the mass flow rate that achieves a desired
temperature at a fixed point in the domain. Strained flames have a mass flow rate that
is set by the user and varies with axial position.


#### Strained flames

The discretized equation in residual form (all terms moved to one side) at the interior
points in the domain for strained flames is given below.

$$
F_{u,j} = \frac{\rho_j u_j - \rho_{j+1} u_{j+1}}{z_{j+1} - z_j} +
2 \left( \frac{\rho_j V_j + \rho_{j+1} V_{j+1}}{2} \right)
$$
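
As a concrete sketch of this discretization (a hypothetical helper, not Cantera's
actual implementation), the residual above can be evaluated over a whole grid with
NumPy:

```python
import numpy as np

def axial_momentum_residual(rho, u, V, z):
    """Residual F_{u,j} for a strained flame at points j = 0..N-2,
    using the one-sided stencil toward j+1 described above."""
    # F_{u,j} = (rho_j u_j - rho_{j+1} u_{j+1}) / (z_{j+1} - z_j)
    #           + 2 * average of rho*V between nodes j and j+1
    conv = (rho[:-1] * u[:-1] - rho[1:] * u[1:]) / (z[1:] - z[:-1])
    radial = rho[:-1] * V[:-1] + rho[1:] * V[1:]  # equals 2 * (average)
    return conv + radial
```

For a uniform flow with no radial velocity, every interior residual is zero, as
expected for a converged constant-mass-flux solution.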

#### Free flames

The discretized equation in residual form at the interior points in the domain for free
flames is given below.

$$
F_{u,j} =
\begin{cases}
-\dfrac{\rho_j u_j - \rho_{j-1} u_{j-1}}{z_j - z_{j-1}}, & \text{if } z_j > z_{\text{fixed}} \\[10pt]
T_j - T_{\text{fixed}}, & \text{if } z_j = z_{\text{fixed}} \\[10pt]
-\dfrac{\rho_{j+1} u_{j+1} - \rho_j u_j}{z_{j+1} - z_j}, & \text{if } z_j < z_{\text{fixed}}
\end{cases}
$$

Where $z_{\text{fixed}}$ is the axial position where the temperature is fixed. The value
of $T_{\text{fixed}}$ is the desired temperature at the fixed axial position.
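
The piecewise residual can be sketched as follows (a hypothetical helper, assuming
$z_{\text{fixed}}$ coincides exactly with one grid point; not Cantera's code):

```python
import numpy as np

def free_flame_u_residual(rho, u, T, z, z_fixed, T_fixed):
    """Piecewise residual for the mass-flow equation of a free flame."""
    F = np.zeros_like(u)
    for j in range(1, len(z) - 1):      # interior points only
        if z[j] > z_fixed:              # downstream: pull from the left
            F[j] = -(rho[j] * u[j] - rho[j-1] * u[j-1]) / (z[j] - z[j-1])
        elif z[j] == z_fixed:           # anchor the temperature here
            F[j] = T[j] - T_fixed
        else:                           # upstream: pull from the right
            F[j] = -(rho[j+1] * u[j+1] - rho[j] * u[j]) / (z[j+1] - z[j])
    return F
```

Replacing the momentum residual with $T_j - T_{\text{fixed}}$ at one point is what
lets the mass flow rate act as an eigenvalue of the problem.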

#### Unstrained flames

The discretized equation in residual form at the interior points in the domain for
unstrained flames is given below.

$$
F_{u,j} = \frac{\rho_j u_j - \rho_{j+1} u_{j+1}}{z_{j+1} - z_j}
$$

This is a zero-gradient condition for the mass flow, i.e., the mass flow rate is
constant.


### Boundary Conditions

The boundary conditions are expressed in residual form in the Cantera solver. This
allows complex expressions to be used at the boundaries. The solver attempts to
find the solution such that the residual at each boundary goes to zero.

#### Right Boundary

At the right boundary, there are a few default boundary conditions depending on the
type of flow.

##### Strained Flames

For strained flames, the default right boundary condition is a zero mass
flow rate.

At the right boundary ($j=N$):

$$
F_{u,j} = \rho_j u_j
$$

Expressing the boundary residual in this form drives the Newton root-finding
algorithm toward the value of $ \rho_j u_j $ that satisfies the imposed boundary
condition (for example, a user-defined mass flow rate at the boundary).

##### Unstrained Flames

For unstrained flames, the default right boundary condition is a zero axial velocity
gradient.

At the right boundary ($j=N$):

$$
F_{u,j} = \frac{\rho_j u_j - \rho_{j-1} u_{j-1}}{z_j - z_{j-1}}
$$
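
The two default right-boundary residuals can be sketched together (a hypothetical
helper for illustration, not Cantera's implementation):

```python
import numpy as np

def right_boundary_residual(rho, u, z, strained=True):
    """Default right-boundary residual at j = N for the mass-flow equation."""
    if strained:
        # zero mass flow rate: drive rho_N u_N to zero
        return rho[-1] * u[-1]
    # unstrained: zero axial mass-flux gradient
    return (rho[-1] * u[-1] - rho[-2] * u[-2]) / (z[-1] - z[-2])
```

Either way, the Newton solver simply drives this scalar residual to zero along with
the interior equations.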

#### Left Boundary

There is no imposed boundary condition at the left boundary because only one boundary
condition can be enforced for a first-order differential equation. As such, the
The upwinding formula for the radial velocity derivative term

$$
\left( \rho u \frac{\partial V}{\partial z} \right) \bigg|_{j} \approx
\rho_j u_j \frac{V_{\ell} - V_{\ell-1}}{z_{\ell} - z_{\ell-1}}
$$

Where the value of $\ell$ is determined by the sign of the axial velocity $u$. If the
$$
\frac{d}{dz}\left(\mu \frac{dV}{dz}\right)
$$

Let $ A = \mu \frac{dV}{dz} $ for simplicity. This will be called the
inner term. In this situation, the inner term is evaluated using a central difference
formula, but instead of using the `j+1` and `j-1` points, the derivative is estimated
using `j+1/2` and `j-1/2` (halfway between the grid points around point j).
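
A sketch of this half-point scheme at a single interior point (a hypothetical
helper, not Cantera's code; the half-point viscosity is taken as the arithmetic
mean of the neighboring nodes, which is an assumption for illustration):

```python
import numpy as np

def diffusion_term(mu, V, z, j):
    """d/dz(mu dV/dz) at interior point j using half-point fluxes."""
    # inner term A = mu dV/dz evaluated at j+1/2 and j-1/2
    mu_p = 0.5 * (mu[j] + mu[j+1])      # mu at j+1/2
    mu_m = 0.5 * (mu[j-1] + mu[j])      # mu at j-1/2
    A_p = mu_p * (V[j+1] - V[j]) / (z[j+1] - z[j])
    A_m = mu_m * (V[j] - V[j-1]) / (z[j] - z[j-1])
    # outer derivative over the cell of width (z_{j+1} - z_{j-1}) / 2
    return (A_p - A_m) / (0.5 * (z[j+1] - z[j-1]))
```

On a uniform grid this reproduces the second derivative of a quadratic profile
exactly, which is a quick sanity check of the stencil.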
The discretized equation in residual form (all terms moved to one side) at the interior
points in the domain is given below.

$$
\begin{align*}
F_{T,j} = & -\rho_j c_p u_j \left( \frac{T_{\ell} -
T_{\ell-1}}{z_{\ell} - z_{\ell-1}} \right) +
\frac{\lambda_{j+1/2} \frac{T_{j+1} - T_j}{z_{j+1} - z_j} -
\lambda_{j-1/2} \frac{T_j - T_{j-1}}{z_j - z_{j-1}}}{\frac{z_{j+1} - z_{j-1}}{2}} \\
& - \sum_k j_{k, j} \left( \frac{h_{k, \ell} -
h_{k, \ell-1}}{z_{\ell} - z_{\ell-1}} \right) - \sum_k h_{k, j} W_k \dot \omega_{k, j}
\end{align*}
$$

The enthalpy gradient term uses upwinding.
118 changes: 38 additions & 80 deletions doc/sphinx/reference/onedim/nonlinear-solver.md
matrix is used to determine the direction of the correction vector that will drive the
solution towards zero error.

$$
J(x) = \pxpy{F(x)}{x}
$$

For the vector-valued residual $F(x)$, the Jacobian matrix has elements that are
partial derivatives of the residuals with respect to each solution component at each
grid point.

$$
J_{i,j} = \frac{\partial F_i(x)}{\partial x_j}
$$

Moving across a row of the Jacobian matrix encodes how the value of the residual at
a grid point changes with respect to each solution component. The Jacobian is
approximated numerically in the 1D solver instead of having analytical
relations derived for each governing equation.
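
The idea of a finite-difference Jacobian can be sketched as follows (a generic
one-sided scheme for illustration; Cantera's actual perturbation strategy differs
in its details):

```python
import numpy as np

def numerical_jacobian(F, x, eps=1e-7):
    """Approximate J_{i,j} = dF_i/dx_j by one-sided finite differences."""
    F0 = F(x)
    J = np.zeros((F0.size, x.size))
    for j in range(x.size):
        dx = eps * max(abs(x[j]), 1.0)  # scale perturbation to the component
        xp = x.copy()
        xp[j] += dx
        J[:, j] = (F(xp) - F0) / dx     # column j: sensitivity to x_j
    return J
```

Each column costs one extra residual evaluation, which is why solvers try to reuse
the Jacobian across several Newton steps.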

### Damped Newton Method
The damped Newton method starts with an initial guess for the solution, $x^{(0)}$, and
performs a series of iterations until the solution converges with the help of a
damping parameter.

For each iteration, $k$, the solution is updated using the following relation:

$$
J(x^{(k)}) \left( x^{(k+1)} - x^{(k)} \right) = -\lambda^{(k)} F(x^{(k)})
parameter is selected to satisfy two conditions:
2. The norms of succeeding undamped steps decrease in magnitude.

The following image visually illustrates the damped Newton method. In it, the undamped
Newton step $ \Delta x^{(k)} $ is shown. The second vector is the undamped step
that would occur if the full initial step were to be taken. The shorter vector
representing the damped correction $ \lambda^{(k)} \Delta x^{(k)} $ is a trial
step. The method takes this trial step to get to a new solution at $ x^{(k+1)} $.
A new step vector is then computed using the trial solution, which gives
$ \Delta x^{(k+1)} $. The length of this vector is compared to the length of the
original undamped step $ \Delta x^{(k)} $. If the length of the new step is less
than the length of the original step, then the trial step is accepted, and the damping
value is accepted. The step is then taken, and the process is repeated until the
solution converges.
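
The accept/reject logic described above can be sketched as a single damped step (a
simplified illustration under the stated rules, not Cantera's implementation; the
trust-region check is omitted):

```python
import numpy as np

def damped_newton_step(F, J, x, lam_factor=np.sqrt(2.0), max_tries=10):
    """Take one damped Newton step, reducing lambda by sqrt(2) on failure."""
    Jx = J(x)
    dx = np.linalg.solve(Jx, -F(x))          # undamped step Delta x^(k)
    if np.linalg.norm(dx) < 1e-12:
        return x                             # already at the root
    lam = 1.0
    for _ in range(max_tries):
        x_trial = x + lam * dx
        # next undamped step, computed with the *same* (frozen) Jacobian
        dx_next = np.linalg.solve(Jx, -F(x_trial))
        if np.linalg.norm(dx_next) < np.linalg.norm(dx):
            return x_trial                   # damping value accepted
        lam /= lam_factor                    # failure: shrink the step
    raise RuntimeError("no acceptable damping coefficient found")
```

Repeating this step (recomputing the Jacobian each time, for simplicity) converges
on a scalar test problem such as $x^2 - 4 = 0$.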
Representation of the damped Newton method. Adapted from {cite:t}`kee2003`.
For a more mathematical representation of the damped Newton method, we consider:

$$
x^{(k+1)} = x^{(k)} + \lambda^{(k)} \Delta x^{(k)}
$$

A value of $\lambda$ needs to be picked that satisfies:

$$
|\Delta x^{(k+1)}| < |\Delta x^{(k)}|
$$

Where:

$$
\Delta x^{(k)} = J(x^{(k)})^{-1} F(x^{(k)})
$$

and,

$$
\Delta x^{(k+1)} = J(x^{(k)})^{-1} F(x^{(k+1)})
$$

During the search for the correct value of $\lambda$, the value starts at 1 and is
adjusted down to a value that keeps the solution within the trust region. If a trial
step fails, the damping factor is reduced by a constant factor; in Cantera this
factor is $\sqrt{2}$.

During the damped Newton method, the Jacobian is kept at the $x^{(k)}$ value. This
sometimes can cause issues with convergence if the Jacobian becomes out of date
(due to sensitivity to the solution). In Cantera, the Jacobian is updated if too many
attempted steps fail to take any damped step. The balance of how long to wait before
Expand All @@ -221,11 +173,11 @@ cost of failing to converge.

As was discussed earlier, the Newton method is an iterative method, and it's important
to assess when the method has reached a point where the iterations can be stopped. This
point is called convergence. Cantera's implementation uses a *weighted norm* of the
step vector to determine convergence, rather than a simple absolute norm. A damped
Newton step is considered to be converged when the weighted norm of the correction
vector is less than 1. During the solution, the process of finding and taking a
damped Newton step is repeated until this criterion is satisfied.

In a multivariate system, different variables may have vastly different magnitudes
convergence, making it especially useful in systems with diverse variables.
The weighted norm of the step vector $\Delta x$ is calculated as:

$$
\text{Weighted Norm} = \sum_{n,j} \left(\frac{\Delta x_{n,j}}{w_n}\right)^2
$$

where:
- $\Delta x_{n,j}$ is the Newton step vector component for the $n$-th solution variable
at the $j$-th grid point.
- $w_n$ is the error weight for the $n$-th solution component, given by:

$$
w_n = \epsilon_{r,n} \frac{\sum_j |x_{n,j}|}{J} + \epsilon_{a,n}
$$

Here:
- $\epsilon_{r,n}$ is the relative error tolerance for the $n$-th solution component.
- $\frac{\sum_j |x_{n,j}|}{J}$ is the average magnitude of the $n$-th solution
component over all grid points, and $J$ is the total number of grid points.
- $\epsilon_{a,n}$ is the absolute error tolerance for the $n$-th solution
component.
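
The error weights and weighted norm can be sketched directly from these definitions
(hypothetical helpers for illustration, not Cantera's API):

```python
import numpy as np

def weighted_norm(step, x, rtol, atol):
    """Squared weighted norm of a Newton step.

    Rows of `step` and `x` are solution components n; columns are grid
    points j. `rtol` and `atol` are per-component tolerance arrays."""
    n_points = x.shape[1]
    # w_n = rtol_n * (mean over grid of |x_{n,j}|) + atol_n
    w = rtol * np.abs(x).sum(axis=1) / n_points + atol
    return ((step / w[:, None]) ** 2).sum()

def converged(step, x, rtol, atol):
    """Convergence test: weighted norm of the correction below 1."""
    return weighted_norm(step, x, rtol, atol) < 1.0
```

For a temperature field near 300 K with a relative tolerance of $10^{-4}$, the weight
is about 0.03 K, so corrections much smaller than that count as converged.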

#### Interpretation of the Weighted Norm
solution components. It can be interpreted as follows:
The Newton iteration is considered converged when the weighted norm is less than 1:

$$
\sum_{n,j} \left(\frac{\Delta x_{n,j}}{w_n}\right)^2 < 1
$$

This criterion indicates that each component of the step vector $\Delta x_{n,j}$ (the
change in each component) is sufficiently small compared to the relative and absolute
error tolerances on that component.

## Transient Solution

referred to as differential equations. A general way to express this is by writing the
equation above in the following form.

$$
\alpha \frac{x_{n+1} - x_n}{\Delta t} = F_{ss}(x_{n+1})
$$

Where $\alpha$ is a diagonal matrix with diagonal values that are equal to 1 for
Using the expression for the residual equation defined earlier, the Jacobian matrix can
be written as:

$$
J(x_n, x_{n+1}^{(k)}) = -\frac{\alpha}{\Delta t} +
\frac{\partial F_{ss}(x_{n+1}^{(k)})}{\partial x_{n+1}}
$$

Where $\frac{\partial x_n}{\partial x_{n+1}}$ is zero, and
$\frac{\partial x_{n+1}}{\partial x_{n+1}}$ is the identity matrix. Recalling the
previous definition of the steady-state Jacobian matrix, we can write the transient
Jacobian matrix as:

$$
J(x_n, x_{n+1}^{(k)}) = -\frac{\alpha}{\Delta t} + J_{ss}(x_{n+1}^{(k)})
$$

The linearized equation is set to zero to obtain the update equation that drives the
residual toward zero.
Taking the full expression for the Jacobian and the residual equation, we get:

$$
\left(-\frac{\alpha}{\Delta t} + J_{ss}(x_{n+1}^{(k)})\right) \Delta x_{n+1}^{(k)} =
-\left( -\frac{\alpha}{\Delta t} (x_{n+1}^{(k)} - x_n) + F_{ss}(x_{n+1}^{(k)}) \right)
$$
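
One Newton iteration of this backward-Euler system can be sketched as follows (a
hypothetical helper under the sign conventions above, not Cantera's implementation;
`alpha` is the 0/1 diagonal marking differential versus algebraic components):

```python
import numpy as np

def transient_newton_step(F_ss, J_ss, x_prev, x_k, dt, alpha):
    """One Newton iteration for the implicit Euler time step."""
    a = alpha / dt
    # transient residual: -alpha/dt (x^(k) - x_n) + F_ss(x^(k))
    F = -a * (x_k - x_prev) + F_ss(x_k)
    # transient Jacobian: -alpha/dt + J_ss(x^(k))
    J = -np.diag(a) + J_ss(x_k)
    dx = np.linalg.solve(J, -F)
    return x_k + dx
```

For a linear test problem such as $dx/dt = -x$, a single Newton iteration recovers
the exact backward-Euler update $x_{n+1} = x_n / (1 + \Delta t)$.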

