In PyBaMM, the solver currently stops at all values in the `t_eval` vector. It seems that we use `t_eval` and set some `dt` for a few distinct purposes:

1. To enforce time-dependent discontinuities within a single step, like with a drive cycle
2. To set the data collection frequency (as in the `period` experiment kwarg)
3. To set a small `dt` to force better solver convergence (take smaller time steps)
These three reasons for setting a `dt` are all valid, but stopping the simulation at every time step can have a drastic impact on the adaptive time stepping and performance. For example, consider a full C/10 discharge with

a. `t_eval = [0, 36000]` (i.e., no stops)
b. `t_eval = np.arange(0, 36060, 60)` (the PyBaMM default)
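For concreteness, the two `t_eval` choices compare as follows (a minimal NumPy sketch of just the arrays, not a full PyBaMM solve):

```python
import numpy as np

# a. a start/stop pair only: the adaptive time stepper is unconstrained
t_eval_a = np.array([0, 36000])  # seconds, a full C/10 discharge

# b. the PyBaMM default: a forced solver stop every 60 s
t_eval_b = np.arange(0, 36060, 60)

print(len(t_eval_a), len(t_eval_b))  # 2 601
```

Option b. forces 601 solver stops over the discharge, even though the adaptive algorithm only needs about 165 steps on its own.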
If we compare the solver stats:

| Statistic | a. | b. |
| --- | --- | --- |
| Number of steps | 165 | 715 |
| Number of residual calls | 288 | 823 |
| Number of linear solver setups | 28 | 91 |
| Number of nonlinear iterations | 286 | 821 |
| DAE integration time | 25.5 ms | 97.1 ms |
Even though we solve the same system, the dense `t_eval` in b. is nearly 4x slower! To address these issues, I propose the following changes, which align the time-stepping options with Julia's differential equation solvers (see Output Control):
1. (Breaking) By default, save every `t` and `y` determined by IDA's adaptive time-stepping algorithm. This eliminates issues like the one above with the C/10 discharge. We can accurately post-interpolate the solution onto specific times with IDA's Hermite interpolator. This is a huge benefit for parameter estimation because we always obtain the full solution accuracy regardless of the data collection frequency.
2. (Non-breaking) Change the description of `t_eval` to match Julia's `tstops`: "Denotes extra times that the time-stepping algorithm must step to. This should be used to help the solver deal with discontinuities and singularities, since stepping exactly at the time of the discontinuity will improve accuracy." With this option, drive cycles in 1. still work.
3. (Non-breaking) Add a solver option that matches Julia's `saveat`: "Denotes specific times to save the solution at, during the solving phase. The solver will save at each of the timepoints in this array in the most efficient manner available to the solver." This addresses the data collection frequency in 2. without negatively affecting performance, since it interpolates the solution with `IDAGetDky()`.
4. (Non-breaking) Discourage modifying `t_eval` to work around performance issues and instead encourage tuning the appropriate solver options (`rtol`, `atol`, `dt_max`, etc.), which addresses 3.
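To illustrate the post-interpolation in point 1: IDA-style dense output can be mimicked with a cubic Hermite interpolant built from the states and time derivatives the solver already has at its adaptive steps. This is a pure-NumPy sketch of the idea, not the actual `IDAGetDky()` machinery:

```python
import numpy as np

def hermite_interpolate(t0, t1, y0, y1, yp0, yp1, t):
    """Cubic Hermite interpolant on [t0, t1] from endpoint states (y0, y1)
    and endpoint time derivatives (yp0, yp1). Exact for cubic polynomials."""
    h = t1 - t0
    s = (t - t0) / h  # normalized position in the step
    # standard cubic Hermite basis functions
    h00 = 2 * s**3 - 3 * s**2 + 1
    h10 = s**3 - 2 * s**2 + s
    h01 = -2 * s**3 + 3 * s**2
    h11 = s**3 - s**2
    return h00 * y0 + h10 * h * yp0 + h01 * y1 + h11 * h * yp1

# y(t) = t**3 is reproduced exactly from endpoint data alone:
y = hermite_interpolate(0.0, 1.0, 0.0, 1.0, 0.0, 3.0, 0.5)
print(y)  # 0.125
```

Because the interpolant matches the solver's own local accuracy, saving only the adaptive steps loses nothing: any requested output time can be filled in afterwards.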
> By default, save every `t` and `y` determined by IDA's adaptive time stepping algorithm. This eliminates issues like the one above with the C/10 discharge.

So this `y` would be the state matrix stored in the `pybamm.Solution` class?

> We can accurately post-interpolate the solution onto specific times with IDA's Hermite interpolator

Are you thinking this would happen in the `pybamm.ProcessedVariable` class? You'd need to expose the SUNDIALS interpolator via pybind11 then.
> So this `y` would be the state matrix stored in the `pybamm.Solution` class?

Yes.

> Are you thinking this would happen in the `pybamm.ProcessedVariable` class? You'd need to expose the SUNDIALS interpolator via pybind11 then.

In short, yes. I'm still looking into the details, but this is a bit complicated because we may need access to `ida_mem` in Python if we cannot extract IDA's interpolant. Currently, we free `ida_mem` at the end of each solve, so we cannot do any post-interpolation with `IDAGetDky(ida_mem, ...)`. My thinking is to add a kwarg that lets us retain `ida_mem` without freeing it and also (for general convenience) add Python bindings to all the `IDAGet*()`, `IDASet*()`, etc. functions.
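The "retain `ida_mem`" idea can be sketched as a pattern in plain Python (the names `retain_memory`, `_mem`, and `interpolate` are illustrative only, not the real PyBaMM or SUNDIALS API): a solver object keeps its internal memory alive after `solve()` so post-hoc interpolation queries remain possible.

```python
# Hypothetical sketch of the "retain solver memory for post-interpolation"
# pattern; retain_memory, _mem, and interpolate() are illustrative names.
class Solver:
    def __init__(self, retain_memory=False):
        self.retain_memory = retain_memory
        self._mem = None  # stands in for ida_mem

    def solve(self, ts):
        # pretend the integrator stores its step history in _mem
        self._mem = {"t": list(ts), "y": [t**2 for t in ts]}
        result = self._mem["y"][-1]
        if not self.retain_memory:
            self._mem = None  # current behavior: memory freed after solve
        return result

    def interpolate(self, t):
        # stands in for a post-solve IDAGetDky(ida_mem, ...) call
        if self._mem is None:
            raise RuntimeError("solver memory was freed; pass retain_memory=True")
        # trivial nearest-point lookup just to make the sketch runnable
        i = min(range(len(self._mem["t"])),
                key=lambda j: abs(self._mem["t"][j] - t))
        return self._mem["y"][i]

s = Solver(retain_memory=True)
s.solve([0.0, 1.0, 2.0])
print(s.interpolate(1.9))  # nearest stored point is t=2.0 -> 4.0
```

With `retain_memory=False` (the default, matching today's free-on-exit behavior), the same `interpolate` call raises, which is exactly the limitation described above.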
Time stepping proposal by @MarcBerliner (#3910 (comment))